Artificial intelligence algorithms are computer programs that can recognize patterns in enormously complex data. AI is used today to improve how virtual assistants like Alexa and Siri recognize and respond to human speech, and to power the facial recognition that verifies identities at new Nexus kiosks in airports. AI is also being used to improve health care, and the use of such advanced technology raises ethical issues.
From an innovation perspective, Medcan stays vigilant for advances in evidence-based health care that could improve our clients’ health outcomes. At the same time, we are stewards of our clients’ data and privacy, and we apply rigorous ethical and moral tests in selecting and adopting new solutions. Productivity is not the test; health outcomes and our obligations to clients are.
One way AI is being used in health care is to improve the diagnosis of illness. An algorithm takes a clinician through a series of questions about the patient’s condition, then suggests a specific diagnosis. Early results are promising. One of the new AI-based diagnostic systems approved for clinical use by the U.S. Food and Drug Administration can autonomously detect and diagnose diabetic retinopathy, a disorder that affects the vision of people with diabetes.
Other examples include an image-recognition system that detected cancerous skin lesions more accurately than human dermatologists, and IBM’s Watson computer system, which diagnosed a patient with a rare form of leukemia in just 10 minutes by comparing the patient’s case against more than 20 million oncology records held by the University of Tokyo.
AI-based diagnostic systems seem likely to improve human health. As the technology advances, these systems will meet or exceed the performance of human experts in diagnosing disease. They also seem likely to be far more accessible than human diagnosticians. That said, AI-based diagnostic systems will be most effective when paired with expertly trained human clinicians, who can use AI for diagnosis where suitable while also providing emotional and psychological support.
AI in health care comes with risks, however. To build the requisite algorithms, healthcare providers need to collect a vast range of personal health and genetic data. The scope for misuse of this data is broad, especially since an individual’s genetic and health characteristics are often immutable. The implications extend to privacy, human rights, freedom from discrimination, and fair criminal procedure.
Such data, for example, could conceivably be used by an insurance company to deny a person health coverage based on genetic factors that are beyond their control. In a recent case from California, police were able to identify a serial killer because his distant relatives had submitted their DNA samples to a family ancestry website.
International human rights law provides a universally accepted framework for evaluating the impacts of AI on individuals and society. Discussions of AI ethics should therefore expand to consider the human-rights implications of these technologies, and businesses must ensure the technology is deployed in a rights-respecting manner. As the use of AI gains momentum in health care, protecting human privacy ought to be a priority. Certainly, we take such issues seriously at Medcan.
Our fundamental right to life seems likely to be positively impacted by the introduction of AI diagnostic systems, which may improve diagnoses, improve access to quality health care, and reduce the cost burden on public healthcare systems. But as with most things, the benefits come with costs. Businesses can minimize the costs by deploying AI in an ethical manner that respects human rights.
Ashim Khemani is the president of Medcan. He is the author of Canadian Group Insurance Benefits—A Practitioner’s Guide and Reference Manual, and the co-author of Global Health Care Systems: A Perspective on Issues, Practices and Trends Among OECD Nations.