Ethics of artificial intelligence in medicine and healthcare: a suggested framework
This month, I am invited to elucidate my perspective on a relatively convoluted topic: the ethics of artificial intelligence in healthcare. What is especially disconcerting is that the recent trajectory of advances in artificial intelligence is exponential, while our discussions of ethics (and related issues of regulation and law) are lagging behind.
The ethical issues of artificial intelligence in healthcare can be particularly daunting, as the topic is essentially the convergence of two complex domains. One strategy that can clarify this conundrum is to first deconstruct the entire ethical imbroglio into its constituent parts: patients, clinicians, and machines (really, data scientists and artificial intelligence). With the entry of the machine into the former patient-clinician dyad, we now have three separate dyads instead of one:
- Physician and Patient: This is the original relationship, an established albeit sometimes paternalistic one until now; it is now altered by the emergence of artificial intelligence as an additional partner. The ethical issues are not only how this relationship will change with the availability of artificial intelligence but also how much of artificial intelligence's role in the decision-making process physicians will need to disclose.
- Physician and Machine: The ethics of this new dyad depend partly on where the relationship falls on the spectrum of clinician-to-machine integration and collaboration, and on how much of this should factor into determinations of quality of care. In addition, there is the ethics of future education and training using artificial intelligence, and of how to train future generations of clinicians to maintain empathy and judgment.
- Patient and Machine: This dyad is vulnerable to a myriad of bias issues. The data science workflow will need to remain vigilant about racial, sex, and age equity in all three aspects: data (acquisition and curation), algorithm (the so-called algorithmic bias), and deployment with follow-up and feedback. Of note, the IEEE announced its P7003 Standards Project Addressing Algorithmic Bias Considerations, which covers methodologies that will help certify the elimination of negative bias in the creation of algorithms. In addition, there are of course issues of privacy and confidentiality in data and its utilization, as well as questions of data as a social good, with its opt-in or opt-out perspectives.
The final delineation of ethical issues surrounds the now complex triadic relationship among all three stakeholders. This cannot simply be the integration of the three dyadic relationships above. The triadic relationship is far more nuanced: for example, in the event of a poor outcome, which of the two (clinician or machine) will be held accountable? What if the patient insists on the artificial intelligence tool (over the clinician) making the final decision? And conversely, can a patient refuse to have an artificial intelligence tool help render a decision?
We also need to balance all the ethical concerns of implementing artificial intelligence in clinical care and healthcare with the counterargument: What are the ethical issues of not implementing artificial intelligence? Will we eventually be accountable for not deploying an artificial intelligence solution to a problem (such as the prevention of medical errors), and is this failure equivalent to not wearing a seat belt while driving?
Finally, this complex dimension of ethics in healthcare demands the collective wisdom of all the stakeholders: clinicians, ethicists, AI experts, academicians, lawyers, policymakers, and most of all, patients and families. We will need to ensure that explainability is available to all stakeholders, including patients and families, not merely clinicians.