What ethical limitations are associated with artificial intelligence?
Algorithm development presents challenges
Artificial intelligence is a broad umbrella covering many tasks, such as data collection, analysis, reasoning, decision-making and learning. It offers heretofore unimaginable opportunities to advance orthopedic care.
From one perspective, artificial intelligence (AI) is a tool that can be no more ethical or unethical than a mallet or nail. And yet, the misapplication of even simple tools can easily wreak havoc on susceptible structures.
One significant challenge with AI algorithms is that we don’t always know exactly how the algorithm arrived at its prediction, decision or action. This “interpretability problem” remains an area of active investigation and seems especially vexing in models of unstructured data (eg, images, video, text), where AI can significantly outperform more traditional yet understandable statistical methods.
Another challenge with AI concerns the data and objectives used in developing an algorithm. Not enough data? Biased data? Mistaking mathematical stereotypes built on population-level data for individual characteristics and preferences? A singular focus on a misguided objective? A lack of awareness of downstream ramifications? These are just some of the issues we continue to learn about and grapple with, and there is no doubt there are more challenges yet to come.
Despite the best of intentions, and even in a framework that emphasizes fairness and equality, we can let our hubris about AI overshadow our ever-evolving humility about how best to care for our patients. Caring for one another isn’t artificial, and caring itself doesn’t require much intelligence.
Patrick J. Tighe, MD, MS, is an associate professor in the departments of anesthesiology, orthopedics and information systems/operations management, and is co-director of the Perioperative Cognitive Anesthesia Network at University of Florida in Gainesville, Florida.
AI is expected to have a major impact on medical research and practice. Most saliently, AI promises significant advances in diagnosis, prognosis and drug development. But the use of AI also raises complex ethical challenges. Much of today’s AI is the result of machine learning, in which AI is trained on huge data sets. Since these data typically involve sensitive medical information, one challenge is to develop measures for protecting patients’ privacy.

Another major challenge is to prevent biases. For example, if women or ethnic minorities are underrepresented in an AI’s training data, as is often the case, the AI is likely to make systematic mistakes when it is applied to members of these groups. Because one algorithm can be applied to arbitrarily many cases, any bias it has can potentially cause immense harm. The problem of bias in AI is aggravated by the fact that the decision-making processes of modern AI are typically difficult to explain: If one doesn’t understand how an AI arrives at specific decisions, it is difficult to detect and, hence, to eliminate bias.
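The underrepresentation effect described above can be made concrete with a minimal sketch. This toy example (not from the author; all group names, sample sizes and boundaries are invented for illustration) trains a single-threshold classifier on data where a minority group follows a different relationship between feature and label. Because the minority contributes few training errors, the learned threshold fits the majority, and the minority suffers systematically higher error:

```python
import random

random.seed(0)

def make_group(n, boundary):
    # Feature x ~ Uniform(-2, 3); the true label is whether x exceeds
    # this group's own decision boundary.
    xs = [random.uniform(-2, 3) for _ in range(n)]
    return [(x, x > boundary) for x in xs]

# Majority group: 950 samples, true boundary at 0.
# Minority group: only 50 samples, and a different true boundary at 1.
majority = make_group(950, 0.0)
minority = make_group(50, 1.0)
train = majority + minority

def best_threshold(samples):
    # "Learn" a single threshold by minimizing overall training error,
    # searching over the observed feature values.
    candidates = [x for x, _ in samples]
    return min(candidates, key=lambda t: sum((x > t) != y for x, y in samples))

def error_rate(samples, t):
    return sum((x > t) != y for x, y in samples) / len(samples)

t = best_threshold(train)
err_majority = error_rate(majority, t)
err_minority = error_rate(minority, t)
print(f"threshold={t:.2f}  majority error={err_majority:.3f}  minority error={err_minority:.3f}")
```

The learned threshold lands near the majority's boundary (around 0), so majority error is close to zero while the minority, whose true boundary sits at 1, is misclassified far more often. Minimizing aggregate error is exactly what standard training does, which is why such disparities arise without any deliberate intent.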
The problem of developing “explainable AI” is a major topic of current research. Since more powerful AIs will likely be more complex and thus harder to understand, it might be difficult to find a permanent solution to this problem. Nevertheless, human judgment is also flawed and often biased. Overall, it can prove highly beneficial if human judgment is aided, or sometimes even replaced, by AI.
Jens Kipper, PhD, is an assistant professor in philosophy at University of Rochester in Rochester, New York.