Artificial intelligence has become the backbone of data analytics. There are few industries, including the sick care systems industry, where data scientists are not using it to solve clinical, patient and doctor experience, and business problems. The applications are seemingly endless, to the point where medicine itself is turning into a data business that takes care of patients, rather than vice versa.
As AI gains widespread adoption and penetration in sick care systems, it is creating legal, regulatory, social, and economic challenges that regulators and policy makers will have to address. For example:
- Jobs and workforce development shifts
- How to educate and train the medical workforce in digital health and AI in particular
- Security and confidentiality of massive amounts of data
- Data overload and fatigue
- Reimbursement changes for electronic services, such as when an avatar or bot responds to a patient request
- How to pay for services that require an integration of man and machine
- Liability issues when “the computer made me do it”
- FDA regulatory standards and compliance issues for AI and future applications. For example, when is AI a medical device?
- The economic consequences and costs when hospital systems consider AI applications, whether integrating them into legacy systems or replacing those systems
- The technical and systems challenges of updating AI with new data sets
- Trust in technology
- Transparency about how a particular machine was trained to make a certain decision
How can we forecast, prevent, and (when necessary) mitigate the harmful effects of malicious uses of AI? A landmark review of the role of artificial intelligence (AI) in the future of global health, published in The Lancet, calls on the global health community to establish guidelines for the development and deployment of new technologies and to develop a human-centered research agenda to facilitate equitable and ethical use of AI.
Human-human risk homeostasis and automation bias are two potential risks of AI in medicine, and the use of bots raises several others.
Innovators are leading indicators, and policy makers and regulators are laggards. However, those who push forward while ignoring the regulatory, IP, and reimbursement demands of a highly regulated environment will crash and burn. Asking for forgiveness usually does not work, and unless we include policy makers as research and development collaborators, along with payers, practitioners, patients, and product makers, dissemination will crash on the shoals of regulatory and reimbursement sclerosis. As much as entrepreneurs might dislike it, getting permission is the better long-term strategy. Rules create or destroy the innovative ecosystems that drive the business models that support innovation. The sooner we educate policy-making partners and lobby for change, the sooner patients will benefit from the deployment of AI innovation.