The legal, regulatory, behavioral and economic challenges of AI

Artificial intelligence has become the backbone of data analytics. There are few industries, including the sick care systems industry, where data scientists are not using it to solve clinical, patient and doctor experience, and business problems. The applications are seemingly endless, to the point where medicine itself is turning into a data business that takes care of patients, rather than vice versa.

Elon Musk is worried.

As AI gains widespread adoption and penetration in sick care systems, it is creating legal, regulatory, social and economic challenges that regulators and policy makers will have to address. For example:

  1. Jobs and workforce development shifts
  2. How to educate and train the medical workforce in digital health and AI in particular
  3. Security and confidentiality of massive amounts of data
  4. Data overload and fatigue
  5. Reimbursement changes for electronic services, like when an avatar or bot responds to a patient request. Here’s an example of what I mean.
  6. How to pay for services that require an integration of man and machine
  7. Liability issues when “the computer made me do it”
  8. FDA regulatory standards and compliance issues for AI and future applications. For example, when is AI a medical device?
  9. The economic consequences and costs when hospital systems consider AI applications, integration into legacy systems, or replacing them
  10. The technical and systems challenges of updating AI with new data sets
  11. Trust in technology
  12. Transparency about how a particular machine was trained to make a certain decision.

Here are the top ten legal considerations for use and/or development of artificial intelligence in health care.

How can we forecast, prevent, and (when necessary) mitigate the harmful effects of malicious uses of AI? A landmark review of the role of artificial intelligence (AI) in the future of global health published in The Lancet calls on the global health community to establish guidelines for the development and deployment of new technologies and to develop a human-centered research agenda to facilitate equitable and ethical use of AI.

Human-human risk homeostasis and automation bias are two potential risks of AI in medicine. Here are several others concerning the use of bots.

Innovators are leading indicators, and policy makers and regulators are laggards. However, those who push forward while ignoring the regulatory, IP and reimbursement demands of a highly regulated environment will crash and burn. Asking for forgiveness usually does not work, and until and unless we include policy makers as research and development collaborators, along with payers, practitioners, patients and product makers, dissemination will crash on the shoals of regulatory and reimbursement sclerosis. As much as entrepreneurs might dislike it, getting permission is a better long-term strategy. Rules create or destroy the innovative ecosystems that drive business models supporting innovation. The sooner we educate policy-making partners and lobby for change, the sooner patients will benefit from the deployment of AI innovation.
