“Science is knowledge which we understand so well that we can teach it to a computer. Everything else is art.”
Donald Knuth, American computer scientist
In past newsletters, we discussed the emergence of digital twins and threads, as well as federated and swarm learning, as AI-related technologies with the potential to make significant contributions to clinical medicine and healthcare. This week we discuss some of the advances in deep learning that will render AI more relevant and impactful in clinical medicine and healthcare.
Deep learning has made a significant impact in clinical medicine, particularly in the form of convolutional neural networks (CNNs) for medical image interpretation and recurrent neural networks (RNNs) for time-series data. A major shortcoming, however, is data hunger: supervised learning requires large numbers of labelled examples, and reinforcement learning requires many training interactions.
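To make the CNN idea concrete, here is a minimal sketch of its core operation: sliding a small learned filter (kernel) across an input and recording how strongly each neighborhood matches it. Real medical-imaging CNNs do the same in two or three dimensions with many learned filters; the signal and kernel below are illustrative, not from any clinical model.

```python
def convolve_1d(signal, kernel):
    """Valid (no-padding) 1D convolution: slide the kernel along the
    signal and sum the element-wise products at each position."""
    k = len(kernel)
    return [
        sum(signal[i + j] * kernel[j] for j in range(k))
        for i in range(len(signal) - k + 1)
    ]

# A simple edge-detecting kernel responds where the signal jumps --
# the kind of local feature a CNN learns automatically from images.
signal = [0, 0, 0, 1, 1, 1]
edge_kernel = [-1, 1]
print(convolve_1d(signal, edge_kernel))  # peaks at the step from 0 to 1
```

Stacking many such filters, with nonlinearities between layers, is what lets a CNN build up from edges to textures to anatomically meaningful structures.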
In addition, AI in clinical medicine and healthcare has thus far been deployed only for very narrow, specialized tasks, not in broader real-world contexts. Even AI models for medical image interpretation have proven relatively fragile and therefore generalize poorly in the real world.
One strategy to mitigate these limitations, proposed by neuroscientist Gary Marcus, is a hybrid artificial intelligence approach: combining symbolic systems with neural networks. Another strategy, favored by the deep learning group headed by Geoff Hinton, is to improve current neural network architectures to include elements of human-like intelligence such as reasoning, common sense, and causal inference.
Both schools have their respective followings, but they share a common aim: incorporating more human cognitive elements into neural networks while decreasing the burden of human-labelled data.
There are already significant advances in deep learning along these lines. One is the advent of a new type of deep learning model built around attention, the transformer, which can learn without abundant labelled data. Transformers already power natural language processing tools such as GPT-3 and Meena.
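The attention mechanism at the heart of the transformer can be sketched in a few lines: each query scores itself against every key, the scores become weights via a softmax, and the output is the correspondingly weighted mix of the values. This is a bare scaled dot-product attention sketch with toy vectors, not a full transformer layer (which adds learned projections, multiple heads, and feed-forward blocks).

```python
import math

def softmax(xs):
    """Turn raw scores into weights that are positive and sum to 1."""
    m = max(xs)                      # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention: each query attends to all keys,
    and its output is a softmax-weighted average of the values."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out

# The query is closest to the first key, so the output leans toward
# the first value vector (all numbers here are illustrative).
result = attention([[1.0, 0.0]],
                   [[1.0, 0.0], [0.0, 1.0]],
                   [[10.0, 0.0], [0.0, 10.0]])
```

Because the weights are computed from the data itself rather than from labels, the same mechanism supports self-supervised pretraining on large unlabelled corpora, which is how models like GPT-3 sidestep the labelled-data bottleneck.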
Another advance is the concept of “system 2 deep learning” (the term borrowed from Daniel Kahneman’s book Thinking, Fast and Slow), which aims to address the major shortcomings of neural networks: causal inference, transfer learning, symbol manipulation, and generalization.
In addition, Geoff Hinton’s capsule networks are inspired by the architecture of the brain and encode the hierarchical relationships of features to one another in order to capture three-dimensional geometric relationships. Drawing similar inspiration from the human brain, the recursive cortical network is a generative model that can learn from relatively small amounts of data.
Finally, an exciting area of development in deep learning is multitask learning (MTL). By sharing representations across related tasks, an MTL model can generalize better in the real world.
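A common way to realize MTL is a "shared trunk" architecture: one set of layers computes a representation used by all tasks, and small task-specific heads sit on top. The sketch below shows only the forward pass; the task names, weights, and inputs are made up for illustration and do not come from any clinical system.

```python
def linear(x, weights):
    """Apply a weight matrix (list of rows) to an input vector."""
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in weights]

def relu(x):
    """Standard rectified-linear nonlinearity."""
    return [max(0.0, v) for v in x]

def multitask_forward(x, shared_w, head_ws):
    """One shared representation feeds every task-specific head, so
    features learned for one task are reused by the others."""
    shared = relu(linear(x, shared_w))          # shared trunk
    return {task: linear(shared, w) for task, w in head_ws.items()}

# Hypothetical tasks: a diagnosis output and a risk-score output
# computed from the same shared features.
x = [1.0, 2.0]
shared_w = [[0.5, -0.2], [0.1, 0.3]]
heads = {"diagnosis": [[1.0, 0.0]], "risk_score": [[0.0, 1.0]]}
outputs = multitask_forward(x, shared_w, heads)
```

During training, the losses of all heads are combined, so gradients from each task shape the shared trunk; this weight sharing acts as a regularizer and is one reason MTL models tend to generalize better than single-task models trained on the same data.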
The aforementioned advances can all contribute to a better deep learning portfolio for clinical medicine and healthcare.