Making AI Part of Healthcare
January 17, 2020
- Lygeia Ricciardi, Carium Chief Transformation Officer
This article is lightly adapted from the original, published on Medium as “Can Artificial Intelligence Make Us More Human?”
At CES 2020, I moderated a panel with three artificial intelligence (AI) experts from radically different companies that are helping to reshape health and healthcare today: Dr. Ang Sun, VP of Enterprise Data Science and Cognitive AI at Humana; Jeroen Tas, Chief Innovation & Strategy Officer at Philips; and Scott Kim, CEO of Neofect USA, the self-described “Netflix for rehabilitation,” which offers immersive gaming options that guide the body and engage the imagination as a means toward healing.
What’s Most Compelling About AI
The panelists saw AI as a “companion to interpret data” that can increase efficiency and accuracy relative to humans. Scott highlighted AI’s power to personalize interactions with patients. For example, through playing a game similar to virtual hopscotch, Neofect’s software learns a person’s unique strengths and weaknesses and is able to tailor the game in real time to strengthen the player’s balance.
Meanwhile, Ang explained that Humana uses AI to enrich every touchpoint of a patient’s health journey. With multiple data points on its 13 million members, Humana is able to analyze population health, which enables predictive analytics in addition to more personalized interactions with a particular individual.
The panel agreed that the sheer growth in the number of data points potentially available for analysis is staggering and exciting. As a society, we are still learning how best to make use of the data points we already have, even as their volume continues to grow exponentially, opening the door to possibilities we haven’t even thought of.
Downsides and Risks of AI
One of the challenges inherent in data analysis is the quality of the underlying data. Inaccurate data undermines any interpretation. Building on these data quality concerns, Jeroen from Philips, which incorporates AI into most of its products, highlighted the importance of data context. For example, an unusually high heart rate could signal a problem if a person is at rest, but if he or she is running, it makes perfect sense.
Another issue is bias; while we tend to assume that quantitative data is more accurate and precise than qualitative information, the collection and/or interpretation of data can be biased in ways that are only amplified if they are baked into AI algorithms. For example, a drug trial based disproportionately on tests run on white men may be dangerous or inappropriate for members of other racial or gender groups.
And, as the panel pointed out, data is most useful when it can be widely shared and combined, which, in healthcare, it often isn’t. The obstacles include both the willingness of data holders to share (“data blocking” by healthcare providers and EHR vendors is an issue so pervasive that Congress required the Department of Health and Human Services to address it) and semantic interoperability: ensuring that data formats and meanings are compatible across systems.
Another concern related to AI is the cost in energy required to store and process data. As data volumes continue to grow, this is becoming a significant environmental concern.
Finally, no discussion of data risks would be complete without mentioning data privacy. In addition to putting privacy and security safeguards in place, Scott pointed out how important it is to explain to people — especially patients — why you are collecting their data and how it will be used.
Fortunately, several companies I spoke with are adopting the “best practice” of giving people immediate value in exchange for their data — a reward that might be monetary or social, or even just information that helps them better understand their own bodies.
The Blurring Line Between Human and Machine
Investor Vinod Khosla is credited with the claim, made back in 2012, that 80% of what doctors do will be replaced by technology. The statement, which came up in our panel, appears to have confused many people.
It doesn’t imply, for instance, that most doctors will be replaced by computers. Rather, it implies that many of the repetitive, predictable tasks doctors and other humans perform can be outsourced to machines, which can do them more effectively.
Humans should consider technology, especially more sophisticated AI, not as a threat, but as a partner on which we can offload some of our less-rewarding tasks.
According to Jeroen, at the core of our humanity is empathy. If AI can free us up — especially in healthcare — to be more empathetic, we all win.
Humans are also needed to drive and govern AI — to use our judgment to decide which problems to address and how to interpret the related data. And, as Ang said, there is an element of enthusiasm and joy that is uniquely human.