How Will AI Impact the Future of Healthcare?
Artificial intelligence promises to be a disruptive force. In a 2023 report, experts suggested that generative AI has the potential to increase global GDP by 7%, or nearly $7 trillion, over a 10-year period.[1]
In the healthcare industry, AI’s potential benefits include everything from help with administrative tasks to predicting health risks and enabling preventative care. But what will it take to make a possible AI-fueled sea change a reality?
There has been a lot of hype around AI over the past year, but we’re clearly in the initial stages of this disruption. To get some perspective on where we are now, I recently moderated a panel discussion with three experts on this subject:
- Daniel Barsky, Partner, Holland & Knight, a global law firm with the nation’s largest healthcare practice.
- Edmund Jackson, Ph.D., Co-founder and CEO, Unity AI, a start-up that applies AI technologies to help hospitals optimize patient flows.
- Will Smith, Partner, McChrystal Group, a global management consulting firm.
We discussed everything from the uncertain regulatory environment to whether all the hype around AI is justified. Following is a summary of our conversation.
Risks and limitations
The generative AI[2] tools currently grabbing all the headlines, such as ChatGPT and Google Gemini, are built on large language models[3], or LLMs. Unlike traditional AI, these tools are not trained on specific databases to follow specific rules. Instead, they synthesize and reproduce information from vast amounts of existing content. That means that while generative AI can create new content, the results these tools provide may be based on information that’s outdated or of low quality. It’s also why they’ve been known to “hallucinate,” or make up information and present it as factual. For these reasons, the panelists agreed that LLM-based AI tools are not currently suitable for tasks like patient care.
“If you unleash a generative AI model, your liability is going to be insane,” Barsky said. “These are tools that are best used as helpers in their current state.”
Ultimately, the usefulness of AI in the healthcare space will come down to input from the industry itself. “AI is about the data you put in, and the healthcare industry has a lot of data,” Barsky said. “When you put these tools in the hands of healthcare professionals, they can come up with hundreds of ideas of where this technology could be widely and usefully deployed. Everything from helping interpret medical imaging all the way down to creating a more customized patient experience. Maybe AI can generate a nice welcome letter, or summary notes to take home with information relevant to the patient’s specific needs.”
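To make the “helper, not a replacement” idea concrete, the sketch below shows one way a generative AI draft, such as the welcome letter Barsky describes, might be kept behind a human-review gate before anything reaches a patient. The panel did not prescribe an implementation; the generate_draft stub, the Draft structure, and the approval step here are illustrative assumptions, and a real deployment would call an actual LLM service and follow an organization’s own clinical review process.

```python
from dataclasses import dataclass

@dataclass
class Draft:
    patient_name: str
    text: str
    approved: bool = False

def generate_draft(patient_name: str, visit_summary: str) -> Draft:
    # Stand-in for a call to whatever LLM service an organization actually uses;
    # a real implementation would send a prompt and return the model's text.
    text = (
        f"Dear {patient_name},\n"
        f"Thank you for your visit. Key points from today: {visit_summary}\n"
        "Please contact our office with any questions."
    )
    return Draft(patient_name=patient_name, text=text)

def clinician_review(draft: Draft, reviewer_approves: bool) -> Draft:
    # Human-in-the-loop gate: a clinician reads the generated text and decides
    # whether it is accurate and appropriate before it can be sent.
    draft.approved = reviewer_approves
    return draft

def send_to_patient(draft: Draft) -> None:
    # The generative output is only a helper; unapproved drafts never go out.
    if not draft.approved:
        raise ValueError("Draft has not been approved by a clinician.")
    print(f"Sending approved letter to {draft.patient_name}:\n{draft.text}")

if __name__ == "__main__":
    letter = generate_draft("Alex", "blood pressure stable; follow up in three months")
    letter = clinician_review(letter, reviewer_approves=True)  # simulated clinician sign-off
    send_to_patient(letter)
```

The design choice mirrors Barsky’s point: the model drafts, but accuracy checks and accountability stay with a person.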
Getting started with AI
When it comes to implementing AI in a healthcare setting, Smith said a receptive workforce is the key to a successful rollout. “The number one thing we’ve observed across this industry is that workforce readiness is the thing senior leaders don’t think about early enough,” Smith said. “Are people open-minded? Are they fearful that they’re going to lose their jobs? Workforce readiness is the first and easiest thing to think about.”
Smith also noted that a successful AI pilot program likely won’t be driven by the IT department. If the people who need to embrace and leverage the technology aren’t the ones driving the project, you run the risk of user resistance.
“If it’s a sales-enabled tool or it’s a front-line caregiver who wants to use this technology, they need to be the ones leading the pilot project,” Smith said. “The closer you can get to the person who’s supposed to benefit from it, the better.”
Jackson recommended determining where you have opportunities for AI to improve efficiency. “Find places where people are doing a lot of busy work, and where you have opportunities to streamline and improve their work,” he said.
Barsky referred to a study on the impact of AI on call centers.[4] Centers that replaced human agents entirely with generative AI bots saw user frustration. But the results were different in call centers where generative AI was used in collaboration with humans.
“The top-performing call center workers got a bit better,” Barsky said. “The bottom performers got much better. That delta between the top and bottom performers got a lot smaller, the cost of training employees went down, and customer satisfaction went up. Using it as a replacement is potentially going to be a massive liability and result in a lot of blowback from your customers, because you don’t have a way to guarantee accuracy. It’s a helper, not a replacement.”
When getting started, Smith also recommends testing the technology for internal use cases first. “If you want to do a pilot project, do it as far away from your customers as possible,” Smith said. “See how your colleagues react, then inch closer to your customers.”
Regulatory uncertainty
Given the stakes involved, including concerns about job losses, privacy and misinformation, most people agree that some kind of regulatory framework around AI in healthcare is necessary. But Barsky characterized the current environment as “a mess,” with several ongoing issues that require careful attention.
He noted that President Joe Biden’s recent executive order[5] on AI will spark a wave of proposed regulations. At the same time, however, the U.S. Supreme Court is deliberating on whether to overturn a 40-year-old legal precedent known as Chevron deference, under which courts defer to the expertise of federal administrative agencies to interpret ambiguous congressional statutes. If the court abolishes Chevron deference, Barsky said, all kinds of regulations could be rendered moot, leaving the interpretation of statutes to individual judges on a case-by-case basis.
“It would put us in a tenuous regulatory position,” Barsky said. “Regulation by case law would result in a lot of litigation. Florida, where I reside, has three [federal judicial] districts—Southern, Middle and Northern. If you’re in Miami, you could get one ruling on a statute. In Orlando, they could interpret it entirely differently, and in Jacksonville they could say we interpret it a third way. There are 94 federal judicial districts in the U.S. You could have that situation play out across the country.”
Given that healthcare is one of the most regulated industries, particularly when it comes to how data is used, Barsky said the industry is trying to figure out how to proceed.
Believe the hype?
The panelists characterized the current buzz around AI as both over- and underhyped. For Smith, what’s overhyped is the notion that using generative AI will deliver immediate improvements in the workplace; he noted that we’re still about two years away from that scenario. What’s underhyped is AI’s ability to help forge great leaps in research and development, with bench scientists, for example, using AI to automate tasks such as identifying data trends or simulating complex scenarios. User acceptance, however, is the sticking point.
“The higher the education level, the more resistance we see,” Smith said. “We could have massive scientific breakthroughs sooner than we expect, but the actual users [in scientific research] are some of the slowest to adopt it.”
Barsky said that while LLM-based generative AI tools are often fun to use, they’re currently too inaccurate to replace humans in many tasks. Still, he acknowledged that AI in general has the potential to be a game-changer for the healthcare industry. “People have not yet truly understood the true power and fundamental shift that this is.”
[2] https://research.ibm.com/blog/what-is-generative-AI
[3] https://www.ibm.com/topics/large-language-models
[4] https://mitsloan.mit.edu/ideas-made-to-matter/workers-less-experience-gain-most-generative-ai
Ish McGee is a Managing Director of Healthcare Investment Banking at BMO Capital Markets who covers the pharma services and mental health sectors.