Healthcare must control the 'whether, when and how' of AI development and deployment


CHICAGO – HIMSS23 kicked off in full force here on Tuesday, in a full-to-capacity opening keynote with an audience from around the world. HIMSS CEO Hal Wolf noted that the organization’s membership has now surpassed 122,000 – a 60% increase over the past five years – with an increasingly global feel. Healthcare and technology leaders from more than 80 countries are represented at the show, all trying to tackle similar challenges.

“We’ve had to solve a lot of problems in the past three years,” said Wolf. 

Beyond the pandemic, other hurdles to health and wellness remain, in the U.S. and worldwide: aging populations, chronic disease, geographic displacement and challenges with health access, financial pressures, staff shortages and fundamental shifts in care delivery such as the rise of consumerism and the move toward telehealth and home-based care.

To solve those challenges, “the need for actionable information is stronger now than at any time in the past,” said Wolf.

And management of those enormous troves of information is increasingly being powered by fast-evolving artificial intelligence – the topic of a sometimes amicably contentious opening panel discussion.

AI and machine learning can “open up new horizons if – IF – we use them appropriately,” said Wolf. He nodded jokingly to the recent wave of publicity around OpenAI’s ChatGPT by noting that he recently asked the AI model a simple question: “how to solve the global healthcare challenges?”

In seconds, the software returned a 300-plus word answer.

Those challenges are “complex and multifaceted, and therefore require a comprehensive approach involving multiple stakeholders, strategies, and solutions,” said ChatGPT, which listed improved access, investments in preventative care, technological innovation, addressing health disparities and global collaboration among its top suggestions.

When Mayo Clinic Chief Information Officer Cris Ross – who moderated the panel discussion – said that healthcare is only in the early stages of “creating and learning how to manage these emerging AI tools,” it was hard to argue.

Ross convened the discussion, “Responsible AI: Prioritizing Patient Safety, Privacy, and Ethical Considerations,” with a quartet of AI innovators who have been thinking hard about the very real challenges and opportunities of this transformative technology.

Andrew Moore, founder and CEO of Lovelace AI; Kay Firth-Butterfield, CEO of the Centre for Trustworthy Technology; Peter Lee, vice president of research and incubation at Microsoft; and Reid Blackman, author of the book Ethical Machines and CEO of Virtue, were all tasked with exploring a simple question about AI posed by Ross: “Just because we can do a thing, should we?”

As he has in the past, Ross contrasted what he calls Big AI – “bold ideas, like machines that can diagnose disease better than physicians” – with Little AI, the “machines that are already listening, writing, helping – and irrevocably changing how we live and work.”

Those AI tools are already helping their users do “increasingly bigger things,” he said. And it’s through the accretion of Little AI advancements that Big AI will emerge.

And it’s happening quickly. For that reason, Moore argued that health systems should get their arms around the challenges now.

Even though the fast-advancing capabilities of large language models like ChatGPT might feel uncanny to some, “I would expect a responsible hospital should be using large language models now,” he said, for tasks such as customer service and call center automation.

“Don’t wait to see what happens with the next iteration,” said Moore. “Start right now, so you’ll be ready.”

Meanwhile, the capabilities of generative AI are emerging in ways that could benefit healthcare significantly.

Useful use cases are already visible: integrating generative AI to improve clinical note taking, for instance – see Epic’s generative AI announcement with Microsoft and Nuance this week – or medical schools deploying the tools so AI can “play act the role of a patient.”

But there are “also some scary risks,” said Lee. “It’s not simple, and there’s a lot to learn.”

To manage those risks, Lee implored the crowd at HIMSS23 that “this community needs to own ‘whether, when and how’ these AI technologies are used in the future.”

Yes, “there are tremendous opportunities,” he said. “But also risks, some of which we may not yet know about.”

So the “healthcare community needs to assertively own” how the development and deployment of these tools evolve – with a keen eye on safety, efficacy and equity.
