Season 5 | Episode 10
You’ve Piloted AI. Now What?
Haris Shuaib, Founder and CEO at Newton’s Tree
Listen on Spotify

In this episode

Haris Shuaib shares insights from his journey from medical physicist to founder and CEO of Newton’s Tree, an AI company focused on supporting healthcare AI deployment.

In this episode, he explains how necessity drove the automation of clinical workflows, how health systems can better integrate AI into existing operations, and what’s required to move from pilot projects to sustainable, system-wide execution:

“Medicine used to be an herbal science. It then became a biochemical science. And it will become an information science.”

– Haris Shuaib

Key takeaways

Shuaib discussed how AI is reshaping healthcare and what organizations must do to succeed as the technology matures. Here’s what he covered:

The rise of generative AI reshaped healthcare priorities but brought infrastructure and education back into focus.

Shuaib explained that the introduction of large language models has “completely changed the art of the possible… [it] opened up interest and excitement in nonclinical use cases.” AI has evolved from primarily image processing into a broader, more versatile technology capable of addressing operational and administrative challenges. He noted that health systems face “fundamental problems that AI could help solve, like billing and scheduling and patient communication.” By automating routine tasks, organizations can redirect focus toward more strategic, high-impact work.

AI is also influencing access and education. As patients and staff become more familiar with AI in everyday life, it’s become easier to have conversations about AI within health systems. According to Shuaib, that familiarity has “made education and patient communication a lot easier.”

As organizations move beyond experimentation, the conversation shifts to measurable ROI and operationalization. Leaders are now asking: How do we make this business as usual? That shift requires more intentional governance, workflow integration, and responsible deployment.

Successful AI adoption requires embedding it into the organization, similar to how health systems manage cybersecurity.

Shuaib observed that adoption varies widely across the industry: some health systems are still taking early steps in AI governance, while others are moving quickly ahead. To be effective, AI must be supported by strong digital infrastructure. As healthcare becomes increasingly dependent on digital systems, information governance and security become even more critical. That includes investment in cybersecurity tooling, processes, and infrastructure.

“When information is authenticated and policies are in place, we can benefit from the upside, just like with digital technologies and cybersecurity,” he explained.

Shuaib also emphasized the role of clinicians. “Doctors have always been not only the deliverers of benefit, but also the mitigators of risk.” That perspective positions healthcare professionals to play a central role in evaluating, implementing, and governing AI within health systems. Sustainable integration requires what he described as “a mixture of policy, platform, and people.”

To scale AI, health systems must shift from isolated pilots to coordinated execution.

Shuaib compared AI’s impact on healthcare to the introduction of tools like the stethoscope or blood tests. While many organizations have piloted AI for specific use cases, the next phase requires coordination across the system. To scale effectively, he stressed the importance of designing for agility across infrastructure, investments, and processes. “As a health system leader… you won’t be able to keep up” without the ability to adapt quickly in a rapidly changing environment.

He recommended evaluating technical infrastructure and governance frameworks with agility and safety in mind.

When a health system adopts AI, it must have a plan for oversight, assurance, and accuracy. Organizations need to understand what’s happening within their AI systems and have the ability to intervene if something goes wrong. These safeguards allow health systems to “move fast, but move safe.”

Subscribe to Digital Health: On Air

Get the latest episodes delivered directly to your inbox.