Season 5 | Episode 10
Policy, Governance, and the Groundwork for Effective AI
Mark Sendak, MD, Co-Founder and CEO at Vega Health
Listen on Spotify

In this episode


Mark Sendak, MD, Co-Founder and CEO of Vega Health, joined the podcast to discuss the responsible use of AI in real-world healthcare environments. Sendak detailed how the “problem-first” methodology developed at Duke influenced the creation of the Health AI Partnership. He explained that most hospitals are unable to realize the full potential of AI unless they invest in more than just the technology itself.

In this episode, Dr. Sendak discussed the obstacles that hinder full AI implementation and how administrative, financial, and policy barriers complicate adoption. He also offered suggestions for how hospitals can mitigate these challenges and responsibly operationalize AI systems.

“Technology on its own really doesn’t solve healthcare problems.”

– Dr. Mark Sendak

Key takeaways


Dr. Sendak unpacked what it takes to move AI from promise to practice in healthcare. Here’s what he covered:


Healthcare AI succeeds when organizations focus on real frontline problems rather than the technology itself.


Sendak’s team annually identifies “strategic priority areas” and then asks frontline clinicians to define the problems that need solving. “Every project starts with a problem statement, and it starts with a frontline clinician who faces that problem and has been very motivated and activated to try to solve the problem,” he said. His team then works with clinicians to “build, implement… [and] evaluate the solution.”


He emphasized that his group is “actually not an AI team.” Instead, much of their work involves “mundane change management” and “process improvement.” A key focus is “solving problems within the organization, with people… [and] with technical competencies.”


This problem-first approach helped move his team “far ahead… in terms of [their] maturity of using AI in clinical care.” It also led the Moore Foundation — one of the largest philanthropies funding healthcare quality work — to ask his team how to build these capabilities at a national level. That effort eventually became the Health AI Partnership.


Scaling AI requires fluency in both clinical reality and messy, real-world data.


“I literally spent the first five years doing my job cleaning healthcare data,” Sendak said. Drawing on his medical training, he focused on understanding clinical problems directly from clinicians and identifying what was meaningful within the data.


The effectiveness of AI in healthcare depends not only on analysis, but also on how insights are presented. “How do I visualize something for a domain expert where they can instantly see what they’re looking for?” he asked. Learning how to work with complex data sources is foundational to making AI useful in practice.


Reflecting on his long-term work in areas like sepsis and partnerships with community providers, Sendak noted, “It’s just taught me how hard it is to take a best-in-class technology and scale it.” Healthcare AI, he emphasized, is not built overnight. It requires sustained collaboration and steady operational work.


AI delivers significant value in community healthcare settings, but only when organizations have support to evaluate and operationalize it.


Sendak described how many healthcare organizations lack the internal expertise to independently evaluate AI tools. He referenced a community hospital that “[did] not employ a data scientist, [had] no statisticians on staff,” and needed assistance with local technology evaluation. That left leaders asking vendors, “What does this mean?… Is this product good? Should we use it?” Even when vendors conducted detailed analyses, organizations still needed “somebody who they perceived as objective” to translate technical results into clinical decisions.


AI plays a different role in community hospitals and federally qualified health centers than it does in large academic systems. Predicting no-shows is not just a convenience — it can be “literally top-line revenue,” and improving attendance may determine whether an organization keeps its doors open. Similarly, AI scribes are often adopted not primarily for clinician well-being, but for “the speed with which they can close charts [and] send bills,” helping organizations increase cash on hand and sustain access to care.


Sendak acknowledged that “the real world is messy, and these are real problems.” With more than 6,000 hospitals in the United States and only a fraction supported so far, he expressed concern that technology adoption is outpacing the speed at which organizations can build the necessary capabilities. Without broader investment in technical assistance and infrastructure, the benefits of healthcare AI will remain uneven, leaving many organizations without the support needed to adopt proven technologies responsibly.
