CEO Munjal Shah of Hippocratic AI Stresses 'Safety First' With New Healthcare AI

Entrepreneur Munjal Shah’s Hippocratic AI is taking a safety-first approach to developing and testing its generative AI healthcare agents.

Munjal Shah is passionate about the life-changing potential of large language models in healthcare. A seasoned entrepreneur whose previous companies were acquired by Google and Alibaba, he co-founded Hippocratic AI in 2023 to use generative artificial intelligence to help address the chronic shortage of healthcare professionals and increase access to care.

In just under two years, the company developed and trained a healthcare-specific LLM that powers a variety of generative AI healthcare agents focused on nondiagnostic tasks. Dozens of healthcare providers are now testing these agents against a range of safety and effectiveness metrics. Shah successfully demoed the company’s generative AI agents at NVIDIA’s GTC conference in March. But as he has traversed the country expounding on how generative AI healthcare agents can help address the critical shortage of healthcare professionals, reduce their burnout, and increase patient engagement, Shah is quick to start with an important caveat: Safety has to be the primary concern.

“I cannot be more clear,” he said in a recent interview. “It’s right underneath our logo: Do no harm.”

What Does Hippocratic AI Do?

To prioritize safety, Hippocratic AI has designed generative AI agents built atop a “constellation” of proprietary LLMs that are trained on evidence-based medical data and undergo rigorous assessment from human healthcare professionals.

Hippocratic AI’s agents engage patients over the phone, handling nondiagnostic tasks like chronic care management, preoperative screening, post-discharge follow-ups, and health risk assessments.

"I don't think in America we have a problem with diagnoses. We do in some rare cases, but by and large the healthcare system does a good job of diagnoses, especially of our chronic diseases,” says Munjal Shah. “Do we really misdiagnose diabetes that often? No. Our problem is a lack of adherence to the care plan due to insufficient patient engagement. That’s one of the major problems we have that a purpose-built generative AI product can address now.”

Munjal Shah’s Shift to Healthcare

It was a personal health scare that prompted Shah to pivot from his previous ventures in e-commerce and computer vision toward healthcare. The day after selling his visual search startup to Google, Shah — then 37 years old — experienced chest pains during a run and ended up in the emergency room.

"My dad had his first heart attack in his mid-40s, so it wasn't like I had good genes," Shah recalls. The episode prompted him to lose 30 to 40 pounds and develop a passion for nutrition and preventive medicine. He even took a class on endocrinology at Stanford University, discovering a new interest in the systems that keep the body healthy.

"I realized I maybe would have chosen this over computer science if I'd taken it in college," says Shah. "I was just fascinated."

Healthcare Staffing Shortages

From that fascination emerged a new healthcare venture centered around generative AI — technology Shah believes can address some of the industry's most acute staffing shortages and access issues. The World Health Organization projects a global deficit of 10 million healthcare workers by 2030.

Shah's solution? Deploying generative AI "super-staffing" to shoulder routine, labor-intensive tasks — things like medication onboarding, appointment booking and follow-ups, and health coaching — providing an elevated level of patient engagement that would ideally take place but often doesn’t because of staffing shortages. Super-staffing can also free clinicians from these tasks so they can focus on higher-skilled responsibilities that demand human expertise.

"If you had an infinitely scalable pharmacist available at a reasonable cost, what would you do today that you don't do?" asks Shah. "I would call every patient two days after they start taking a new medication."

Such ubiquitous follow-ups are currently out of reach because the workforce is far too small to provide them, he explains, leaving many patients to fend for themselves after being prescribed new drugs. Many patients fail to adhere to their care plans in part because no one proactively reaches out to discuss a drug’s side effects or simply to remind them how and when to take it.

Hippocratic’s generative AI healthcare agents, on the other hand, can engage patients at scale and at a nominal cost per hour, addressing side effect concerns and providing reminders, among many other tasks. Shah envisions similar use cases for preoperative checklists, post-discharge monitoring, and regular wellness check-ins — especially for elderly patients at higher risk of health complications.

"My mother is 81 and has knee pain. She had an issue that got really bad because we didn't check in with her," he laments. "If she had been called every few days, it may have avoided an ER visit."

Ensuring Safety

While companies like Amazon and Google have developed medical LLMs tailored for tasks such as drug discovery, Shah says Hippocratic's AI agents represent a new frontier: generative AI for patient interaction and care delivery workflows. The system is built atop a custom LLM with 70 billion to 100 billion parameters, trained on licensed medical data and textbooks currently unavailable on the open internet.

More than 40 hospitals, insurers, digital health firms, and pharmaceutical companies are already involved in beta testing of the company's initial AI agents. Hippocratic AI has also assembled physician and nurse advisory councils to provide human feedback. The company does not benchmark safety against generic AI capabilities. Instead, it puts its models through a kind of Turing test with licensed healthcare professionals. Nurses, pharmacists, and doctors engage with Hippocratic’s generative AI agents in simulated patient conversations, evaluating, for example, whether they exhibit sufficient medical knowledge and empathy to interact safely with real patients.

"The only person who can judge safety is the clinician who does the job," explains Shah. "When we get enough of them saying 'yes' as a percentage, that's when we'll release it."
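Shah’s release criterion — enough clinicians answering “yes” as a percentage — amounts to a simple acceptance threshold on human evaluations. The sketch below is purely illustrative and not Hippocratic AI’s actual process; the function name and the 95% threshold are assumptions for the sake of the example:

```python
def release_ready(clinician_verdicts, threshold=0.95):
    """Decide whether an AI agent clears the clinician sign-off gate.

    clinician_verdicts: one boolean per licensed evaluator
        (True = judged safe to interact with real patients).
    threshold: required approval fraction -- an assumed value,
        not Hippocratic AI's actual release criterion.
    """
    if not clinician_verdicts:
        return False  # no evaluations yet: never release
    approval_rate = sum(clinician_verdicts) / len(clinician_verdicts)
    return approval_rate >= threshold
```

The key design point the quote implies is that the gate is binary and conservative: an agent ships only once the approval rate from working clinicians crosses the bar, and absent any evaluations it does not ship at all.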

Hippocratic's thorough development and testing approach is supported by its recent $53 million Series A round that brought its valuation to $500 million. Investors like General Catalyst and Premji Invest seem to appreciate Shah's focus on long-term healthcare transformation over rushed monetization.

Safety "is not optional for us," Shah says. "We named the company Hippocratic after the famous oath because we believe the best way to make AI safe is to use the clinicians who we trust to ensure it's safe."

Munjal Shah is betting that thoroughness — and a strong emphasis on real-world testing over hype — will make Hippocratic AI’s products worthy of widespread trust among clinicians and patients alike. The company shows how generative AI could mitigate staffing shortages and increase patient engagement once the technology is sufficiently optimized to respond to patients with knowledge and empathy.

Hippocratic AI aims to make that aspiration a reality — one meticulously trained, clinically validated AI agent at a time.
