Can artificial intelligence (AI) meaningfully improve disease detection without losing sight of human needs and good medicine? The question is not whether AI could one day replace doctors and nurses, but whether the technology can help medical professionals make better, more precise decisions.
That question was the recurring theme at the AI Impact Summit 2026 in New Delhi, where a standout session on “Scaling AI in Healthcare” brought together experts to brainstorm ideas at the intersection of AI and healthcare.
Perhaps the most memorable moment of the session came from Ziad Obermeyer, Associate Professor at the University of California, Berkeley, whose personal story underscored how easily serious conditions can be missed, even by trained doctors.
Remembering the early days of his academic career, Obermeyer recalled the pressure and quiet anxiety he felt before his first seminar as a new faculty member in the economics department at the University of California, Berkeley. The department was friendly, but the moment still mattered a lot to him. A few days before the seminar, he started feeling unwell without being able to explain why. When he mentioned it to his wife, she asked if his stomach hurt or if he was just nervous. He did not think it was serious and focused on his presentation.
He delivered the seminar as planned, but during the talk the vague discomfort turned into a sharp pain in the lower right side of his stomach. After the seminar ended, he drove himself to the emergency room, where a CT scan confirmed the diagnosis. He had acute appendicitis and needed immediate surgery. What made this especially uncomfortable, he said, was that he had been trained as an emergency room doctor, and appendicitis is precisely the kind of condition doctors are taught not to miss. Yet he had ignored the symptoms for several days. That experience changed how he thought about healthcare decisions and set the stage for a broader discussion on how important health choices are often made with too little data.
Doctors Take Health Decisions With Too Little Data
When people come to hospitals, doctors often have very limited information. Symptoms are unclear, time is short and decisions must be taken quickly. Without a diagnosis, there is no treatment. Without data, diagnosis becomes guesswork. Most useful medical data lives inside health systems, and if patients are not already connected to those systems, that data is hard to access. This leads to under‑diagnosis, which experts at the summit described as a global problem.
Heart attacks show how serious this problem can be. Studies using cardiac MRI scans in high‑income countries suggest that between 40% and 80% of heart attacks are silent. People had heart attacks without realising it, and doctors never diagnosed them. As one researcher explained, “These patients have scars on their hearts, but no one ever knew they had a heart attack.” That means no treatment and no long‑term care. In low and middle‑income countries, the situation is even worse. In parts of India, including Tamil Nadu, many people have high blood pressure or diabetes but do not know it, and advanced diagnostic tools are limited, so many conditions go undetected.
AI Needs Data, Not Hype
Artificial intelligence is often described as magical, but it is not. AI systems only work when they can learn from large, high‑quality datasets with accurately recorded outcomes. In healthcare, this is difficult because diagnoses are often delayed or incomplete. Still, there is an opportunity in areas where data is routinely collected. ECG machines, for example, are cheap, widely available and already used in many settings. When ECG data is combined with AI, it can reveal patterns that humans miss.
Studies using AI with ECGs show that around 10% of people flagged by AI had evidence of a previous heart attack. Many of them had no traditional risk factors. This suggests that current systems are missing large numbers of high‑risk patients who could benefit from earlier detection and care.
Why India Is Central to Health AI
India, speakers argued, plays a critical role in the future of health AI. The country has large patient volumes, lower costs and strong engineering talent, allowing new tools to be tested and improved quickly. Research also shows that AI models trained in one country can work well in others. Algorithms tested in Europe have performed well in the United States and India. As one speaker noted, “These algorithms generalize incredibly well across countries.”
However, technology alone does not solve access problems. Distance from hospitals, poverty, discrimination and language barriers all limit access to care. Even the best AI tools face real‑world constraints. That is why AI must be built carefully. Data must be protected, and systems must be fair and trustworthy if they are to be widely adopted.
Supporting Care Beyond Hospitals
Another recurring theme was that most healthcare happens at home, not in hospitals. Family members provide most of the care, often without training or confidence. “Our work starts in healthcare facilities,” said Shahed Alam, co-founder and co‑CEO of Noora Health. “Families are anxious. They want to help, but they don’t know how.”
Noora Health trains family caregivers while patients are still hospitalised. After discharge, families stay connected to trained health workers. AI supports the system, for example by helping to personalise information or manage follow‑up at scale, but humans remain central. Alam emphasised three lessons: “Start with people, not technology. Be very clear about what good care means. And keep humans in the loop while you learn and improve.”
How Governments Should Think About Evidence
AI models change fast, but traditional medical trials take years, creating tension for policymakers. Zameer Brey, Deputy Director of Technology Diffusion at the Gates Foundation, addressed this challenge. “We should not abandon evidence,” he said. “But we need the right evidence at the right time.”
He explained that early studies can focus on safety and feasibility, with larger trials coming later. Adaptive and pragmatic trials are needed to help governments move faster without sacrificing rigour. Rob Sherman, Vice President and Deputy Chief Privacy Officer at Meta, warned against treating governance as a barrier. “Governance should be a guide, not a gate,” he said.
He noted that many people are willing to share their data if the benefits are clear and protections are strong. “Worrying about privacy should not stop us from using data in ways that can deliver real health benefits.” Clear goals, responsible design and representative data are essential if AI is to be used responsibly in health systems.
So Can AI Help Detect Silent Heart Attacks?
The discussion circled back to the central question. Can AI help detect silent heart attacks and improve health outcomes? According to the experts, the answer is a cautious yes. With good data, careful evaluation, human involvement and strong governance, AI can strengthen health systems and save lives.






