When Your Son Becomes Your Doctor: How One Man Used AI to Catch Errors in His Mother's Cancer Care
When Pratik Desai's mother was diagnosed with stage 4 duodenal adenocarcinoma, doctors essentially told the family to prepare for the worst. But Desai, a 34-year-old AI entrepreneur with no medical background, refused to accept that outcome. Instead, he built a simple AI workflow that would become his mother's second opinion, catching at least three critical errors in her medical care and potentially saving her life multiple times during her final 76 days.
Desai's story represents a quiet but powerful shift in healthcare: non-medical experts using accessible AI tools to become informed advocates for their loved ones. His approach wasn't sophisticated or proprietary. It was practical, free, and built from tools already available to anyone with internet access.
What Exactly Is "Vibe Coding" in Healthcare?
"Vibe coding" is an informal term for using AI tools intuitively and creatively to solve real-world problems without formal training or a rigid technical framework. In Desai's case, it meant combining multiple AI platforms to synthesize his mother's medical data and ask critical questions that doctors weren't addressing .
His workflow was elegantly simple. Each day, Desai exported his mother's medical records from Epic, the hospital's electronic health records system. He uploaded those records into NotebookLM, Google's AI tool designed to analyze documents, along with notes about any symptoms his mother reported. Then he asked NotebookLM to synthesize the data. Next, he took that synthesis to Claude, an advanced large language model (LLM) created by Anthropic, and asked it to explain what the information meant and what questions he should ask at upcoming appointments.
As his mother's medical records grew to 1,600 pages, Desai eventually switched from Google's AI Studio to Claude because the model's capabilities were improving faster. The context window, the amount of information an AI can process at once, became critical as the medical file expanded beyond what a single conversation thread could handle.
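To make the context-window constraint concrete, here is a minimal, hypothetical sketch (not Desai's actual code) of splitting a long record into chunks that each fit under a model's limit, using character counts as a rough stand-in for tokens:

```python
def chunk_records(pages, max_chars=100_000):
    """Group pages of a medical record into chunks that each stay
    under a model's approximate context limit (measured here in
    characters as a crude stand-in for tokens)."""
    chunks, current, size = [], [], 0
    for page in pages:
        # Flush the current chunk if adding this page would overflow it
        if size + len(page) > max_chars and current:
            chunks.append("\n".join(current))
            current, size = [], 0
        current.append(page)
        size += len(page) + 1  # +1 for the joining newline
    if current:
        chunks.append("\n".join(current))
    return chunks

# Illustrative file: 1,600 one-page strings of ~3,000 characters each
pages = ["x" * 3000 for _ in range(1600)]
chunks = chunk_records(pages)
print(len(chunks))  # number of separate AI conversations needed
```

Each chunk could then be summarized on its own and the summaries combined, which mirrors the two-stage NotebookLM-then-Claude flow Desai describes.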
How Did AI Catch Errors That Doctors Missed?
The most striking example came when analyzing a critical CAT scan report. Desai's AI workflow identified two misdiagnoses in the scan and three instances where the wrong cancer type was listed in the report. Neither Desai nor his mother would have caught these errors on their own, but the AI flagged them for further investigation.
On Christmas Day, Desai noticed something subtle in his mother's behavior: an odd change in her walking, breathing, and speech, combined with her reluctance to spend time with family. The medical system wasn't returning calls on the holiday. Desai typed what he was observing into his AI workflow, which flagged likely complications from a pulmonary embolism, a potentially fatal blood clot. He alerted his cousin, a physician, who immediately advised getting her to the hospital. Without that AI-assisted assessment, the family might have waited four to five hours for the emergency line to respond.
The workflow also detected a dangerous pattern: seven days after blood transfusions, his mother would begin bleeding internally. Desai and the AI identified that doctors were moving her from a liquid-only diet to solid food 48 hours after transfusions, which irritated an ulcer and accelerated bleeding. The blood loss became so critical it threatened her life. By catching this pattern twice, the family was able to work with doctors to adjust her care plan.
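The transfusion-to-bleeding lag is the kind of pattern that can also be checked mechanically. A minimal sketch, with hypothetical dates and event labels, of surfacing a recurring lag in a timeline of events:

```python
from datetime import date

def find_lag_pattern(events, cause, effect, lag_days, tolerance=1):
    """Return (cause_date, effect_date) pairs where the effect
    occurred roughly `lag_days` after the cause -- the kind of
    recurring pattern individual appointments can miss."""
    causes = [d for d, kind in events if kind == cause]
    effects = [d for d, kind in events if kind == effect]
    matches = []
    for c in causes:
        for e in effects:
            if abs((e - c).days - lag_days) <= tolerance:
                matches.append((c, e))
    return matches

# Hypothetical timeline: internal bleeding ~7 days after each transfusion
events = [
    (date(2024, 11, 3), "transfusion"),
    (date(2024, 11, 10), "bleeding"),
    (date(2024, 11, 20), "transfusion"),
    (date(2024, 11, 27), "bleeding"),
]
matches = find_lag_pattern(events, "transfusion", "bleeding", lag_days=7)
print(matches)
```

In practice, Desai asked the AI to do this kind of correlation in natural language rather than in code; the sketch just shows why a full timeline, not a single visit's data, is what makes the pattern visible.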
Steps to Building Your Own AI-Assisted Care Workflow
- Gather Your Data: Export medical records from your hospital's patient portal or electronic health records system. Most systems like Epic allow patients to download their records. Organize them chronologically and include any personal notes about symptoms or observations.
- Choose Your AI Tools: Use a document analysis tool like NotebookLM to synthesize large medical files, then move to a more advanced LLM like Claude for deeper analysis and question generation. Both are accessible to the general public.
- Ask Specific Questions: Rather than asking vague questions, prompt the AI with concrete scenarios: "What should I know about tomorrow's appointment?" "What doesn't look right here that I should push back on?" "What second opinions should I ask for?"
- Cross-Reference with Medical Professionals: Use the AI's insights to inform conversations with doctors, not to replace them. Present findings as questions and observations, not diagnoses.
- Monitor for Patterns: Ask the AI to identify recurring patterns in symptoms, test results, or treatment responses. These patterns often reveal issues that individual appointments might miss.
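As a sketch of how the steps above could come together, here is a small helper (illustrative only; the function name and prompt framing are assumptions, not Desai's code) that assembles each day's prompt from a record synthesis and symptom notes:

```python
def build_daily_prompt(synthesis, symptom_notes, appointment=None):
    """Assemble an LLM prompt from the day's record synthesis and
    caregiver observations, ending with the kind of concrete
    questions the steps above recommend."""
    questions = [
        "What doesn't look right here that I should push back on?",
        "What patterns appear across recent test results?",
        "What second opinions should I ask for?",
    ]
    if appointment:
        questions.insert(0, f"What should I know about the {appointment} appointment?")
    parts = [
        # Frame the AI as an advocate's aid, not a diagnostician
        "You are helping a caregiver prepare questions for doctors. "
        "Do not diagnose; flag items to verify with the medical team.",
        "=== Record synthesis ===", synthesis,
        "=== Today's observations ===", "\n".join(symptom_notes),
        "=== Questions ===", "\n".join(f"- {q}" for q in questions),
    ]
    return "\n\n".join(parts)

prompt = build_daily_prompt(
    "Hemoglobin trending down since last transfusion.",
    ["Reluctant to eat", "Change in breathing"],
    appointment="oncology",
)
print(prompt)
```

The output would then be pasted into whichever AI tool is handling that day's analysis; the framing instruction keeps the exchange aligned with step 4, informing conversations with doctors rather than replacing them.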
Why Doctors Are Skeptical, and Why They Shouldn't Be
When Desai discussed his approach with physicians, some pushed back, citing AI's known limitations. They noted that large language models are correct only about 70% of the time and can "hallucinate," generating plausible-sounding but false information.
Desai's response was pointed: "What if we took the medical system and graded it the same way?" The medical system itself is far from perfect. Diagnostic errors, missed symptoms, and communication failures happen regularly in hospitals. The difference is that AI's limitations are openly discussed, while healthcare's failures are often normalized.
The reality is that Desai's workflow didn't replace doctors. It augmented them. It caught errors, identified patterns, and helped a non-medical person ask better questions. In at least three instances, it likely prevented his mother's death. That's not perfection, but it's meaningful impact.
The Broader Implications for Caregiving and Patient Advocacy
Desai has since shared his workflow with friends and family members facing similar healthcare challenges. One friend used the approach to study his mother's case so thoroughly that when he called a meeting with her medical team, the doctors were "thoroughly impressed" by his understanding of her entire case without a single note in front of him.
This democratization of medical knowledge represents a shift in the caregiver role. Historically, caregivers provided emotional support and logistical help. Now, with AI tools, they can become informed advocates who understand their loved one's medical situation deeply enough to question treatment decisions and catch errors.
Desai emphasizes that his workflow isn't perfect or elegant, but it's accessible and free. That accessibility matters. Healthcare disparities often stem from information gaps. Families with resources can hire patient advocates or consult specialists. Families without those resources are left to navigate complex medical systems alone. AI tools like the ones Desai used level that playing field, at least partially.
Desai's mother lived for 76 days after her diagnosis, 67 of which were spent in the hospital. Throughout that time, Desai was at her bedside from 5 a.m. to 10 p.m. every single day. NotebookLM and Claude were his second opinion and coach. By the end, his mother had the time she needed to say her goodbyes. She was able to kiss Desai's two-year-old daughter. In the context of stage 4 cancer, that's not a cure. It's something more precious: dignity and closure.