Generative AI and the Self-Diagnosis Pandemic
About DeepSeek, healthcare data, credible sources, and why people should stop diagnosing themselves online.
The new Chinese AI tool, DeepSeek, has given us quite a few wtf moments over the last couple of weeks. One of those was an incident that happened when a patient asked DeepSeek to interpret their blood test results: DeepSeek misinterpreted the results, leading to an incorrect diagnosis of chronic kidney failure, leading to an urgent recommendation for dialysis or a transplant, leading to the patient totally freaking out. The patient shared that it turned out DeepSeek had made a mistake, reading 101 as 10 in one of the lab results. This incident highlights the risks of people trusting AI systems with their health when no safety measures are in place.
And just like the days when people used to diagnose themselves by googling their symptoms, turning a mild headache into a self-diagnosis of a terminal brain tumor, we now see people starting to use tools like ChatGPT or DeepSeek for self-diagnosis. And then freaking out.
Generative AI tools put a lot of knowledge in the hands of patients, and knowledge is power.
Don’t get me wrong. Generative AI tools put a lot of knowledge in the hands of patients, and knowledge is power. Using AI to learn more about a condition, or educating yourself on what the latest research says about it, can be super helpful - if that information comes from credible, reliable sources. But what if the answer comes from social media content written by some rando that the AI model happened to be trained on? Not so cool anymore, is it.
And no, a lot of likes or supportive community notes do not make a source credible or reliable. And, speaking of community notes - no, community notes cannot replace fact checking. Are we really OK with popularity being used to rewrite history?
But I digress. Back to the topic.
There’s a lot of momentum around Generative AI in healthcare - and for a good reason. AI can help with some of the most urgent problems in healthcare, issues like access to care, burnout of healthcare professionals and staff shortages, all around the world. But it needs to be done right.
The healthcare industry generates around 80MB of data per patient per year. However, more than 90% of this data is not used.
Unused Data: A Missed Opportunity
The healthcare industry generates massive amounts of data. By some estimates, around 80MB of data is generated for a single patient each year. However, more than 90% of this data is estimated to remain unused. That healthcare data is also multimodal – it comes in different types: clinical notes, medical imaging, videos, audio and more. There’s a missed opportunity here: to use all this data to create insights that could improve patient outcomes.
Generative AI models are different from search engines - they can take in a larger context and more background about the patient. And multimodal Generative AI models can analyze diverse types of data. This means Generative AI can rely on a more holistic picture of the patient, which could potentially lead to better outcomes.
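To make that difference concrete, here is a minimal sketch, in Python, of the gap between the anecdotal question a patient might type and the context-rich prompt a healthcare-specific tool could assemble. The `ask_model` helper, the lab values and the reference ranges are all invented for illustration - this is not any real product, and not a reconstruction of the DeepSeek incident.

```python
# A minimal sketch (hypothetical, not a real product): the same question asked
# anecdotally vs. with the structured context a healthcare-specific tool could add.

def ask_model(prompt: str) -> str:
    """Hypothetical placeholder for a call to whichever generative AI model is used."""
    raise NotImplementedError("wire this up to your model provider of choice")

# What patients typically type today: no units, no reference ranges, no history.
anecdotal_question = "My creatinine is 101, do I have kidney failure?"

# What a healthcare-specific tool could send instead: structured values with
# their units and reference ranges, plus explicit safety instructions.
patient_context = {
    "age": 54,
    "labs": [
        {"test": "creatinine", "value": 101, "unit": "umol/L", "reference": "60-110"},
        {"test": "eGFR", "value": 72, "unit": "mL/min/1.73m2", "reference": ">60"},
    ],
    "history": "No known kidney disease, no nephrotoxic medication.",
}

contextual_prompt = (
    "You are assisting a patient but you do not replace a clinician.\n"
    "Interpret the labs strictly against the reference ranges provided.\n"
    "If a value is outside its range, say so and recommend talking to a doctor.\n"
    "Do not state a diagnosis.\n\n"
    f"Patient context: {patient_context}\n"
    f"Question: {anecdotal_question}\n"
)

# answer = ask_model(contextual_prompt)  # left commented out - ask_model is a stub
```

The point is not the specific prompt - it is that a purpose-built tool can attach units, reference ranges and history that a patient typing into a generic chatbot never provides.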
So. Huge potential. But.
Many of the generic models and tools were not trained for healthcare. Many of them do not have healthcare-specific safety mechanisms.
Not all models are created equal
Many of the generic models and tools were not trained for healthcare. Many of them do not have healthcare-specific safety mechanisms. This means that mistakes are inevitable.
And when patients use those tools, they typically ask anecdotal, one-off questions - just like they did with search engines in the past - without the context or additional data that could have provided that holistic picture.
The last couple of weeks have seen a lot of action, with new models and tools being released. DeepSeek seemed to put some pressure on OpenAI, which hurried to release a few updates to ChatGPT and announced its new o3-mini model, as well as a new feature called Deep Research. More about that in one of the upcoming blogs, so make sure to subscribe for updates.
AI-Powered Patient Engagement
One of the most promising applications of AI in healthcare is patient interaction. When built responsibly and based on credible information, AI-powered agents can provide patients with quick answers to common questions and explain complex medical information. These tools make it easier for patients to access information in a self-serve way, and they can even reduce the burden on healthcare providers by answering recurring questions.
AI-powered agents can provide patients with quick answers to common questions and explain complex medical information.
One such example is a project my team did with Galilee Medical Center, where AI is used to translate complex radiology reports into plain language, making them easier for patients to understand. We shared this story in a Microsoft blog a few months ago. This kind of innovation doesn’t replace the doctor-patient relationship - it enhances it by improving communication.
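For readers who wonder what this pattern looks like in practice, here is a minimal sketch. To be clear, this is not the actual Galilee Medical Center implementation - the function, prompt and report text are made up for illustration only.

```python
# Minimal sketch of the "explain my report in plain language" pattern.
# Not the real implementation - the function, prompt and report are illustrative only.

def ask_model(prompt: str) -> str:
    """Hypothetical placeholder for a call to a generative AI model."""
    raise NotImplementedError

radiology_report = (
    "Chest X-ray: No focal consolidation. Mild cardiomegaly. "
    "No pleural effusion or pneumothorax."
)

prompt = (
    "Rewrite the following radiology report in plain language that a patient "
    "with no medical background can understand.\n"
    "Rules:\n"
    "- Only restate findings that appear in the report; do not add new findings.\n"
    "- Do not give a diagnosis or treatment advice.\n"
    "- End by reminding the patient to discuss the results with their doctor.\n\n"
    f"Report:\n{radiology_report}"
)

# plain_language_summary = ask_model(prompt)
```

The rules section is the important part: the model is asked to rephrase, not to interpret, which is exactly what keeps the doctor-patient relationship intact.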
The Need for AI Literacy
A lot has been said about the need for training healthcare professionals in AI. For AI to be effective, clinicians need to understand how to use it responsibly. This means training healthcare professionals in how AI works, how to interpret the results and how to question the outputs. One of my previous blog posts explained that in detail.
However, there is a critical need for AI literacy among patients as well: understanding how to work with AI, how to read its results, what its limitations are, how to tell what’s fake and what’s real, and how to recognize the difference between credible information and nonsense.
There is a critical need for AI literacy for patients. To truly benefit from AI, we need to educate the patient population.
And while AI literacy is critical for patients, large portions of the global population lack the digital fluency and critical thinking skills needed to benefit from AI. As AI continues to evolve so quickly, this gap keeps growing, and many patients are left behind.
To truly benefit from AI, we need to educate the patient population. And this education needs to start from a young age. Rushing to implement AI in healthcare systems without basic education of the patient population would mean leaving many people behind or, worse, exposing them to risks.
Incidental Findings and Over-Testing
Not a popular opinion: AI-driven self-diagnosis could sometimes lead to unnecessary follow-up medical exams. When patients use AI to interpret test results, they could encounter incidental findings - minor anomalies that are flagged as potential issues. Often those anomalies are harmless; sometimes they are not. These findings can trigger cascades of additional tests and consultations without meaningful outcomes - not to mention real anxiety for the patient. In my native language we call that “eating films”. It doesn’t translate well, but you get the sentiment. This highlights the need for human oversight over AI, and the importance of involving clinicians in interpreting results to avoid spiraling into over-testing.
AI-driven self-diagnosis could sometimes lead to unnecessary follow-up medical examinations.
On the other hand, lack of access to care and the overburdening of healthcare systems are exactly what push people toward self-diagnosis - it is the alternative that is actually available to them. The solution? Reducing the overload on healthcare systems with credible, healthcare-specific tools for patient engagement, and leaving diagnosis to medical professionals.
Easier said than done, though.
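Still, to make the division of labour a bit more concrete, here is a minimal sketch of what a “clinician in the loop” gate could look like in software. The data shapes and the review queue are invented for illustration, not taken from any real system.

```python
# Minimal sketch of a copilot-style gate: AI drafts an explanation, but nothing
# reaches the patient until a clinician signs off. All names and shapes are illustrative.
from dataclasses import dataclass, field

@dataclass
class DraftExplanation:
    patient_id: str
    ai_text: str
    flags: list[str] = field(default_factory=list)  # e.g. out-of-range values, incidental findings
    clinician_approved: bool = False

review_queue: list[DraftExplanation] = []

def route(draft: DraftExplanation) -> str:
    """Every draft goes to a clinician; flagged drafts are explicitly held."""
    review_queue.append(draft)
    if draft.flags:
        return "held for clinician review (flags present)"
    return "queued for clinician sign-off before release"

draft = DraftExplanation(
    patient_id="demo-001",
    ai_text="Your results are mostly within the expected ranges...",
    flags=["incidental finding: 4 mm lung nodule"],
)
print(route(draft))  # -> held for clinician review (flags present)
```

The design choice is deliberate: the AI never has a path that bypasses the clinician, no matter how confident it sounds.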
Copilots, Not Autopilots
Healthcare is fundamentally human. It’s so very physical. It’s about trust, years of hands-on experience, as well as judgment and responsibility of the clinical experts.
AI will certainly change how medical professionals work - reducing burnout, surfacing relevant information and supporting their decisions. But AI should not act autonomously. Not in medicine.
As for patients - AI can bring a lot of value: engaging with them, explaining medical topics, answering common questions and automating routine tasks. But decisions should always be guided by human clinicians.
