What People Want to Know About AI in Healthcare
On HLTH, FOMO, and questions that people often ask us about AI in Healthcare.
Life is unexpected. This week, my plan was to attend HLTH Europe as a speaker, and take part in a panel session called “AI 101: Everything You Wanted to Know About AI (But Were Too Afraid to Ask)”, answering audience questions. Exciting.
But the universe had other plans. The airport was shut down and all flights were cancelled due to escalation in the region. So. Not going.
Grateful to my colleague Elena Bonfiglioli who backed me up and covered the session.
HLTH is an awesome conference, I can tell you that. It’s a high energy event, with cutting-edge content about healthcare technology and innovation. They often feature top-tier speakers who aren't afraid to make bold predictions or challenge the status quo. It’s interesting, fresh, and intense. The European event is smaller, but has a great vibe to it.
Oh, the FOMO.

Learned a few things this week, though. Turns out, sleeping in a small shelter alongside six others isn’t great. Also, the dog snores. Noted.
And somewhere between the 2:00am alarms and the daily push to meet our goals, I figured it would be interesting to share with you all some of the questions I expected the audience to bring up in that HLTH Europe session. Questions we hear often.
So, what do people typically ask? The answer depends on who you're talking to.
Across the ecosystem, there are a few recurring things that people actually want to know when it comes to AI in healthcare. Doctors, nurses, patients, business executives, radiologists, IT people… all have their unique perspective. And those questions are important, because answering them is key to adoption.
Here are some examples of questions we hear often. Not a comprehensive list by any means, but it illustrates the diverse perspectives of the different stakeholders in the industry. Sometimes, there are overlaps.
Questions Doctors ask
What they really want to know:
Will this system actually save me time, or will it create more admin work for me?
Can I trust its output to make decisions about my patient? How do I verify the output? How do I know it's not biased or hallucinating?
Will it make me obsolete - or will it make me more effective?
Does it integrate with my workflow (e.g. EHR)? Or do I need to work with multiple systems now?
Bonus: They often ask if AI can “recap the patient’s history” or “write the discharge summary”.
What Healthcare Executives & Leaders ask
What they actually want to know:
What’s the business value of this? What applications / use cases does it enable?
What’s the ROI? How much money will this system save us?
Will it actually improve patient outcomes or throughput?
Is it compliant with HIPAA/FDA/whatever-regulation?
Are our competitors already using it?
Translation: Show me the business case, not the cool demo.
Things Patients want to know
What they often ask:
Will it help me get an earlier appointment? Does this mean faster diagnosis? And does it mean I get better treatment?
Will I still see a human doctor? Will AI replace the human doctor? Is the doctor still in charge?
Is my personal data being exposed?
What if the AI makes a mistake that would hurt me?
Bonus question: “Should I just ask ChatGPT?”
What Nurses Actually Ask About AI in Healthcare
Things nurses want to know:
Will this reduce my charting time, or will it just add new documentation requirements?
Can it help prioritize patients? Can it detect early deterioration on the floor?
Would it actually support my work, or is it just another noisy “smart alert”?
If something goes wrong, am I liable for following AI recommendations?
Is this designed with nurses, or just dumped on us without our input?
Bonus question: “Can I override it?”
Things Radiologists Ask
Their questions include:
Is this actually saving me time, or do I now need to double-check the AI results?
Is it part of my workflow? Does it integrate into my PACS/RIS, or does it require toggling between systems?
Is it doing what it claims to do, or is it just generating noise in the form of false positives?
How was the model validated? Against what gold standard?
Does this AI model have FDA clearance for this indication?
One more: “Is this tool assisting me, replacing me, or what?”
What AI Developers in Healthcare want to know
Some of the questions they’re often asking:
Are we solving a real problem here?
How do I get good, clean, annotated medical data?
How do I evaluate my model against clinical ground truth?
Should I fine-tune a generic model for a clinical use case?
What’s the best technical approach for this use case?
What are the regulatory requirements for this? And how do I actually validate the AI results in a way that satisfies regulation?
What’s the best way to pilot my thing in a healthcare setting?
And since this is where my team is in this ecosystem, one of my own questions: “Where’s my traffic?!”.
Things Responsible AI people ask
The questions they ask include things like:
What do you do to ensure the practice of Responsible AI? Like, what kind of processes do you have in place?
Is the model fair across genders, ages, ethnicities, identities?
How can the end-user understand how the AI system got to its conclusion? Does the AI system provide explainability?
Does the system provide evidence for its conclusions? How do you ensure answers are grounded?
What type of safety testing did the system go through?
How do you ensure privacy of patient PHI (Protected Health Information)?
Is the system intended to replace humans, or is it augmenting them? What is the role of the humans using the system, and how do they apply their clinical judgment?
Is this considered Software as a Medical Device (SaMD)?
What Healthcare IT wants to know
What healthcare IT professionals typically ask us:
How does it integrate with our existing EHR systems? Does it plug into Epic/Cerner/Meditech or am I signing up for months of custom middleware?
Where does the system run? Where does the data live and who owns it? Where is it stored and why? Does any PHI leave our environment?
What are the security implications? Is this system vulnerable to prompt injection, model inversion, or just plain old ransomware bait?
How much compute does it need - and who's paying? Am I spinning up GPUs in the cloud now, or trying to make this run on a 2013 on-prem server?
Is it compliant? (HIPAA, GDPR, HITRUST, etc.) - Like, if regulators come knocking, can I produce a clean audit trail?
What’s the SLA? How resilient is it when something goes wrong? How do you recover from errors? And who supports this if there are issues?
Will it scale in production? How can we track performance over time? Where’s the dashboard? I want latency metrics, accuracy over time, and alerts if it breaks.
What happens when clinical needs change? Can we adapt this system? Or will it be stuck in 2020 guidelines? What if I want it to do more?
The CISO
Last but not least - the Chief Information Security Officer (CISO). I’ve presented to dozens of CISOs of hospital systems during the last decade. This is typically the very serious, often gloomy, person in the room, who immediately thinks of security risks, compliance audits, and possible data breaches.
Here are some of the questions a CISO typically asks:
Where is the patient data going, and who can access it? Is PHI (Protected Health Information) being sent to a third-party or to another geo or region?
Is the system storing PHI, or is it stateless? If data is being saved, where is it stored, and is it used for future training of models?
What’s the vendor’s security posture? Are they SOC 2, HITRUST, or ISO 27001 certified? Do they even have MFA for their admin console? Can I bring my own encryption keys?
What regulatory obligations does this trigger? Are we now under AI Act scope? Does this fall under the FDA definition of Software as a Medical Device (SaMD)?
How do we prevent prompt injection or adversarial attacks? Could a malicious prompt compromise the system into doing something it shouldn't?
Is there auditability and logging? Can we track who queried what, when, and what the model said back? Can we investigate if something goes wrong?
What’s the recovery path? How do we mitigate if something goes wrong?
And many more questions.
Security is serious business, even more so in this era. And especially when it comes to healthcare. This deserves separate focus, so - follow for future blog posts.
What does it all mean?
Many questions. And those are just examples. But if you are building AI in Healthcare, you need to have answers.
The questions above are important ones. Many of them require depth to answer properly. Some answers would depend on the specific system. For some of them, there’s a lot of background context to understand.
And as part of our work, we constantly address those questions. Because if we want to create innovation that is impactful - meaning, translating innovation into systems that people can put into practice, systems that solve real problems - we have to meet our customers where their concerns actually are.
And that’s why we are all here. Because we aspire to make a real positive difference in the world. Even when it means pushing through pain and challenges.
Drop additional questions you often hear from healthcare stakeholders in the comments below. ⬇️
About Verge of Singularity.
About me: Real person. Opinions are my own. Blog posts are not generated by AI.
See more here.
LinkedIn: https://www.linkedin.com/in/hadas-bitran/
X: @hadasbitran
Instagram: @hadasbitran