Inception of AI Agents in Healthcare
About ambient clinical intelligence, what Inception is, and how copilot agents for patients differ from those intended for medical professionals.
At HIMSS this year, Microsoft announced Dragon Copilot, which was a proud moment for us - our team made major contributions to the underlying tech.
Microsoft Dragon Copilot is what we call Ambient Clinical Intelligence. It is an AI assistant for clinical workflows that - to put it in simple terms - listens to the medical encounter and generates a proposal for the clinical note, to be reviewed, modified, and approved by the doctor. And it does much more. You can read more about it here.
Microsoft Dragon Copilot is intended for medical professionals - specifically, to help them with clinical documentation.
Keep that in mind - nobody goes to medical school to write clinical documentation. People choose the medical professions because they want to take care of patients. Yet the burden of clinical documentation creates serious burnout for medical professionals, and that burnout is the number one cause of attrition, adding to the existing worldwide shortage of medical professionals.
This makes Microsoft Dragon Copilot no less than a game changer in the future of practicing medicine, if you ask me.
Microsoft Dragon Copilot includes advanced copilot agent capabilities. One of those is a conversational agent that allows the doctor to interact with it in natural language.
It is key to understand that copilot agents intended for medical professionals are different from copilot agents created for patients.
How do you make them different? Let’s talk about a concept named Inception.
What is Inception in the context of Generative AI?
In the film “Inception”, Leo DiCaprio’s character’s job was to sneak into people’s dreams to steal information. He was then tasked with planting an idea so deeply into the dream that the dreamer would believe it was their own. This might be where the industry borrowed the concept from.
Inception is a technique that plants a sense of role and audience into an AI agent. We’re telling the copilot agent how to behave and who it is talking to. This technique is sometimes called persona priming or user modeling, but I like the name “inception” because it effectively conveys the concept of planting the idea deeply into the agent.
So, how do you do it?
Some of the inception can be done in the System Prompt (sometimes referred to as the Context Prompt), which is the prompt behind the scenes that tells the model how to behave and helps it adapt its tone and, potentially, its content. The system prompt is different from the user prompts, which ask the specific questions.
Let’s look at an example. Say you wanted to create an agent that provides career counseling to teenagers.
Here’s what the inception part of the system prompt could look like:
You are a veteran career counselor with 20+ years of experience guiding high school students. You’ve worked with thousands of teens from diverse backgrounds, many of whom are confused, anxious, or overwhelmed by the idea of choosing a future path. You are empathetic, calm, and realistic—but you also know how to spark excitement and offer clarity.
The user is a teenager, around 16, who feels uncertain about the future. They might be afraid of making the wrong choice, disappointing others, or picking something that won’t make them happy. What they’re really looking for is reassurance, relatable examples, and the sense that it’s okay not to have it all figured out. They want practical ideas, not generic platitudes. They respond best to conversational, supportive language and subtle encouragement—not pressure.
Your job is to meet them where they are, help them feel seen, and guide them with curiosity, confidence, and care.
Then a user prompt could be:
“I like art and science, but I don’t really know what I can do with that. Also I’m not super confident, like I’m not a genius or anything.”
And the answer from the model could be:
“Totally get that—it’s a big question, and you definitely don’t have to have it all figured out right now. First off, you don’t need to be a “genius” to have a really cool, meaningful career. You like art and science? That’s actually an awesome mix.
There are careers where those two worlds meet….”
And so on - the dialog could continue back and forth over multiple turns of the conversation. You get the gist of it.
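To make the split between the system prompt and the user prompt concrete, here is a minimal sketch of how the career-counselor inception above could be wired into a chat-style API. I’m using the OpenAI Python SDK purely as an illustration (not how Dragon Copilot is built); the model name is a placeholder and the inception text is abbreviated - the same pattern of a system message carrying the inception and user messages carrying the questions applies to whichever framework you actually use.

```python
# Minimal sketch: the inception lives in the system message, the questions
# in the user messages. Model name and client setup are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The inception: who the agent is, and who it is talking to (abbreviated).
INCEPTION_SYSTEM_PROMPT = """\
You are a veteran career counselor with 20+ years of experience guiding
high school students. ...
The user is a teenager, around 16, who feels uncertain about the future. ...
Your job is to meet them where they are, help them feel seen, and guide
them with curiosity, confidence, and care.
"""

# The system prompt is sent once, up front; user prompts are the
# turn-by-turn questions.
messages = [
    {"role": "system", "content": INCEPTION_SYSTEM_PROMPT},
    {"role": "user", "content": "I like art and science, but I don't really "
                                "know what I can do with that."},
]

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=messages,
)
print(response.choices[0].message.content)

# For a multi-turn dialog, append the assistant reply and the next user
# message to `messages` and call the API again.
```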
Sounds easy, right? Well, not quite.
Copilot Agents in Healthcare
In a healthcare setting, copilot agents would behave differently depending on who they are intended for - a patient, a nurse, or a doctor - and they could even behave differently for medical professionals in different specialties.
But it’s not that simple. Adapting the copilot agent to the end user is not just about inception in the system prompt. It may also mean changing the sources the agent is grounded on and changing what it is allowed to do. Different agents might even need different safeguards.
If you are not familiar with the concept of grounding, read more about it and how it’s done with retrieval-augmented generation (RAG) in this previous blog post.
Copilot agents for medical professionals are different from copilot agents for patients.
Copilot agents for patients would typically be adapted to use patient-friendly language, simplifying the content to make it more accessible to patients or their family members, who may not fully understand medical jargon. These agents would likely be grounded on credible sources that are intended to be patient-facing. They are sometimes more administrative in nature, aiming to provide services in a self-serve way. And they would, by design, know that they are not intended to replace a medical professional, often saying so explicitly when responding to healthcare questions.
Copilot agents for medical professionals, on the other hand, would likely use and understand more professional language and medical jargon. They would be grounded on clinician-facing credible sources, sometimes even organization-specific ones, and would be able to refer to clinical guidelines and publications, for a start. And they typically would not send the end user off to consult their doctor.
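To illustrate that the inception text is only one of several knobs, here is a hedged sketch of what an audience-dependent agent configuration could look like. Every name in it (the AgentConfig class, the source lists, the action and safeguard labels) is hypothetical and purely illustrative; a real implementation - for example with the healthcare agent tooling mentioned below - would manage grounding sources, allowed actions, and safeguards through its own configuration model.

```python
# Hypothetical sketch: the same agent skeleton, configured differently per
# audience. Class and field names are illustrative, not a real API.
from dataclasses import dataclass


@dataclass
class AgentConfig:
    system_prompt: str            # inception: role + audience
    grounding_sources: list[str]  # corpora the agent retrieves from (RAG)
    allowed_actions: list[str]    # what the agent is permitted to do
    safeguards: list[str]         # audience-specific guardrails


PATIENT_AGENT = AgentConfig(
    system_prompt=(
        "You are a patient-facing assistant. Use plain, patient-friendly "
        "language, avoid unexplained medical jargon, and remind the user "
        "that you do not replace a medical professional."
    ),
    grounding_sources=["patient_education_library", "clinic_faq"],
    allowed_actions=["schedule_appointment", "explain_visit_summary"],
    safeguards=["recommend_consulting_a_clinician", "no_diagnosis_or_dosing_advice"],
)

CLINICIAN_AGENT = AgentConfig(
    system_prompt=(
        "You are an assistant for licensed clinicians. Use professional "
        "medical terminology and refer to clinical guidelines and "
        "publications where relevant."
    ),
    grounding_sources=["clinical_guidelines", "peer_reviewed_publications", "org_protocols"],
    allowed_actions=["summarize_encounter", "draft_clinical_note", "look_up_guideline"],
    safeguards=["cite_sources", "flag_low_confidence_answers_for_review"],
)
```

The point of the sketch is that the audience changes more than the wording of the system prompt: the grounding corpus, the permitted actions, and the safeguards all shift with it.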
What does this all mean? It means that creating copilot agents in healthcare is a subtle, sensitive task, and you should use tools that are designed specifically for healthcare, rather than generic frameworks. The Microsoft healthcare agent in Copilot Studio is such a tool. Read more about it here.