Introduction
You are in the middle of a consultation, and you turn to your computer to check a guideline using an AI tool. The patient watches you, silent. What are they thinking?
In 2025, patients are aware of AI, but their understanding is often shaped by sensationalist headlines. When they see a doctor using technology, they may worry: Are they listening to me? Is the computer making the decision? Is my data safe?
Building trust in the age of AI requires a new communication skill: proactive transparency. This guide provides a practical script library and a framework for explaining your use of AI tools like iatroX to patients, turning a potential source of suspicion into a signal of safety and diligence.
The 3 patient fears
To explain AI well, you must first understand what patients are afraid of. Research and public engagement work in the UK highlight three core anxieties:
- The "Replacement" Fear: “The doctor doesn't know the answer, so they are asking a robot.” Patients worry that reliance on AI signals incompetence or a lack of personal care.
- The "Privacy" Fear: “Is my personal story being sent to a tech company?” With high-profile data breaches in the news, patients are rightfully protective of their confidential health information.
- The "Mistake" Fear: “What if the computer is wrong?” Patients understand that technology can glitch. They fear an algorithmic error will lead to a missed diagnosis or a wrong prescription.
A clinician script library (short, usable phrasing)
Don’t over-explain the technology. Focus on the benefit to the patient. Here are short scripts you can adapt:
Scenario 1: Using an AI Scribe (e.g., Accurx/Heidi)
“I want to focus completely on listening to you today, rather than typing. I use a secure tool that writes up my notes for me while we talk. It means I can look at you, not the keyboard. Is that okay with you?”
Scenario 2: Checking a Guideline (e.g., iatroX/OpenEvidence)
“I’m going to take a moment to check the very latest national guidance on this. I want to be absolutely sure we are using the most up-to-date evidence for your treatment.”
Scenario 3: Brainstorming a Differential (e.g., Glass/iatroX)
“You have a complex set of symptoms. I’m going to use a specialist tool to double-check I haven’t missed any rare possibilities. It helps me be as thorough as possible.”
When to disclose and how to frame responsibility
Disclosure builds trust. If you are using a tool that processes patient data (like a scribe), you must disclose it and get consent. If you are using a tool to check a guideline, disclosure is good practice because it frames you as diligent rather than distracted.
The Golden Rule of Framing: Always position the AI as a subordinate tool, like a stethoscope or a reference book.
- Good Framing: "I am using this tool to check my decision." (You are in charge).
- Bad Framing: "The computer says we should do X." (You have abdicated responsibility).
What not to say (overpromising)
Avoid language that attributes human qualities or infallibility to the AI.
- Don't say: "The AI is really smart, it knows everything." (It doesn't; it predicts text).
- Don't say: "This tool will diagnose you." (It won't; only a clinician can diagnose).
- Don't say: "It saves me time." (Patients don't care about your time; they care about their care. Instead say: "It gives me more time to focus on you.")
Where iatroX fits (trust via citations)
Tools like iatroX offer a unique advantage in this conversation because they are "provenance-first."
When a patient asks, "How do you know the computer is right?", you can show them.
“This tool doesn't just give me an answer; it shows me the exact page from the official NHS or NICE guidelines. It’s like having the medical library in my pocket, but faster.”
This transparency, the ability to click a link and see the NICE CKS or BNF source, is a powerful way to build trust. It moves the conversation from "trusting a black box" to "trusting the evidence."
FAQ
Do I need written consent to use an AI scribe? Follow your local Trust or ICB policy. Generally, verbal consent that is documented in the notes is the minimum standard, but some organisations require specific patient information leaflets or posters in the waiting room.
What if a patient refuses AI use? Respect it immediately. Revert to manual note-taking or standard resources. Trust is more important than efficiency.
How do I explain hallucinations? If a patient asks about AI making mistakes, be honest: "Yes, general AI can make mistakes. That is why I only use specialist medical tools that show their sources, and why I verify everything myself before we make a decision."
