Introduction
Physician Associates (PAs), Advanced Nurse Practitioners (ANPs), and Advanced Clinical Practitioners (ACPs) are the engines of modern UK general practice. You handle high volumes of undifferentiated patients, often with complex comorbidities. In this high-pressure environment, AI tools offer a tempting promise: speed.
But speed without safety is dangerous. Using a general-purpose chatbot for clinical decisions introduces risks of hallucination and data breaches. This guide defines the "Green Zone" of safe AI use, explains why citation-first tools are the only professional choice, and provides a practical workflow for using platforms like iatroX and OpenEvidence to enhance, not replace, your clinical judgement.
The “Green Zone / Red Zone” for AI in primary care
To practise safely, you need to draw a hard line between low-risk support tasks and anything involving patient data or unverified clinical answers.
🟢 The Green Zone (Safe & Professional)
- Administrative Drafting: Using AI to draft (anonymised) referral letters, funding requests, or practice policies.
- General Learning: Asking for summaries of conditions or pharmacology (e.g., "Explain the mechanism of action of SGLT2 inhibitors").
- Patient Communication: Drafting plain-English explanations of conditions for patient leaflets (always reviewed by you).
🔴 The Red Zone (Unsafe & Unprofessional)
- Patient Identifiable Data (PID): Never, ever paste a patient's name, NHS number, or recognisable history into a public AI tool. This is a breach of UK GDPR. (A crude pre-paste screen for the most obvious identifier is sketched after this list.)
- "Black Box" Clinical Answers: Relying on an answer from a chatbot that does not provide a link to the source. If you can't verify it, you can't use it.
Why citations matter: “answer engines” vs “source engines”
The biggest risk in clinical AI is "hallucination"—when an AI confidently invents a fact.
- Answer Engines (e.g., ChatGPT): These predict the next word in a sentence. They are designed to be fluent, not factual. They can invent a drug dose that "sounds" right but is lethal.
- Source Engines (e.g., iatroX, OpenEvidence): These use Retrieval-Augmented Generation (RAG). They find a trusted document first (like a NICE guideline), read it, and then summarise it with a link (a minimal sketch of this retrieve-then-cite pattern follows below). They are designed to be accurate, not just chatty.
The Rule: If an AI tool cannot show you the primary source link (the "citation provenance"), it is not safe for clinical decision support.
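To make the distinction concrete, here is a toy sketch of the retrieve-then-cite pattern. The corpus, snippets, and function names are placeholders of my own, and the real platforms' internals are far more sophisticated; the point is simply that the answer is built from a retrieved trusted document and always carries its link.

```python
# Toy illustration of the retrieve-then-cite pattern behind "source engines".
# The guideline snippets and URLs here are placeholders, not real clinical content.
from dataclasses import dataclass

@dataclass
class Source:
    title: str
    url: str
    text: str

# Stand-in "walled garden" of trusted documents (placeholder content).
CORPUS = [
    Source("NICE NG84: Sore throat (acute)",
           "https://www.nice.org.uk/guidance/ng84",
           "Consider antibiotics when the FeverPAIN score is 4 or 5..."),
    Source("BNF: Phenoxymethylpenicillin",
           "https://bnf.nice.org.uk/drugs/phenoxymethylpenicillin/",
           "Dosing and cautions for phenoxymethylpenicillin..."),
]

def retrieve(query: str, corpus: list[Source]) -> Source:
    """Pick the document sharing the most words with the query (toy ranking)."""
    words = set(query.lower().split())
    return max(corpus, key=lambda s: len(words & set(s.text.lower().split())))

def answer_with_citation(query: str) -> str:
    """Summarise only the retrieved document and always attach its link."""
    doc = retrieve(query, CORPUS)
    return f"{doc.text}\n\nSource: {doc.title} ({doc.url})"

print(answer_with_citation("antibiotic choice for FeverPAIN score of 4"))
```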
Tool roles in a safe workflow
iatroX (UK-First Point-of-Care Retrieval)
- Role: Your daily driver for UK clinical practice.
- Why: It is built on a "walled garden" of UK-specific sources (NICE, CKS, SIGN, BNF). It understands the context of the NHS (e.g., "referral pathways," "red flags").
- Use for: Quick guideline checks, prescribing verification, and "what do I do now?" queries during a clinic.
OpenEvidence (Evidence Orientation)
- Role: Your research assistant for complex or unusual cases.
- Why: It excels at scanning the global peer-reviewed literature to answer nuanced questions where guidelines might be silent.
- Use for: Deep dives, "grey area" clinical questions, and understanding the evidence base behind a treatment.
A 3-step “cited answer” method
Don't just "ask AI." Use this professional workflow to ensure safety.
1. Ask: Input a structured, de-identified query.
   - Bad: "Treating sore throat."
   - Good: "What is the NICE CKS antibiotic choice for a penicillin-allergic adult with a FeverPAIN score of 4?"
2. Check Provenance: Before you even read the answer, look at the citations (see the provenance-check sketch after these steps).
   - Does it link to a real NICE/BNF page?
   - Is the date recent?
   - Click the link to verify the specific dose or threshold.
3. Apply + Document: Apply the guidance to your specific patient context. In your notes, document the primary source you verified, not the AI tool.
   - Note entry: "Plan based on NICE NG84 guidance: Phenoxymethylpenicillin prescribed..."
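Part of step 2 can be automated. The sketch below simply checks that each cited link resolves to a recognised UK guidance domain; the allowlist and function names are illustrative, not any tool's API, and this cannot replace clicking through to verify the actual dose, threshold, and publication date.

```python
# Rough sketch of an automated provenance check: before trusting a cited
# answer, confirm each link points at a recognised UK guidance domain.
# The allowlist below is illustrative, not exhaustive.
from urllib.parse import urlparse

TRUSTED_DOMAINS = {
    "www.nice.org.uk",   # NICE guidelines
    "cks.nice.org.uk",   # NICE Clinical Knowledge Summaries
    "bnf.nice.org.uk",   # British National Formulary
    "www.sign.ac.uk",    # SIGN guidelines
}

def is_trusted(citation_url: str) -> bool:
    """Return True only if the citation resolves to an allowlisted domain."""
    return urlparse(citation_url).netloc.lower() in TRUSTED_DOMAINS

citations = [
    "https://www.nice.org.uk/guidance/ng84",
    "https://random-health-blog.example.com/sore-throat",
]
for url in citations:
    print(url, "->", "verify and use" if is_trusted(url) else "do not rely on this")
```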
Prompt patterns that reduce hallucination risk
You can force the AI to be safer by how you ask; the patterns below are also sketched as reusable templates after the list.
- The "Constraint" Prompt: "Using only UK sources (NICE, BNF), outline the management of..."
- The "Quote" Prompt: "Find the specific section in the NICE guideline regarding driving with epilepsy and quote it."
- The "Negative" Prompt: "If there is no clear evidence for this intervention, please state that explicitly."
FAQ
Can I get in trouble for using AI? If you use it recklessly (e.g., uploading patient data or acting on unverified answers), yes. If you use it as a search tool to find and verify official guidance, it is a legitimate professional aid.
Is OpenEvidence free? OpenEvidence is generally free for verified healthcare professionals (requires registration).
Why not just use Google? Google results are shaped by SEO and advertising, surfacing patient forums, US health sites, and commercial pages. Tools like iatroX prioritise clinical relevance and UK national guidance, saving you from filtering out irrelevant information.
