Artificial intelligence is no longer a future concept; it's arriving on wards, in clinics, and in primary care. Tools that draft discharge summaries, ambient scribes that listen to consultations, and new triage assistants are being deployed now.
1) The problem we are solving
For frontline staff, including junior doctors, nurses, and allied health professionals (AHPs), the arrival of these tools creates an immediate, practical problem: which tools are safe to use, what is safe to say in front of an ambient scribe, and what is safe to paste into a chat window?
The challenge is twofold. On one hand, NHS England has issued specific 2025 guidance on AI-enabled ambient scribes, highlighting the risks of unassured tools and the need for clear adoption principles. On the other, the World Health Organization (WHO) warns that large multimodal models (LMMs) can "hallucinate", producing confident-sounding but incorrect output.
This guide provides a simple framework for using these new tools responsibly, keeping you, your registration, and your patients safe.
2) The four pillars of responsible clinical AI (UK lens)
Your professional judgement remains the most important part of the clinical process. When using AI, that judgement must be applied through four key pillars of responsibility.
- Lawful & fair data use: All AI use must comply with UK GDPR and the Data Protection Act 2018. The Information Commissioner's Office (ICO) is clear: patient-identifiable data must only be processed by approved tools that have a clear, lawful basis and a completed Data Protection Impact Assessment (DPIA).
- Clinical safety & oversight: You must remain the "human-in-the-loop." NHS England's 2025 ambient scribe guidance states that the clinician is responsible for verifying all AI-generated output. You are signing off on the note, not the AI.
- Provenance-first answers: To counter the risk of hallucination, you must prioritise tools that show their work. This means using AI that is "grounded" in a gated knowledge base (e.g., iatroX) and clearly cites its sources, such as NICE, SIGN, or the BNF.
- Accountability & audit: You must be able to justify your actions. This means documenting that an AI tool was used. A simple note in the clinical record, such as "AI-drafted, clinician-verified," enables audit, supports training, and maintains a clear line of accountability, as required by the GMC.
3) What to use (and what not to use)
The line between safe and unsafe use is defined by your trust's approval process.
Use:
- NHSE-approved or locally procured AI scribes that have a completed DPIA and clinical safety case.
- Retrieval-Augmented Generation (RAG) tools designed for clinical lookup that cite UK sources (e.g., iatroX).
- Any NHS-hosted AI service that is listed in the official NHS AI Knowledge Repository.
Don't use:
- Public-facing consumer chatbots (such as ChatGPT or Gemini) for any live patient data, even if you think it has been anonymised.
- Unvetted "helpful" apps that lack clear Information Governance (IG), a DPIA, or evidence of a clinical safety file (DCB0129/0160).
4) A simple “SAFE” framework for frontline staff
Before you use any AI tool, run it through this simple four-step check.
- S — Source it: Does the tool show you where the answer came from? If it's a clinical knowledge tool, does it cite NICE, CKS, or another UK-based source? If not, you must verify the output manually.
- A — Anonymise: Are you on an approved, IG-cleared system? If not, you must remove all patient-identifiable data. This includes names, DOB, and NHS numbers, but also rare disease identifiers or complex histories that could lead to identification. The ICO stresses "data minimisation" as a core principle (a minimal illustrative check is sketched after this list).
- F — Final clinician check: The AI only ever drafts; the clinician signs. This is a mandatory requirement in NHS England's ambient scribe guidance. Read every word. Check for subtle errors, omissions, or "automation bias", where you are tempted to over-trust the output.
- E — Enter in record: Copy and paste the checked text into the EPR. Add a simple note like “AI-drafted, clinician-verified by [Your Name/Role]” to maintain a clear audit trail.
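To make the "Anonymise" step concrete, here is a minimal Python sketch of the kind of automated pre-check an approved documentation pipeline might run before free text leaves a clinical system: it flags obvious identifiers and blocks the paste rather than attempting redaction. The patterns and the flag_identifiers() helper are hypothetical illustrations, not a validated de-identification tool, and they are no substitute for an approved, IG-cleared system.

```python
import re

# Illustrative patterns only: real de-identification needs a validated,
# trust-approved tool, and indirect identifiers (rare diagnoses, unusual
# occupations, complex histories) will not be caught this way.
PATTERNS = {
    "NHS number": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{4}\b"),
    "date (e.g. DOB)": re.compile(r"\b\d{1,2}[/.-]\d{1,2}[/.-]\d{2,4}\b"),
    "UK postcode": re.compile(r"\b[A-Z]{1,2}\d[A-Z\d]?\s*\d[A-Z]{2}\b", re.IGNORECASE),
}

def flag_identifiers(text: str) -> list[str]:
    """Return the identifier types detected in a piece of free text."""
    return [label for label, pattern in PATTERNS.items() if pattern.search(text)]

note = "Seen on ward 7. DOB 12/03/1954, NHS number 943 476 5919, lives near LS1 4AP."
found = flag_identifiers(note)
if found:
    print(f"Do not paste: possible identifiers detected ({', '.join(found)}).")
else:
    print("No obvious identifiers detected; still apply clinical judgement and local IG policy.")
```

Pattern matching will always miss indirect identifiers, which is exactly why the safe default is an approved system rather than ad-hoc redaction.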
5) Role-specific guidance
Your professional standards still apply, regardless of the technology you use.
- For junior/resident doctors: Use approved AI to draft referral letters, structure differential diagnoses, or quickly pull up NICE thresholds. However, the GMC's 2024 Good Medical Practice standards are clear: you are responsible for all decisions. Never delegate final diagnosis or any prescribing decision to an AI. Always document your edits to its suggestions.
- For nurses (ward/community): Approved tools are safe for drafting care plan wording, structuring patient discharge education, or helping to summarise documentation, provided all data is checked first. The Royal College of Nursing (RCN) states that AI must "enhance personal nursing care, not replace it." Always check the output against your local trust policies and your own clinical judgement.
- For AHPs (physio/OT/SLT): AI is ideal for creating note templates or translating complex concepts into patient-friendly explanations. However, the Allied Health Professions Federation (AHPF) notes that accountability "rests with" the AHP. Do not upload patient recordings or images unless the AI product is explicitly named on your trust’s approved list and DPIA.
6) Why provenance-first tools matter (iatroX example)
Generic AI tools invent answers based on broad internet data. In healthcare, this is dangerous.
A "provenance-first" tool like iatroX operates differently. It sits on a carefully gated UK knowledge base and uses algorithmic search combined with Retrieval-Augmented Generation (RAG).
This means its outputs are not "invented"; they are grounded in trusted UK sources like NICE, CKS, SIGN, and the BNF. This design directly addresses the WHO’s call for traceable sources and the NHS IG requirement to know where clinical content comes from.
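To make the pattern concrete, the sketch below shows retrieval-augmented generation over a tiny, gated corpus. It is illustrative only: the guideline snippets, the retrieve() scoring, and the absence of a real language-model call are simplifications, and it does not describe iatroX's actual implementation. The point is architectural: the answer can only be assembled from the gated corpus, and every passage carries its source.

```python
import re

# Minimal sketch of retrieval-augmented generation (RAG) over a gated corpus.
# The guideline snippets, the retrieval scoring, and the citation format are
# illustrative placeholders, not any real product's pipeline.

GUIDELINE_CORPUS = [
    {"source": "NICE NG136", "text": "Confirm a diagnosis of hypertension with ambulatory blood pressure monitoring."},
    {"source": "CKS: Hypertension", "text": "Offer lifestyle advice to adults with suspected or diagnosed hypertension."},
]

def tokens(text: str) -> set[str]:
    """Lowercase word tokens with punctuation stripped."""
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(question: str, corpus: list[dict], k: int = 2) -> list[dict]:
    """Rank passages by naive keyword overlap with the question."""
    q = tokens(question)
    return sorted(corpus, key=lambda doc: len(q & tokens(doc["text"])), reverse=True)[:k]

def answer_with_citations(question: str) -> str:
    """Return retrieved passages with their sources. A real system would pass
    this context to a language model constrained to answer only from it."""
    passages = retrieve(question, GUIDELINE_CORPUS)
    context = "\n".join(f"[{p['source']}] {p['text']}" for p in passages)
    return f"Question: {question}\nGrounded context:\n{context}"

print(answer_with_citations("How is hypertension diagnosed in adults?"))
```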
For a junior clinician, the workflow becomes safe:
- Ask the AI a clinical question.
- Get an answer clearly cited back to a specific UK guideline.
- Copy that cited text into your note, applying your clinical judgement.
- Stay inside your scope of practice, backed by an auditable source.
7) Training & upskilling pathway
- 15-minute induction: All staff need a mandatory induction covering what AI is, what data you may and may not enter, and where to find the trust's list of locally approved tools.
- Prompt hygiene: Clinicians need to learn structured prompts (e.g., "Context: [acute ward], Patient: [anonymised summary], Task: [draft a discharge summary], Constraint: [cite all sources]"). This reduces low-quality, generic outputs; a worked example follows this list.
- Quarterly refresh: Check your trust's intranet and the NHS AI Knowledge Repository for newly assured tools and updated NHSE guidance.
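As an illustration of the structured-prompt habit above, the sketch below assembles a prompt from named fields. The template wording and the build_prompt() helper are hypothetical examples rather than any mandated NHS format, and anything built this way still belongs only in a trust-approved tool, with anonymised input and a clinician check on the output.

```python
# Illustrative prompt template for "prompt hygiene". The fields and wording
# are examples, not a mandated NHS format.

PROMPT_TEMPLATE = (
    "Context: {context}\n"
    "Patient: {patient_summary}\n"
    "Task: {task}\n"
    "Constraint: {constraint}"
)

def build_prompt(context: str, patient_summary: str, task: str, constraint: str) -> str:
    """Assemble a structured prompt so the request is specific and auditable."""
    return PROMPT_TEMPLATE.format(
        context=context,
        patient_summary=patient_summary,  # anonymised summary only
        task=task,
        constraint=constraint,
    )

prompt = build_prompt(
    context="acute medical ward",
    patient_summary="78-year-old admitted with community-acquired pneumonia, now improving",
    task="draft a discharge summary for the GP",
    constraint="cite the UK guideline behind every clinical recommendation",
)
print(prompt)
```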
8) Risks to flag early
- Data leakage: The biggest risk. If staff paste patient-identifiable data into non-NHS, unapproved tools, it is a serious data breach. The ICO has flagged this as a key concern.
- Over-trust (automation bias): It is easy to "glance and approve" an AI-drafted note, especially when busy. This is how errors get into the patient record. NHS England's guidance is clear that staff must review every draft.
- Out-of-date models: The AI may have been trained on old data. Always check the model/update dates and prioritise tools that use live, RAG-based connections to current guidance.
9) Calls to action
- For trusts/ICBs: Surface an explicit, easy-to-find “Approved AI Tools” list on the intranet. Link it directly to the national NHS AI Knowledge Repository.
- For educators: Add a one-hour “Responsible AI in Practice” session to all foundation year, nurse preceptorship, and AHP induction programmes, anchored to the new WHO and NHSE guidance.
- For clinicians: Make responsible practice your default. Always ask "Is this tool approved?" and "Where does this answer come from?" Default to provenance-first, UK-gated tools (like iatroX), and document your human verification every time.
