ChatGPT for UK doctors: safe use, confidentiality, and a GP-ready prompt pack


ChatGPT is now part of day‑to‑day clinical life for a meaningful minority of UK doctors—often quietly, and often without a shared governance playbook.

Recent UK analysis (RCGP/Nuffield Trust) found that 28% of GPs report using AI at work, with 11% using self‑provided tools such as ChatGPT (and a further proportion using practice‑provided tools). Many report using AI for documentation, administrative tasks, and professional development—but describe a “wild west” of inconsistent guidance and variable organisational support.

That reality creates an immediate question for UK clinicians:

How do I use ChatGPT safely, lawfully, and clinically sensibly—without creating avoidable confidentiality or governance risk?

This guide is designed to be practical. It covers:

  • why UK doctors use ChatGPT
  • what regulators and governance frameworks effectively expect
  • hard rules for confidentiality and safe use
  • a GP‑ready prompt pack (non‑identifiable, structure‑first)
  • a verification workflow you can actually follow in clinic
  • when you should use a guideline‑first tool (like iatroX) instead of asking ChatGPT to “summarise NICE”

Why UK doctors use ChatGPT (and what they use it for)

Most UK clinicians are not using ChatGPT to “diagnose patients”. They are using it to reduce friction in work that is otherwise repetitive, time‑consuming, or cognitively draining.

Common real‑world uses include:

1) Admin and correspondence

  • drafting non‑identifiable referral letter structures
  • patient information leaflet drafts
  • clinic templates and macros
  • coding/wording suggestions

2) Clinical reasoning support (as a thinking tool)

  • generating hypothesis lists (never as a final diagnosis)
  • prompting for red flags you might want to rule out
  • suggesting follow‑up questions to clarify a presentation

3) Education and CPD

  • structured explanations of concepts
  • scenario practice (“talk me through how you’d approach…”)
  • revision aids and memory hooks

Used well, it can feel like a fast, always‑available “thinking partner”. Used badly, it can become a confidentiality risk or a false‑certainty generator.


The clinical governance reality in the UK

A useful mindset is this:

ChatGPT is not “banned” as an idea, but your use of it must be defensible under data protection, confidentiality, and clinical governance.

In UK general practice, regulators and NHS guidance increasingly emphasise that new AI tools must be implemented with:

  • clear governance
  • risk management
  • appropriate training
  • transparency with patients where relevant
  • data protection and security assurance

Even when guidance is written for specific AI categories (e.g. ambient scribes), it sets the tone for what “responsible adoption” looks like across AI use: assign clinical safety leadership, perform a DPIA, document clinical risk controls (e.g. DCB0160), and manage hazards such as output errors or missing critical context.

Separately, CQC guidance for GP services makes the expectation explicit: practices should be able to demonstrate data protection assurances (e.g. DPIA, cybersecurity, DSPT), staff training, and transparent communication about the use of AI.

Practical takeaway: even if your organisation has no ChatGPT policy, you should behave as if a policy exists—because you may later need to justify your use.


Hard rules: what never to paste into ChatGPT

These are the rules that keep you safe most of the time.

Never paste identifiable patient information

That includes (non‑exhaustive):

  • names, DOB, NHS number, address, phone, email
  • specific dates/times linked to a person
  • unique clinical narratives that could re‑identify (rare diseases + exact locality + timeline)
  • screenshots of EMIS/SystmOne or clinic lists
  • copy‑pasted consultation text with identifiable detail
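
As a first-line screen, some of the more mechanical identifiers in the list above (NHS numbers, dates, postcodes, phone numbers, emails) can be caught with simple pattern matching before anything is pasted. The sketch below is illustrative only and is not a validated de‑identification tool: names, addresses, and unique clinical narratives will not be caught by patterns like these, so a human check is still essential.

```python
import re

# Illustrative patterns for a few obvious UK identifiers.
# NOT a validated de-identification tool: names, addresses and
# unique clinical narratives will NOT be caught by simple regexes.
IDENTIFIER_PATTERNS = {
    "NHS number": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{4}\b"),
    "date": re.compile(r"\b\d{1,2}[/-]\d{1,2}[/-]\d{2,4}\b"),
    "UK postcode": re.compile(r"\b[A-Z]{1,2}\d[A-Z\d]?\s*\d[A-Z]{2}\b", re.I),
    "phone number": re.compile(r"\b0\d{9,10}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def flag_identifiers(text: str) -> list[str]:
    """Return the labels of identifier patterns found in the text."""
    return [label for label, pat in IDENTIFIER_PATTERNS.items()
            if pat.search(text)]

# A synthetic example summary that would be flagged before pasting:
hits = flag_identifiers(
    "Seen 12/03/2024, NHS no 943 476 5919, lives at SW1A 1AA"
)
# hits -> ["NHS number", "date", "UK postcode"]
```

A non‑empty result means "stop and strip before pasting"; an empty result means only that the crude patterns found nothing, not that the text is safe.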

Never paste sensitive organisational information

  • internal incident reports
  • safeguarding case details
  • complaints / medico‑legal correspondence
  • disciplinary matters
  • internal policy documents that are confidential

Never use ChatGPT as the “source of truth” for patient care

Treat it as:

  • a drafting tool, or
  • a thinking aid.

It is never a clinical authority.

If you think you need to paste patient data, stop

That is the point you involve:

  • your practice manager / ICB guidance
  • IG lead / DPO
  • clinical safety officer (where applicable)

…and pursue a formal organisational approval route (typically including a DPIA, DSPT alignment, and contractual assurances for data processing).


“Is ChatGPT allowed in the NHS?” (the practical answer)

In day‑to‑day terms:

  • There is no single universal “yes/no” for all NHS settings.
  • What matters is how you use it and whether your organisation has done the relevant governance work for the specific tool and use case.

If you use it only for non‑identifiable drafting and learning support, your risk profile is much lower.

If you input patient data or use it for operational clinical decisions without governance, the risk profile becomes high.


Confidentiality: a simple, defensible approach

Use a three‑tier mindset:

Tier 1 (generally acceptable with common sense)

Non‑identifiable drafting and education

  • rewriting generic patient leaflets
  • drafting a template referral letter (no identifiers)
  • explaining a concept
  • generating a differential list “as hypotheses” for an anonymised scenario

Tier 2 (useful but requires stronger discipline)

Clinical thinking support on de‑identified scenarios

  • you provide minimal case facts
  • you avoid rare/unique details
  • you do not include dates/locations that allow re‑identification
  • you always verify against trusted sources

Tier 3 (do not do without formal governance)

Any real patient data / clinical documentation automation

  • copying consultations
  • generating final plans for a named patient
  • using ChatGPT as a de facto clinical system

If you stay in Tier 1–2 and keep a strong verification routine, you can get value without creating obvious confidentiality hazards.


The GP‑ready prompt pack (safe, non‑identifiable, structure‑first)

All prompts below are designed to work without patient identifiers.

Important: before pasting any clinical content, remove anything that could identify a person.

Prompt 1 — Differential generation (as hypotheses)

Use this when you want a broad hypothesis list, not a diagnosis.

You are helping a UK GP think through a case. Do NOT give a single diagnosis. Generate a ranked differential diagnosis list as hypotheses only.

Presentation (de‑identified):

  • Age range: [e.g., 40–50]
  • Sex: [M/F]
  • Key symptoms: [list]
  • Duration: [e.g., 2 weeks]
  • Relevant PMH: [generic]
  • Medications: [generic]
  • Red flags present/absent: [list]

Output requirements:

  1. Top 10 differentials with 1–2 reasons each
  2. 6–10 focused follow‑up questions that would change the ranking
  3. A small list of “don’t miss” diagnoses to actively rule out
  4. A short section: “What would make this urgent today?”

Prompt 2 — “Red flags to consider” checklist

Use this when you fear you might be missing urgency.

Create a UK GP red‑flag checklist for this presentation (de‑identified). Separate into:

  • Immediate emergency red flags
  • Same‑day assessment red flags
  • 2‑week‑wait / urgent referral red flags (generic)

Then add: what I should ask/examine first in a 10‑minute appointment.

Presentation: [brief de‑identified summary]

Prompt 3 — “NICE-style pathway summary request” (use with caution)

This is useful for structure, but you must treat it as a draft and verify.

Summarise the likely NICE/NHS-style approach for: [condition/topic].

Output format:

  • Stepwise pathway (first‑line → next‑line → escalation)
  • Thresholds that change management (where relevant)
  • Safety‑netting and review timing
  • Referral triggers
  • Key contraindications/cautions

Important constraints:

  • Do not invent numbers. If you are unsure, say “verify in NICE guidance”.
  • Prefer UK framing.

Better alternative for actual pathway work: use a guideline‑first tool with explicit provenance and a clear pathway structure.

Prompt 4 — Patient-friendly explanation (no identifiers)

Use this for clear communication drafts.

Write a patient-friendly explanation (UK English) for: [condition / test result / medication].

Requirements:

  • Reading age ~11–13
  • Use short paragraphs and headings
  • Include “when to seek urgent help” bullet points
  • Avoid absolute reassurance
  • No personalised advice; keep generic

Prompt 5 — Referral letter skeleton (template only)

Use this to create a clean structure, not the final patient letter.

Create a UK GP referral letter template for: [specialty/service].

Include headings for:

  • Reason for referral
  • Relevant history
  • Examination findings
  • Investigations and results
  • Current meds and allergies
  • Relevant PMH
  • Social context (generic)
  • What I’m asking the service to do

Do NOT include any patient identifiers.

Prompt 6 — “Turn my notes into a structured assessment” (for fictionalised notes only)

Only use this if your notes are de‑identified and non‑unique.

Convert the following de‑identified summary into a structured format:

  • Presenting complaint
  • History of presenting complaint
  • PMH / DH / Allergies
  • Exam
  • Investigations
  • Assessment (hypotheses)
  • Plan (options, not directives)
  • Safety-netting

Text: [de‑identified]


The verification workflow (two-pass, GP-friendly)

The most reliable way to use ChatGPT clinically is to treat it as a draft generator, then verify.

Pass 1: Generate

  • Ask for hypotheses, questions, structure, or a draft explanation.
  • Force uncertainty handling (“If unsure, say verify”).

Pass 2: Verify

Use one of these verification routes:

  • the primary guidance itself (e.g. NICE/NHS pages)
  • a guideline‑first tool with explicit provenance (such as iatroX)
  • local/ICB pathway documents and policies

Then reconcile:

  • Does the output match UK guidance?
  • Are thresholds/steps correct?
  • Are red flags missing?
  • Would local pathways change the plan?

If you cannot verify, treat it as untrusted.


When to use iatroX instead of ChatGPT

ChatGPT is most useful for:

  • drafting
  • simplifying text
  • generating hypothesis lists
  • prompting questions

But when the job is operational clinical pathway work, you should switch to a guideline‑first workflow.

Use iatroX when you need:

  • thresholds and escalation logic
  • stepwise pathways
  • rapid scan summaries for UK-style practice
  • a consistent clinician knowledge hub at point of need



A simple “safe use” checklist to keep in your head

Before you paste anything into ChatGPT:

  1. Is there any chance this could identify a patient? If yes, stop.
  2. Am I using this for drafting/learning, not decision delegation?
  3. Have I forced uncertainty handling and asked for red flags?
  4. Do I have a verification route ready? (guideline-first + local policy)
  5. Will I document my own reasoning, not the model’s reasoning?

If you can’t answer “yes” to 2–5, do not use it for that task.

