Privacy, Accountability, and the ‘Two-Tab Rule’: A UK Clinician’s Practical Boundary Model for Health AI


The question every Practice Manager is currently fielding: "Can we use ChatGPT at work?"

With the launch of ChatGPT Health, the answer has shifted from a hard "No" to a nuanced "Yes, but..."

OpenAI has introduced a dedicated health workspace with enhanced privacy controls, but for UK clinicians, the rules of engagement are stricter than for the general public. We operate under GMC professional standards and NHS information governance requirements that do not care about "cool new features."

This guide provides a defensible, safe framework for using AI in 2026: The Two-Tab Rule.

Why privacy is the real story of ChatGPT Health

For years, the advice was simple: never put patient data into ChatGPT because it trains the model.

OpenAI has now bifurcated its product.

  • Standard ChatGPT: Conversations may be used to train OpenAI's foundation models (unless you opt out).
  • ChatGPT Health: A separate "walled garden." Data, files, and conversations here are not used to train models by default, and the space has additional "purpose-built encryption and isolation."

The Catch: This feature is not yet live in the UK. Even when it arrives, NHS policy generally prohibits inputting patient identifiable data (PID) into non-commissioned cloud platforms.

The Two-Tab Rule

To stay safe, mentally and technically separate your AI use into two distinct browser tabs. Never mix them.

Tab A: Consumer/Patient Tools (The "Empathy" Layer)

  • Tool: Standard ChatGPT / ChatGPT Health (when available).
  • Purpose: Understanding what the patient is seeing. Drafting generic letters.
  • Rule: Zero PID. Treat this tab as if it is a public noticeboard.
  • Use Case: "Write a letter to a housing officer explaining why mould exacerbates asthma, generally."

Tab B: Clinician-Grade Evidence Tools (The "Decision" Layer)

  • Tool: iatroX / BMJ Best Practice / NICE CKS.
  • Purpose: Clinical decision making.
  • Rule: Verification First. This tab must provide citations.
  • Use Case: "What is the step-up therapy for asthma in a 12-year-old per NICE NG80?"

The Goal: Keep "patient context" (Tab A) and "clinical verification" (Tab B) distinct. You use AI to generate text, but you use evidence tools to generate decisions.

Professional accountability (UK)

The GMC has updated its advice on "Innovative Technologies." The core principle remains: You are responsible for the output.

  • If an AI suggests a dosage and you prescribe it, the error is yours, not the AI's.
  • You must be competent to check the output. If you ask an AI to interpret an MRI and you cannot interpret it yourself, you are acting outside your competence.

Information governance reality check

NHS England guidance emphasises the "lawful and safe use of data."

  • Lawful: Do you have a legal basis to upload patient data to a US server? (Usually: No).
  • Safe: Is the tool transparent? (Generic LLMs are "black boxes"; they do not explain why they reached a conclusion).

The ICO (Information Commissioner's Office) reiterates that "fairness" and "transparency" are key. Using a patient's data to generate a referral letter without their knowledge—using a tool that might hallucinate—fails the fairness test.

The practical boundary table

Print this out for your practice meeting.

GREEN (Safe)
  • Drafting generic patient leaflets (e.g. "Explain gout").
  • Creating education summaries for yourself.
  • Drafting non-clinical admin (e.g. "Rota apology email").
  • iatroX queries (clinical questions without patient names).

AMBER (⚠️ Caution)
  • Summarising a de-identified history (e.g. "Summarise a 40M with chest pain...").
  • Risk: it is very hard to truly de-identify data; a postcode plus an age can be unique (see the sketch after this table).

RED (🛑 Stop)
  • Pasting a hospital letter containing a Name or NHS Number.
  • Asking for a diagnosis ("What does this rash look like?").
  • Uploading raw CSV data from practice audits to a public chatbot.
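
To make the Amber risk concrete, here is a minimal, illustrative sketch (Python, using an entirely invented toy dataset) of a crude k-anonymity count: if a postcode-and-age combination maps to exactly one record, a "de-identified" summary built from it may still be re-identifiable.

```python
from collections import Counter

# Invented toy records: names already removed, but full postcode and age
# remain as quasi-identifiers.
records = [
    {"postcode": "LS1 4DY", "age": 40, "note": "chest pain"},
    {"postcode": "LS1 4DY", "age": 62, "note": "gout flare"},
    {"postcode": "LS6 2QB", "age": 40, "note": "asthma review"},
    {"postcode": "LS6 2QB", "age": 40, "note": "eczema"},
]

# Crude k-anonymity count: how many records share each postcode + age pair?
groups = Counter((r["postcode"], r["age"]) for r in records)

for (postcode, age), k in sorted(groups.items()):
    flag = "UNIQUE: potentially re-identifiable" if k == 1 else f"k = {k}"
    print(f"{postcode}, age {age}: {flag}")
```

In this toy example, two of the four "anonymised" entries are already unique on postcode and age alone, which is why the Amber zone demands real caution rather than a quick find-and-replace on names.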

A “clinic-ready” mini policy

Adopt these 10 points to govern AI use in your practice tomorrow:

  1. No PID: Never enter a Name, NHS Number, or Address into a public AI tool (a rough screening sketch follows this list).
  2. Human in the Loop: AI drafts; Humans sign. You must read every word.
  3. Two-Tab Rule: Use separate tools for "generic drafting" and "clinical checking."
  4. Declaration: If AI heavily assists a document (e.g., a report), declare it.
  5. Verification: Never trust an AI citation until you have clicked the link and read the source; a plausible-looking reference that does not check out is "citation theatre."
  6. Accountability: The clinician whose login is used is responsible for the output.
  7. Consent: Do not record patient consults with AI scribes unless they are NHS-approved (e.g., Corti, Heidi) and the patient consents.
  8. Bias Check: Be aware AI can stereotype. Check outputs for equity.
  9. Training: Don't use AI for tasks you don't know how to do manually.
  10. Escalation: If an AI output looks dangerous, report it to the Practice Manager as a significant event (SE).
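
As a purely illustrative companion to point 1, here is a hedged sketch of a pre-submission PID screen. The regexes, helper names, and test string are hypothetical examples; the NHS number check uses the published Modulus 11 check-digit rule, but a screen like this can only catch obvious identifiers and never substitutes for reading the text yourself.

```python
import re

# Hypothetical pre-submission screen: flags obvious patient-identifiable data
# (PID) before text is pasted into a public AI tool. It cannot catch names,
# rare conditions, or free-text clues, so a "pass" never overrides judgement.

NHS_NUMBER = re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{4}\b")
UK_POSTCODE = re.compile(r"\b[A-Z]{1,2}\d[A-Z\d]?\s*\d[A-Z]{2}\b", re.IGNORECASE)

def nhs_check_digit_valid(candidate: str) -> bool:
    """Validate a 10-digit NHS number using its Modulus 11 check digit."""
    digits = [int(d) for d in re.sub(r"\D", "", candidate)]
    if len(digits) != 10:
        return False
    total = sum(d * w for d, w in zip(digits[:9], range(10, 1, -1)))
    check = 11 - (total % 11)
    if check == 11:
        check = 0
    return check != 10 and check == digits[9]

def screen_for_pid(text: str) -> list[str]:
    """Return a list of warnings; an empty list means nothing obvious was found."""
    warnings = []
    for match in NHS_NUMBER.finditer(text):
        if nhs_check_digit_valid(match.group()):
            warnings.append(f"Possible NHS number: {match.group()}")
    for match in UK_POSTCODE.finditer(text):
        warnings.append(f"Possible postcode: {match.group()}")
    return warnings

# Invented example input (the NHS number is the standard published test number).
print(screen_for_pid("Summarise: 40M, LS1 4DY, NHS no. 943 476 5919, chest pain."))
```

Even if a screen like this comes back clean, the combination rule from the Amber row still applies: free-text detail can identify a patient without any number or postcode at all.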

Where iatroX fits

The "Two-Tab Rule" works best when Tab B is fast. If checking the evidence takes 20 minutes, you won't do it.

iatroX is built to be your Tab B.

  • It is designed for Clinical Verification.
  • It does not require you to input patient data to get an answer.
  • It checks GMC/NICE/MHRA sources by default, not the open web.

Use generic AI to write the letter (Tab A). Use iatroX to check the dose (Tab B).

Summary

In UK practice, the safest model is separation of concerns: use AI to accelerate understanding, but anchor decisions in accountable sources and document that verification step.

