From “Dr Google” to “Dr ChatGPT”: how clinicians can prepare for AI-informed patients—without losing the human touch

Executive summary

The era of the patient arriving with a stack of printouts from a keyword search—the "Dr Google" phenomenon—is rapidly evolving. Today, patients are moving to conversational AI advisors like ChatGPT, creating the age of "Dr AI." This shift amplifies both patient knowledge and patient anxiety; AI can sound supremely confident even when it is dangerously wrong, and we know that health-information searches are already strongly linked to escalating worry, a condition often termed "cyberchondria" (PMC). As a result, UK clinicians will increasingly meet patients who are pre-informed, and sometimes significantly misinformed.

In this new landscape, UK professional and policy guidance is clear: while AI tools can be useful adjuncts, they do not replace sound clinical judgement. The core professional duties of effective communication, shared decision-making, and robust clinical governance remain paramount. This article provides a practical playbook for navigating this new reality (GMC UK, NHS Transformation Directorate, World Health Organization).

What’s changing: from search boxes to dialogue agents

The shift from "Dr Google" to "Dr ChatGPT" is more than just a technological upgrade; it changes the nature of the clinical conversation.

  • The Dr Google era was defined by fast, unfiltered search results. The documented link to health anxiety stems from the sheer volume of information and the user's difficulty in assessing its quality (PMC).
  • The Dr ChatGPT era is defined by natural-language answers delivered in a persuasive, authoritative style. Accuracy is highly variable, yet laypeople commonly ask these chatbots for direct medical advice (Deutsche Welle).

The bottom line is that patient expectations are rising. The consultation now often starts further along the reasoning path, but the evidence that path is built on can be shaky.

What the evidence says about AI advice quality (so far)

Research into the clinical utility of large language models (LLMs) shows a mixed but revealing picture. When used by physicians as a co-pilot, some studies show LLMs can improve diagnostic accuracy, while others find they hinder it, or add nothing, unless they are used under strict, clinician-led oversight (JAMA Network, UVA Health Newsroom, Stanford Medicine).

The workflows that consistently benefit today are those focused on administrative tasks: AI is proving highly effective at assisting with documentation, drafting patient letters, and summarising clinical options, provided the outputs are verified by a human clinician (PMC). The implication for practice is clear: instead of debating AI's "accuracy in the abstract," the goal is to build guard-railed use cases where AI safely augments the clinician.

Anticipating patient reactions: anxiety, confidence, and bias

  • Cyberchondria 2.0: An AI’s fluent, confident tone can heighten a patient's certainty about a potential diagnosis, dramatically amplifying their worry. Health literacy is a key factor that modulates this risk. Clinical teams must be equipped to recognise the patterns of this new, AI-amplified anxiety (PMC, BioMed Central).
  • Equity concerns: Generic models are often trained on limited datasets and may under-serve certain demographic groups. It is important to invite patients to share the sources of their information so you can co-review their quality and applicability to the individual in front of you (World Health Organization).

The clinician’s edge: empathy, context, and shared decisions

In an age of AI, your uniquely human skills are more valuable than ever. These are your durable advantages and are central to GMC guidance on good medical practice:

  • Relationship & trust: You know the patient, their history, and their context.
  • Nuance: You can read between the lines of what is said.
  • Priorities: You can help a patient navigate trade-offs between different treatment options.
  • Safety-netting: You know how to provide clear advice on what to do if things change.

Frame every AI-informed discussion around these skills. A powerful way to reassure a patient is to say: “That’s a really helpful starting point. AI can be great at helping us list the options; you and I will now decide together what fits best with your values and your specific situation.” (Language aligns with GMC’s seven principles on shared decision-making).

A conversation playbook for “Dr AI” consultations

  1. Open well: Start by inviting the patient to share their findings without judgement. Ask open questions: “What have you been reading or asking the AI? What part of that worries you the most?”
  2. Validate + calibrate: Acknowledge their effort to be informed. Invite them to show you screenshots or links. Quickly assess for any red-flag gaps or dangerous misinformation.
  3. Co-review sources: Use the patient's information as a bridge to trusted sources. Compare the AI's claims with a NICE CKS summary or other official guidance. Explain concepts like uncertainty and evidence quality in plain English.
  4. Agree actions & safety-netting: Clearly document the shared decision in the patient's record. Provide reputable aftercare resources (like NHS.uk pages) and establish a clear safety net. This supports your GMC duties on providing information, managing risk, and keeping good records (GMC UK).

Practical clinic assets to deploy this quarter

  • Patient-facing handout/website page: Create a simple guide titled “How to use AI health tools safely,” with tips, links to reputable UK sources, and clear advice on when to call 111 or 999.
  • Waiting-room signage: A simple poster that says, “Have you been researching your symptoms online? Bring your sources—let’s review them together.”
  • Team micro-training: Run a 30-minute session to role-play an AI-informed patient consultation and practise the playbook.
  • EHR template phrases: Create a standardised phrase for notes, such as: “Patient consulted an AI chatbot prior to appointment; we reviewed the information together against NICE guidance. Final agreed plan is [...]. Clear safety-net advice given.”

Safe, value-adding clinician use of AI (inside your workflow)

  • Documentation & admin: Follow NHS England guidance on AI-enabled scribes, always keeping human verification and patient consent as mandatory steps.
  • Rapid evidence Q&A (for you, not the patient): Use professional, UK-focused clinical tools with clear citations, like iatroX, Medwise, or AskTrip, to get rapid answers to your own questions. Always record the source of the information in your notes.
  • Governance & IG: Run a Data Protection Impact Assessment (DPIA) for any new tool and follow NHS AI information governance guidance. Crucially, never enter patient-identifiable data into consumer-grade chatbots.

Policy compass clinicians can cite in meetings

  • WHO guidance on LLMs in health (2025): Outlines the benefits, risks, and governance recommendations for generative AI.
  • GMC standards: The core principles of good communication, consent, and documentation are unchanged by AI and remain the anchor for your practice.
  • RCGP stance: The college views generative-AI literacy as an emerging core skill for GPs and has provided guidance on its ethical use in training and WPBA.
  • NHS England guidance: Provides practical guardrails for specific uses like ambient scribing and sets clear information-governance expectations.

Risks to watch—and mitigations

  • Over-trust of AI outputs: Always require sources. Compare AI suggestions with national/local guidance. Explain uncertainty to patients.
  • Scope creep: Do not let chatbots replace established crisis pathways or safeguarding assessments. These require direct human interaction.
  • Bias & equity: Monitor patient outcomes and feedback across different demographic groups to ensure AI tools are not causing harm to any specific population.
  • Data protection: Default to a "no patient-identifiable information" rule in any consumer tool. Follow NHS IG guidance meticulously.

Measuring success (simple KPIs)

  • Consultation quality: Track patient-reported understanding and shared-decision documentation rates.
  • Operations: Track the time spent correcting misinformation and re-attendance rates for the same concern.
  • Safety: Monitor escalation appropriateness and any incident reports linked to AI-informed patient decisions.

Future outlook (12–24 months)

Expect to see more patient-facing AI tools that come with medical-grade disclaimers and are grounded in traceable evidence. The clinicians who lean into the skills of curation, interpretation, and compassion will be the ones who sustain patient trust. Professional bodies will continue to expand AI competencies for clinicians, and local policies will increasingly mirror national guidance on safe deployment and information governance.

Closing call-to-action

The era of the AI-informed patient is here. The most effective response is not to resist it, but to engage with it professionally and safely. Adopt the conversation playbook, publish a patient handout on your practice website, and rehearse these new consultation styles as a team. Use AI where it reliably saves you time (in your own workflow for notes and summaries), but keep the uniquely human skills of tact, shared decision-making, and robust safety-netting at the very centre of your practice.

