The headline is irresistible. Every few months, a new study shows an AI model matching or exceeding doctors on a medical exam, and the same prediction follows: AI will replace GPs. It makes for good copy. It makes for anxious dinner-party conversations among trainees. And it is, on the evidence available in 2026, wrong — not because AI is unimpressive, but because the question misunderstands what a GP actually does.
This article takes the question seriously and answers it with evidence rather than opinion.
What AI Can Now Do in General Practice
The capabilities are real and should not be understated.
AI can transcribe and document consultations. Ambient scribing tools like Heidi and TORTUS listen to consultations and generate structured clinical notes. NHS England's evaluation suggests these tools save two to three minutes per consultation and increase direct patient interaction time. This is genuine, measurable value.
AI can triage and route patient requests. Online consultation tools, AI receptionists, and digital triage systems can capture patient intent, assess urgency, and route requests to the appropriate clinician or pathway. They can handle routine administrative queries without human involvement.
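To make the shape of this concrete, here is a minimal sketch of the routing logic such a front-door system might contain. Every class, field, and keyword below is a hypothetical illustration rather than any vendor's actual implementation; the point is structural: the software assigns queues and suggests priorities, while anything clinical still lands in front of a clinician.

```python
from dataclasses import dataclass
from enum import Enum

class Route(Enum):
    """Hypothetical destinations for an incoming patient request."""
    SIGNPOST_EMERGENCY = "signpost to 999/A&E"
    DUTY_GP_TODAY = "duty GP queue (same day)"
    ROUTINE_QUEUE = "routine clinician queue"
    ADMIN_NO_CLINICIAN = "admin team (no clinician needed)"

@dataclass
class PatientRequest:
    free_text: str
    red_flag_terms: list[str]   # e.g. keyword hits from a screening pass
    admin_only: bool            # sick notes, repeat prescriptions, etc.

def route_request(req: PatientRequest) -> Route:
    """Assign a queue; clinical decisions stay with the reviewing GP."""
    if req.red_flag_terms:
        return Route.SIGNPOST_EMERGENCY
    if req.admin_only:
        return Route.ADMIN_NO_CLINICIAN  # handled without clinician time
    urgent_wording = any(w in req.free_text.lower() for w in ("today", "worse"))
    return Route.DUTY_GP_TODAY if urgent_wording else Route.ROUTINE_QUEUE
```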
AI can retrieve clinical guidelines rapidly. Tools like iatroX provide citation-first answers to clinical questions in seconds, grounded in NICE, CKS, SIGN, and BNF content. This accelerates the guideline-checking that GPs do dozens of times per day.
AI can match or exceed human performance on standardised medical exams. Multiple studies have shown large language models achieving passing or high scores on USMLE, UKMLA-equivalent, and other medical assessments. This is technically impressive, though what it demonstrates is pattern-matching on structured questions rather than clinical practice.
AI can identify patterns in imaging and diagnostics. In radiology, dermatology, pathology, and ophthalmology, AI systems have demonstrated performance comparable to or exceeding specialists in specific, well-defined diagnostic tasks.
What AI Cannot Do — and Why It Matters for General Practice
The capabilities that AI lacks are precisely the ones that define general practice.
AI cannot manage uncertainty. General practice is the medicine of undifferentiated presentations. A patient says "I feel tired." The differential spans anaemia, depression, diabetes, hypothyroidism, cancer, domestic abuse, and "life is hard." The GP's job is not simply to name a diagnosis; it is to navigate uncertainty: deciding what to investigate, what to watch, what to safety-net, and what to explore further. This requires judgement under ambiguity, precisely the kind of cognitive work at which current AI remains weakest.
AI cannot build therapeutic relationships. For patients with chronic conditions, mental health problems, or complex social circumstances, the GP relationship is itself a therapeutic intervention. Continuity of care — seeing the same doctor who knows your history, your family, your fears — has measurable health outcomes. AI cannot provide continuity. It does not remember you in any clinically meaningful sense.
AI cannot integrate social context with clinical reasoning. The patient's housing situation, employment status, caring responsibilities, family dynamics, cultural background, and personal values all influence what the right clinical decision is. A GP who knows that a patient is a single parent working night shifts will manage their diabetes differently from an algorithm that sees only the HbA1c value.
AI cannot perform physical examinations. Despite advances in remote monitoring, the physical examination remains a core GP skill — palpating an abdomen, listening to a chest, examining a rash, checking a joint. These require human hands and clinical interpretation in real time.
AI cannot navigate the system on behalf of patients. GPs coordinate care across multiple services: secondary care, community services, social care, mental health services, palliative care. They advocate for patients, chase referrals, write supporting letters, and make the NHS work for individuals. This navigational and advocacy role has no AI equivalent.
AI cannot exercise professional accountability. When a clinical decision goes wrong, someone is accountable. That someone has a GMC registration number, a professional duty of care, and a relationship with the patient. AI has none of these. Professional accountability is not a technical feature — it is a social contract.
What the Research Actually Shows
The evidence is more nuanced than the headlines suggest.
A large-scale evaluation of medical hallucinations in foundation models, covering eleven models across multiple clinical reasoning tasks, found that even models developed specifically for medical use remained vulnerable to domain-specific hallucinations: errors arising from reasoning failures rather than knowledge gaps. A global survey of clinicians within the same study found that over 90% had encountered medical hallucinations from AI, and approximately 85% considered them capable of causing patient harm.
A study published in Nature Communications Medicine showed that when clinical vignettes contained a single planted error (a fake lab value, sign, or disease), leading LLMs repeated or elaborated on the planted error in up to 83% of cases. A mitigation prompt halved the rate but did not eliminate it.
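For illustration, a mitigation instruction of the general kind the study describes might look like the sketch below. The wording and the helper function are this article's own assumptions, not the paper's actual prompt or code; the pattern is simply to tell the model to check each clinical detail before reasoning from it.

```python
# Illustrative wording only -- not the study's actual mitigation prompt.
MITIGATION_PROMPT = (
    "Before answering, verify every lab value, physical sign, and named "
    "condition in the vignette against established medicine. If any detail "
    "does not correspond to a real test, finding, or disease, flag it "
    "explicitly and do not build your reasoning on it."
)

def build_messages(vignette: str) -> list[dict]:
    """Prepend the guard instruction to a clinical vignette (chat format)."""
    return [
        {"role": "system", "content": MITIGATION_PROMPT},
        {"role": "user", "content": vignette},
    ]
```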
These findings do not mean AI is useless. They mean AI is unreliable when operating autonomously on complex clinical reasoning — which is exactly what general practice demands.
Meanwhile, the evidence for AI augmenting clinicians is strong and growing. Documentation tools demonstrably save time. Guideline retrieval tools demonstrably improve access to evidence. Triage tools demonstrably improve front-door efficiency. The pattern is consistent: AI works best as a support layer, not as a replacement.
The Augmentation Model
The future of AI in general practice is not replacement. It is augmentation — AI handling the tasks it does well so that GPs can focus on the tasks that require human judgement, empathy, and accountability.
A well-augmented GP in 2026 might work like this: an AI receptionist captures the patient's request by phone. An online consultation tool structures the digital demand. A triage algorithm suggests priority. The GP reviews, decides, and consults. An ambient scribe documents the consultation. A clinical knowledge tool like iatroX provides instant guideline verification when a clinical question arises. The GP writes the prescription, safety-nets the patient, and arranges follow-up.
In this model, AI has saved the GP time at every step. But the GP has made every clinical decision. The GP has maintained the relationship. The GP has exercised the judgement. The GP is accountable.
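As a schematic, that division of labour can be expressed as a human-in-the-loop pipeline. Every function below is a stub invented for illustration, not a real product's API; what matters is the control flow: the AI layer drafts and suggests, and only the GP commits a clinical decision.

```python
from dataclasses import dataclass, field

@dataclass
class Encounter:
    request: str
    suggested_priority: str = "routine"
    draft_note: str = ""
    decisions: list[str] = field(default_factory=list)  # human-authored only

def ai_suggest_priority(enc: Encounter) -> None:
    # Stub for a triage algorithm: it proposes, it never books or prescribes.
    enc.suggested_priority = "same day" if "chest pain" in enc.request else "routine"

def ai_draft_note(enc: Encounter, transcript: str) -> None:
    # Stub for an ambient scribe: output is a draft, never a signed record.
    enc.draft_note = f"DRAFT (unsigned): {transcript}"

def gp_decide(enc: Encounter, action: str) -> None:
    # The only function that records a decision is the human one.
    enc.decisions.append(action)

enc = Encounter(request="chest pain on exertion for two weeks")
ai_suggest_priority(enc)
gp_decide(enc, f"accepted priority: {enc.suggested_priority}")
ai_draft_note(enc, "Patient reports exertional chest pain...")
gp_decide(enc, "note reviewed, edited, and signed; referral decision made")
```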
This is what iatroX is designed to be: a clinical knowledge layer that makes GPs faster and more accurate, not a system that replaces their reasoning. Its citation-first architecture ensures that the GP sees the evidence, verifies the source, and makes the decision — with AI accelerating the process, not substituting for it. The Brainstorm tool supports structured clinical reasoning. The Q-Bank helps GPs maintain and expand their knowledge through spaced repetition. These are augmentation tools, not automation tools.
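One way to picture what "citation-first" means at the data level is a response object that cannot exist without its sources. The schema below is a hypothetical illustration, not iatroX's actual API; it encodes the constraint that every answer carries verifiable citations the clinician can check.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Citation:
    source: str    # e.g. "NICE NG136" -- illustrative identifier
    section: str
    url: str

@dataclass(frozen=True)
class Answer:
    text: str
    citations: tuple[Citation, ...]

    def __post_init__(self) -> None:
        # Enforce the citation-first constraint at construction time.
        if not self.citations:
            raise ValueError("citation-first: an answer must cite its sources")
```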
What Patients Actually Want
Patient surveys consistently show that what patients value most in general practice is not speed or technical accuracy alone. It is being listened to, being understood, having their concerns taken seriously, and having a doctor who knows them. These are relational qualities that AI cannot provide.
Patients may accept AI for routine administrative tasks — booking appointments, ordering prescriptions, checking test results. But for the consultation itself — the conversation about their symptoms, their fears, their options — the evidence consistently shows that patients want a human doctor.
This does not mean patients reject AI. It means they want AI in the right place: handling the friction so the GP can focus on the care.
The Honest Answer
Will AI replace GPs? No. Not because AI is incapable, but because general practice is not a set of tasks that can be decomposed into automatable components. It is a relational, contextual, uncertainty-managing discipline that requires exactly the capabilities AI lacks: judgement under ambiguity, therapeutic relationships, social integration, physical presence, and professional accountability.
What AI will do — and is already doing — is augment GPs. It will make them faster, better informed, less burdened by administrative tasks, and more able to focus on the work that only a human clinician can do.
The GPs who thrive in this environment will be the ones who use AI tools wisely: for documentation, for guideline retrieval, for learning, and for workflow efficiency. Tools like iatroX are built for exactly this model — augmenting clinical knowledge and supporting professional development, while keeping the GP at the centre of every decision.
The question is not whether AI will replace GPs. It is whether GPs will use AI well. The evidence says the opportunity is enormous. The responsibility, as always, is human.
