The EHR knows the patient. The evidence tool knows the medicine. The future of UK primary care AI is not one replacing the other — it is the safe joining of patient context with evidence-grounded clinical reasoning. But how that joining happens matters enormously for clinical safety, transparency, and trust.
The Convergence Is Real
Doctolib's acquisition of Medicus — combined with its prior acquisitions of Aaron.ai (AI telephone reception), Typeless (speech-to-structured text), and Siilo (clinical messaging) — points toward an integrated clinical operating system where AI is not an add-on but a native capability.
The pattern is global. In the US, OpenEvidence is embedded in Mount Sinai's Epic system, placing AI evidence retrieval inside the physician's primary workflow. UpToDate is integrated into Microsoft Dragon Copilot, reaching 600,000+ clinical documentation users. Tandem Health is embedded in Doctor Care Anywhere's virtual care platform and in Italy's Humanitas. The AI answer is moving from the separate browser tab into the clinical operating environment.
For UK general practice, this convergence means that AI-assisted documentation, evidence retrieval, clinical coding, and decision support may increasingly appear inside the GP clinical system — surfaced by the EHR vendor as part of the workflow, rather than accessed by the clinician through a separate tool.
Why This Matters for Clinical Trust
When a clinician opens a separate clinical AI tool — Ask iatroX, OpenEvidence, ChatGPT — they make a conscious decision to consult an AI system. They ask a specific question. They receive a specific answer. They evaluate the citation. They decide whether to trust it. The boundary between the AI's output and the clinical decision is visible.
When AI is embedded natively in the EHR, that boundary can become invisible. A clinical suggestion appearing alongside the patient record may look like a system feature rather than an AI-generated output. A coding suggestion may be accepted without the clinician consciously evaluating its source. A prescribing prompt may be followed because it appears in the workflow, not because the clinician independently verified its clinical basis.
This is not inherently dangerous — EHR-native AI may be more accurate and more useful than standalone tools because it has access to the patient's full clinical context. But it requires robust transparency about where AI-generated outputs appear in the workflow, what sources they are derived from, how confident the system is in its output, and how the clinician can verify the basis for any suggestion.
The Transparency Requirements
For EHR-native AI to earn clinical trust, five transparency requirements should be met; a minimal sketch of how they might be encoded follows below.
Source visibility. Every AI-generated clinical suggestion should show its source — the specific guideline, formulary entry, or evidence base that supports it. A prescribing suggestion without a visible source is a black box. A suggestion citing NICE NG136 paragraph 1.4.7 is verifiable in seconds.
Confidence indication. The system should distinguish between high-confidence outputs (strong evidence, clear guideline recommendation) and lower-confidence outputs (limited evidence, extrapolation from adjacent topics, absence of specific guidance). Clinicians need to know when to trust and when to verify more carefully.
Audit trail. Every AI-generated suggestion that influences a clinical action should be logged — what was suggested, what source was cited, whether the clinician accepted or overrode it. This is essential for clinical governance, medico-legal defensibility, and post-market surveillance.
Independence from vendor incentives. The evidence layer should serve clinical accuracy, not vendor commercial interests. If the EHR vendor also sells pharmaceutical advertising, formulary placement, or clinical pathway products, the evidence layer must be demonstrably independent of those revenue streams.
Clinician override. The clinician must always be able to override, dismiss, or question any AI-generated suggestion without friction. AI-native should not mean AI-mandatory.
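To make these requirements concrete, here is a minimal sketch of how four of the five might be expressed as data; independence from vendor incentives is an organisational property rather than a schema field. Every type and field name below is a hypothetical assumption, not any EHR vendor's actual schema.

```typescript
// Illustrative sketch: four of the five requirements expressed as data.
// All names are hypothetical assumptions, not any vendor's actual schema.

type ConfidenceLevel = "high" | "moderate" | "low";   // confidence indication

interface SourceCitation {                            // source visibility
  guideline: string;  // e.g. "NICE NG136"
  section: string;    // e.g. "1.4.7", verifiable in seconds
  url: string;        // deep link for one-click verification
}

interface ClinicalSuggestion {
  id: string;
  text: string;                // what the clinician sees in the workflow
  sources: SourceCitation[];   // an empty array is a black box
  confidence: ConfidenceLevel;
}

type ClinicianAction = "accepted" | "overridden" | "dismissed";  // clinician override

interface SuggestionAuditEntry {                      // audit trail
  suggestionId: string;
  shownAt: string;             // ISO 8601 timestamp
  citedSources: string[];      // e.g. ["NICE NG136 1.4.7"]
  action: ClinicianAction;
  overrideReason?: string;     // free text when the clinician disagrees
  clinicianId: string;
}
```

Even a schema this simple makes the governance question answerable: for any clinical action, what was suggested, what was cited, how confident the system was, and what the clinician did.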
The Strategic Gap: Patient Context vs Medical Knowledge
EHRs and evidence tools occupy complementary positions in the clinical workflow — and understanding this complementarity is essential for designing safe AI-enabled care.
The EHR knows the patient. Medications, allergies, coding history, documents, recent encounters, investigation results, referral status, vaccination records, and demographic details. This is patient context — essential for personalising clinical decisions.
The evidence tool knows the medicine. Guidelines, contraindications, dose adjustments, risk scores, clinical reasoning frameworks, exam-tested clinical knowledge, and evidence-graded recommendations. This is clinical knowledge — essential for making decisions that align with current best practice.
Neither alone is sufficient. A guideline recommendation without patient context may be clinically inappropriate (the guideline says prescribe X, but the patient is allergic to X). Patient context without guideline knowledge may lead to suboptimal care (the patient has condition Y, but the clinician is not aware of the updated management recommendation).
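The allergy example above is exactly the kind of conflict a safe join must catch. Here is a minimal sketch of a check that screens an evidence-layer recommendation against EHR-held patient context before anything is surfaced to the clinician; all names are illustrative assumptions, not a real EHR or iatroX API.

```typescript
// Hypothetical safe join of EHR patient context and an evidence-layer
// recommendation. All names are illustrative assumptions, not a real API.

interface PatientContext {
  allergies: string[];           // from the EHR, e.g. ["amoxicillin"]
  currentMedications: string[];
}

interface Recommendation {
  drug: string;                  // e.g. "amoxicillin"
  source: string;                // e.g. "NICE NG136 1.4.7" keeps the join citable
}

interface JoinResult {
  safeToSurface: boolean;
  warnings: string[];            // shown to the clinician, who always decides
}

// The guideline may say "prescribe X" while the record says "allergic to X":
// the join must surface the conflict, never silently auto-accept.
function checkAgainstContext(patient: PatientContext, rec: Recommendation): JoinResult {
  const warnings: string[] = [];
  if (patient.allergies.some(a => a.toLowerCase() === rec.drug.toLowerCase())) {
    warnings.push(`Allergy to ${rec.drug} recorded in EHR (recommendation source: ${rec.source})`);
  }
  return { safeToSurface: warnings.length === 0, warnings };
}
```

The point of the sketch is the design choice, not the code: the conflict is surfaced as a warning with its citation intact, and the clinician, not the system, decides what happens next.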
The future of UK primary care AI is the safe joining of these two layers — patient context from the EHR combined with evidence-grounded clinical knowledge from the evidence layer. The key word is "safe": the joining must be transparent, auditable, and under the clinician's control.
Where iatroX Fits
iatroX occupies the evidence layer — fast, citation-first, guideline-grounded clinical answers oriented around UK practice. The platform is not an EHR. It does not hold patient records. It does not manage appointments or process prescriptions. It provides the clinical knowledge that clinicians need to make good decisions — whether that knowledge is accessed inside an EHR, alongside an EHR, or independently of any EHR.
The strategic position is deliberate: a clinician-facing evidence layer that can support clinicians wherever the work happens, without being locked into one vendor's ecosystem or dependent on one EHR's AI integration roadmap.
As EHR vendors add AI capabilities, the independent evidence layer becomes more important, not less. Clinicians need the ability to cross-check, verify, and compare — to ask "what does the guideline actually say?" independently of whatever the EHR's embedded AI suggests. That independent verification capability is what citation-first clinical knowledge tools provide.
AI will not transform UK primary care from a separate tab. But it should not transform it from inside an opaque, unverifiable black box either. The answer should be transparent, cited, and available at the exact point where the clinician must decide — whether that point is inside an EHR, beside an EHR, or on a mobile device between patients.
Try Ask iatroX — citation-first clinical knowledge, independent of any EHR vendor →
