The model layer is commoditising. GPT-5, Claude, Gemini, and proprietary medical models are converging in capability; each new generation narrows the gap between the leaders and the rest. Within 12-18 months, the underlying AI model powering a clinical search tool will be the least interesting thing about it. The defensible layers, the moats, are elsewhere.
The Five Moats
1. Trusted Source Access and Provenance
OpenEvidence proved this matters. Its platform is trained exclusively on peer-reviewed medical literature — not the broader internet. This source control is the foundation of clinician trust. A tool that can hallucinate answers from any corner of the internet is a different proposition from a tool constrained to PubMed, NICE, CKS, and BNF.
The moat is not just having access to trusted sources — it is building the ingestion pipelines, licensing relationships, API integrations, and update mechanisms that keep the source data current. When NICE publishes a guideline revision, how quickly does the tool ingest it? When a BNF monograph changes, how quickly do the tool's drug answers update? Source freshness is a competitive advantage.
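What "freshness" means in engineering terms is mundane but unforgiving: something has to notice that a source changed and trigger re-ingestion. Here is a deliberately naive sketch of that loop in Python, with invented names (`GuidelineSource`, `check_for_update`); a real pipeline would use publisher feeds and licensed APIs rather than raw HTTP polling:

```python
"""Illustrative freshness check for guideline sources (hypothetical names)."""
from dataclasses import dataclass
from datetime import datetime, timezone

import requests


@dataclass
class GuidelineSource:
    name: str                   # e.g. "NICE NG106" or a BNF monograph
    url: str                    # canonical location of the published source
    etag: str | None = None     # version marker from the last successful poll
    last_checked: datetime | None = None


def check_for_update(source: GuidelineSource) -> bool:
    """Return True when the source has changed since the last ingestion.

    A conditional HTTP request: 304 means nothing changed; anything
    else means the downstream pipeline should re-ingest and re-index.
    """
    headers = {"If-None-Match": source.etag} if source.etag else {}
    resp = requests.head(source.url, headers=headers, timeout=10)
    source.last_checked = datetime.now(timezone.utc)
    if resp.status_code == 304:
        return False
    source.etag = resp.headers.get("ETag")  # remember the new version
    return True
```

The hard part is not this loop. It is the licensing, parsing, and editorial checking behind each source, which is exactly why the pipeline, not the poll, is the moat.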
2. Citations and Verifiability
Clinicians will not — and should not — trust an AI answer without being able to verify it. The citation is the minimum requirement. But citation quality varies enormously. A citation that links to the specific NICE guideline paragraph is more useful than one that cites "NICE NG28" generically. A citation that links to the exact BNF monograph section is more useful than one that says "per BNF."
The tools that win will be those whose citations are specific enough that verification takes seconds — click the link, read the paragraph, confirm the answer. The tools that lose will be those whose citations are vague, fabricated, or require the clinician to navigate multiple pages to find the supporting text.
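The difference is easier to see as a data-modelling decision than as a UX nicety. Here is a hypothetical citation record, in Python for illustration; the schema, the field values, and the NG28 deep link are ours, not any tool's actual format:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Citation:
    """A citation specific enough that verification takes seconds."""
    source: str       # publishing body, e.g. "NICE"
    document_id: str  # e.g. "NG28"
    section: str      # paragraph-level locator, e.g. "1.7 Drug treatment"
    quote: str        # the exact supporting sentence, shown inline
    url: str          # deep link that lands on the supporting text


# "Per NICE NG28"-style citation: the clinician has to hunt for the evidence.
vague = Citation("NICE", "NG28", section="", quote="",
                 url="https://www.nice.org.uk/guidance/ng28")

# Paragraph-level citation: click, read, confirm. (Fragment is illustrative.)
specific = Citation(
    source="NICE",
    document_id="NG28",
    section="1.7 Drug treatment",
    quote="Offer standard-release metformin as first-line drug treatment.",
    url="https://www.nice.org.uk/guidance/ng28#drug-treatment",
)
```

The point of the `quote` field is that verification can begin inline, before the clinician even clicks through.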
3. Local Guideline Ingestion
Clinical evidence is universal. Clinical guidelines are local. The pathophysiology of heart failure is the same in London and Los Angeles. The recommended first-line drug, the treatment threshold, the screening protocol, and the referral pathway are not. NICE NG106 governs UK heart failure management. ACC/AHA governs US management. ESC governs European management. The drugs, doses, and step-up protocols differ.
A clinical AI tool that cannot distinguish between UK and US guidelines is not safe for UK clinical practice — even if the underlying model is clinically competent. Guideline localisation is a moat because it requires country-specific source ingestion, editorial oversight, and ongoing maintenance. A tool built for UK practice (NICE, CKS, BNF) cannot be trivially repurposed for Germany (AWMF, S3, NVL), and vice versa.
This is why the European market will fragment by geography — and why tools like iatroX (UK), ClariMed (Germany), and AMBOSS (DACH + global) coexist rather than one tool dominating.
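Under the hood, that localisation starts at the retrieval layer: a query has to be routed to the right national corpus before any model generates a word. A minimal sketch with an invented registry; the corpus names come from the sections above, and a production system would also weigh speciality, care setting, and guideline precedence:

```python
# Hypothetical registry: country code -> guideline corpora, in precedence order.
GUIDELINE_CORPORA: dict[str, list[str]] = {
    "GB": ["NICE", "CKS", "BNF"],
    "US": ["ACC/AHA"],
    "DE": ["AWMF", "NVL"],
}


def corpora_for(country_code: str) -> list[str]:
    """Return the corpora a query may be answered from.

    Refusing to answer beats silently falling back to another country's
    guidance: a US treatment threshold is not a safe UK answer.
    """
    if country_code not in GUIDELINE_CORPORA:
        raise ValueError(f"no localised guideline corpus for {country_code!r}")
    return GUIDELINE_CORPORA[country_code]
```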
4. Regulatory Artefacts
MHRA registration is a moat. Not because the registration itself is technically difficult — although it requires systematic hazard identification, clinical safety cases (DCB 0129), and post-market surveillance — but because it takes time, expertise, and ongoing commitment that new entrants must replicate from scratch.
OpenEvidence's EU/UK withdrawal demonstrates the moat in action. A $12 billion company with $700 million in funding decided the regulatory uncertainty was too great to justify entry. A tool that already holds MHRA registration (iatroX) is through the barrier that stopped OpenEvidence.
5. Clinician Workflow and Habit Formation
The most underrated moat. A clinical AI search tool that a clinician uses once a week is a utility. A clinical AI platform that a clinician uses daily — for exam preparation, clinical queries, scoring calculators, and CPD documentation — is a habit. Habits are defensible.
OpenEvidence understood this by embedding into Epic at Mount Sinai — placing the tool inside the physician's primary clinical workflow rather than requiring a separate app. Doximity understood this by placing DoxGPT inside the Doximity platform that 85% of US physicians already use daily.
For UK clinicians, iatroX builds habit through breadth — a medical trainee preparing for MRCP on the Q-bank, checking drug doses on Ask iatroX, scoring a NEWS2 on the calculator, and logging CPD is using the same platform four times in one day across four different workflows. That is a habit no single-purpose search tool can match.
What the Market Evidence Shows
OpenEvidence proved demand. Clinicians want free, fast, cited answers. Bottom-up adoption can reach 40%+ of physicians without institutional procurement.
OpenAI proved that general AI platforms are entering clinical workflows. ChatGPT for Clinicians is free for verified US clinicians with plans to expand internationally.
Praxis proved that European VCs see evidence retrieval as venture-scale. 70 million SEK from Balderton and Creandum for a clinical search tool.
Medwise proved that local guideline search is commercially meaningful in the NHS. Enterprise deployments, an HRA-listed pilot study, and 2,000+ NHS organisations referenced.
AMBOSS and UpToDate proved that curated clinical content remains a moat. Expert curation, editorial quality, and institutional trust cannot be replicated by an AI model alone — they require decades of specialist authorship and editorial infrastructure.
The iatroX Argument
The winner will not simply be the model with the cleverest answer. It will be the product that clinicians trust enough to use repeatedly, can verify quickly, and can fit into their actual working day — from exam revision at 7am to a clinical query at 2pm to a calculator on the ward at 5pm to CPD logging at 9pm.
Clinical AI search will not be won by the chatbot that sounds most confident. It will be won by the system that can show its sources, understand local practice, sit inside the clinician's workflow, and earn repeated use without asking the doctor to suspend judgment.
Try iatroX — cited answers, UK guidelines, exams, calculators, CPD. One platform. Free. →
