The clinical AI market looks like one category. It is actually three — and clinicians who do not see the distinction are buying the wrong tool for the wrong job.
The Three Stacks
Stack 1: Evidence retrieval and clinical reference. This is the "what does the guidance say?" stack. Tools here retrieve, synthesise, and present clinical information from authoritative sources. OpenEvidence searches peer-reviewed literature. AMBOSS provides an integrated clinical library with AI search. UpToDate Expert AI answers from its curated content. iatroX provides citation-first answers grounded in NICE, CKS, SIGN, and BNF. Medscape AI draws from its proprietary medical content library.
The job-to-be-done: help me find a trusted clinical answer quickly.
Stack 2: Guideline harmonisation and prescribing copilots. This is the "which recommendation applies to this specific patient?" stack. Tools here reconcile multiple guidelines for a single multimorbid patient. Medicaite's MetaGuideline uses a formal logic engine to harmonise prescribing recommendations across conditions. DrugGPT (Oxford) focuses on medication selection. Emerging tools in this space target polypharmacy management, deprescribing, and interaction detection.
The job-to-be-done: help me make the right prescribing decision when four guidelines apply simultaneously.
Stack 3: Exam preparation and clinical learning. This is the "help me build and retain knowledge" stack. Tools here use cognitive science — spaced repetition, active recall, adaptive learning — to build durable clinical competence. iatroX's Q-Bank provides adaptive, exam-mapped practice with spaced repetition. AMBOSS AI Mode Learning offers personalised study. Geeky Medics provides AI-simulated patients for OSCE practice.
The job-to-be-done: help me know this tomorrow, not just today.
Why the Distinction Matters
A trainee who uses MetaGuideline for exam preparation is using a prescribing copilot as a learning tool — which it is not designed to be. A GP who uses AMBOSS for a UK prescribing question is using a global library for a UK-specific decision — which may give the wrong answer. A student who uses OpenEvidence for UKMLA revision is using an evidence search engine for an exam that tests guideline application — which is a different cognitive task.
The right tool depends on the job. Most clinicians need tools from all three stacks — but they need to know which stack they are reaching for and why.
Where Each Tool Fits
For evidence retrieval and UK guideline reference: iatroX for UK-specific, NICE-grounded answers. OpenEvidence for literature-based evidence synthesis. Medscape AI for global clinical context with specialty depth.
For prescribing harmonisation: MetaGuideline for cardiovascular multimorbidity. The BNF for definitive drug information. SPS (the Specialist Pharmacy Service) for complex medicines queries.
For learning and exam preparation: iatroX Q-Bank for adaptive spaced repetition mapped to UK and US exam curricula. Brainstorm for structured clinical reasoning. AMBOSS for integrated reading and learning.
The strongest clinician AI toolkit includes one tool from each stack. iatroX is distinctive because it spans stacks 1 and 3: it provides guideline-grounded clinical reference and structured learning in a single platform. For many clinicians, that combination, plus the BNF and a prescribing copilot for complex cases, covers the full workflow.
Conclusion
The clinical AI market is not one market. It is three overlapping markets with different jobs-to-be-done. Clinicians who recognise this buy better tools, use them more effectively, and waste less time trying to make one tool do everything.
Evidence retrieval, guideline harmonisation, and exam preparation are different cognitive tasks. They require different architectures. They deserve different tools. Build your stack accordingly.
