The pain is simple. UK clinicians do not lack clinical guidance — they lack fast retrieval, synthesis, and verification during real clinical work. NICE publishes thousands of guidelines. CKS covers hundreds of primary care conditions. The BNF contains every licensed medicine. SIGN provides Scottish evidence-based recommendations. Trust intranets host local antimicrobial policies, clinical pathways, and formulary restrictions. The information exists. Finding and synthesising it during a 10-minute GP consultation or between patients on a busy ward round is the problem.
The Old Model
A single clinical query — for example, managing a newly diagnosed condition in a patient with renal impairment — might require a clinician to: open the NICE guideline (find the condition, navigate to the management section), check CKS for the primary care summary (may differ slightly from the full guideline), verify the drug dose in the BNF (which may use broader renal impairment categories than the SmPC), check the SmPC on the emc for specific creatinine clearance thresholds (section 4.2), check the Trust antimicrobial formulary if an antibiotic is involved (local formularies may differ from national guidance), and verify whether the patient's other medications interact (BNF interaction checker or SmPC section 4.5).
Six sources. Six tabs. Six different interfaces. The clinician manually reconciles the outputs and makes a decision — all while the patient waits.
The New Model
Ask a natural-language question. Receive a short answer synthesising the relevant guidance, with citations linking to the specific NICE guideline, CKS topic, BNF monograph, or SmPC section. Verify the relevant passage by clicking the citation. Make the clinical decision with the same authoritative sources — faster.
This is not "AI replacing doctors." It is AI replacing the tab-switching, cross-referencing, and manual synthesis that consumes clinical time without adding clinical value. The clinician still makes the decision. The AI compresses the retrieval step.
Why RAG/Source-Grounded AI Is Different from Generic Chatbots
Not all clinical AI is the same. The critical distinction: source-grounded AI (Retrieval-Augmented Generation, or RAG) retrieves information from specified authoritative sources and cites those sources in its response. Generic chatbots (standard ChatGPT, Gemini, Claude) generate responses from training data without guaranteed source specificity or citation.
For clinical use, this distinction is safety-critical. A source-grounded tool answering "what is the NICE-recommended first-line treatment for type 2 diabetes?" retrieves the answer from NICE NG28 and cites it. A generic chatbot may generate a plausible answer drawn from training data that mixes US (ADA), European (EASD), and UK (NICE) guidelines, without specifying which guideline framework it is following and without guaranteeing it reflects the current version.
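The mechanical difference can be sketched in a few lines. This is a toy illustration, not any vendor's implementation: the corpus entries paraphrase guideline content for demonstration, and the scoring is naive keyword overlap rather than the embedding-based retrieval a production RAG system would use. The point is the shape of the pipeline: every answer is assembled from a retrieved passage and carries its citation, whereas a generic chatbot answers from parametric memory with no retrieval step at all.

```python
# Minimal sketch of source-grounded retrieval (RAG) with citations.
# Corpus text is paraphrased for illustration; scoring is naive
# keyword overlap, standing in for embedding-based retrieval.

CORPUS = [
    {
        "source": "NICE NG28",
        "section": "1.7 Drug treatment",
        "text": "offer standard-release metformin as first-line drug "
                "treatment for adults with type 2 diabetes",
    },
    {
        "source": "BNF: Metformin hydrochloride",
        "section": "Renal impairment",
        "text": "avoid metformin if eGFR is below 30 mL/minute/1.73 m2",
    },
]

def retrieve(query: str, corpus=CORPUS):
    """Return the best-matching passage, or None if nothing matches."""
    terms = set(query.lower().split())
    scored = [
        (len(terms & set(doc["text"].lower().split())), doc)
        for doc in corpus
    ]
    score, best = max(scored, key=lambda pair: pair[0])
    return best if score > 0 else None

def answer(query: str) -> str:
    """Compose a reply that always carries its citation.

    A generic chatbot skips retrieval entirely and generates from
    training data, so no citation of this kind can be guaranteed.
    """
    doc = retrieve(query)
    if doc is None:
        return "No matching passage found in the configured sources."
    return f'{doc["text"].capitalize()} [{doc["source"]}, {doc["section"]}]'

print(answer("first-line drug treatment for type 2 diabetes"))
# → Offer standard-release metformin as first-line drug treatment
#   for adults with type 2 diabetes [NICE NG28, 1.7 Drug treatment]
```

Note the failure mode: when no source passage matches, the sketch says so explicitly rather than generating a plausible-sounding answer. That refusal behaviour is a large part of why source-grounding matters clinically.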
NICE itself recognises this evolution. NICE has stated that it is developing guidance for AI methods, evaluating AI technologies for NHS use, and exploring AI tools internally. The NICE Evidence Standards Framework exists specifically to help evaluators and companies judge whether digital health technologies are likely to benefit users and the health system — providing a structured pathway for tools in this emerging category.
A 2026 UCL study published on arXiv demonstrated a RAG system for querying NICE clinical guidelines, reporting 98.7% accuracy with GPT-4.1 and a 67% reduction in unsafe responses compared with smaller models, evidence that source-grounded approaches can meaningfully improve clinical AI safety.
NICE, CKS, BNF, and SIGN: The UK Evidence Backbone
Understanding what each source provides is essential for evaluating whether a clinical AI tool has adequate coverage.
NICE Guidelines. Comprehensive national recommendations covering conditions, procedures, and technologies. Evidence-graded. Cost-effectiveness-informed (QALY thresholds). Updated on a rolling basis. The authoritative reference for NHS clinical governance.
NICE CKS (Clinical Knowledge Summaries). Primary-care-focused management summaries. Condition-led — each topic provides a stepwise management pathway from investigation through treatment to referral criteria. More practical and concise than full NICE guidelines. The tool most GPs reach for first.
BNF (British National Formulary). Independently curated prescribing reference. Drug-led — one monograph per active substance covering doses, interactions, contraindications, and monitoring. NICE-accredited editorial process. The prescribing reference standard.
SIGN (Scottish Intercollegiate Guidelines Network). Scottish evidence-based clinical guidelines. May cover conditions or clinical areas where NICE guidance is absent or where Scottish practice differs. Valuable complementary source, particularly for clinicians in Scotland.
SmPC (Summary of Product Characteristics). Manufacturer's MHRA-approved product information, hosted on the emc. Full adverse event listings, interaction data, dose adjustments by renal/hepatic function, excipient information. The regulatory reference for individual medicinal products.
Any clinical AI tool positioning itself for UK clinicians needs to cover at least NICE, CKS, and BNF. Coverage of SIGN and SmPC data adds meaningful value for specific query types.
The Emerging UK Tools
iatroX. Free. MHRA-registered, UKCA-marked. Retrieves and synthesises NICE guidelines, CKS summaries, peer-reviewed literature, and SmPC data. But iatroX is not just a search box: it combines clinical AI retrieval with adaptive exam Q-banks, clinical calculators, and CPD documentation. This hybrid model means the clinician interacts with the platform daily across multiple workflows (exam preparation, clinical queries, risk scoring, CPD logging) rather than visiting a search tool occasionally.
Praxis Medicine. New entrant founded by Voi co-founder Douglas Stark, backed by Balderton and Creandum, with 70 million SEK raised (Breakit, April 2026). Explicitly lists NICE Guidelines, NICE CKS, NHS Digital, and Europe PMC as sources, and references NHS Website Content API access. Early stage, but credibly positioned.
Medwise AI. UK enterprise deployment. NHS Trust integrations with local policy and formulary content. HRA-listed pilot study comparing AI guideline search against manual intranet search. Available to institutions, not individual clinicians.
Umbil. Sources strictly from NICE, CKS, SIGN, and BNF. Also generates referral letters, discharge summaries, and SBAR handovers from clinical notes. Clinical workflow tools alongside guideline retrieval.
How to Judge Them
Five evaluation criteria for any clinical AI guideline retrieval tool.
Citations. Does the tool show its sources? Can you click through to the primary NICE guideline, CKS topic, or BNF monograph? If not, it is a black box.
Source coverage. Does it cover NICE, CKS, BNF, SIGN? Does it include SmPC data? Does it include local Trust pathways (Medwise) or just national guidance?
Update frequency. How quickly does the tool ingest guideline updates after NICE publishes a revision? A tool citing an outdated guideline is worse than no tool.
Governance. Is it MHRA-registered? Does it have a clinical safety case (DCB 0129)? What data governance applies to clinical queries?
Usability. Can you use it during a consultation without disrupting the clinical workflow? Is it mobile-optimised for ward use? Does it integrate with anything else you use daily?
Where iatroX Fits
iatroX is built for this retrieval layer — but with clinical education, Q-banks, and calculators attached. Not just a search box, but a daily clinical platform.
Try Ask iatroX — cited NICE/CKS/BNF answers, free, MHRA-registered →
