Search is not the same as understanding. Finding the right PDF is not the same as knowing the answer. A document retrieval tool may locate the correct NICE guideline — but the clinician still has to navigate to the relevant section, interpret the recommendation in context, apply it to the specific patient, and document the rationale. The document is the source. The answer is what the clinician extracts from it. And the extraction step is where clinical time is consumed and cognitive load is highest.
The Clinical Question Is Usually Narrower Than the Document
A NICE guideline may span 50 pages covering epidemiology, pathophysiology, diagnosis, management across multiple lines of therapy, referral criteria, monitoring, special populations, and research recommendations. The clinician's question at 3pm is specific and time-constrained.
"When should I refer this child with recurrent tonsillitis?" The answer is in one paragraph of NICE NG67. Finding that paragraph requires navigating the NICE website, finding the guideline, locating the referral section, and reading the recommendation. Time cost: 3-5 minutes.
"Can I prescribe this in pregnancy?" Requires checking the SmPC section 4.6, the BNF pregnancy entry, and potentially UKTIS. Three sources, three interfaces, manual reconciliation. Time cost: 2-4 minutes.
"What red flags should I safety-net for?" Requires recalling or looking up presentation-specific red flags — which may be scattered across different sections of a CKS topic. Time cost: 2-3 minutes.
"Which risk score applies here?" Requires knowing which validated calculator is recommended for the clinical scenario and where to find it. Time cost: 1-3 minutes.
Each question has a specific answer buried in a comprehensive document, alongside thousands of words that are irrelevant to that question at that moment. A good clinical search tool compresses the navigation: it takes the specific question, identifies the relevant passage, and presents a short cited answer that the clinician can verify in seconds.
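As a rough illustration of what "compressing the navigation" means, here is a minimal sketch of passage-level retrieval: split a long guideline into sections and score each against the question. The section names, the invented section text, and the crude word-overlap scoring are all assumptions for illustration, not quoted guideline content or a production ranking method.

```python
import re

# Minimal sketch: score each guideline section against the clinician's
# question and surface the best-matching passage. Section text below is
# invented for illustration, not quoted from any guideline.

def score(question: str, passage: str) -> float:
    """Crude relevance score: fraction of question words found in the passage."""
    q_words = set(re.findall(r"[a-z]+", question.lower()))
    p_words = set(re.findall(r"[a-z]+", passage.lower()))
    return len(q_words & p_words) / max(len(q_words), 1)

guideline_sections = {
    "1.1 Diagnosis": "Diagnose acute tonsillitis when sore throat and tonsillar inflammation are present ...",
    "1.4 Referral": "Refer children with recurrent tonsillitis for tonsillectomy assessment when episodes are frequent and disabling ...",
    "1.6 Monitoring": "Review symptom frequency and impact on school attendance at follow-up ...",
}

question = "When should I refer this child with recurrent tonsillitis?"

best_section, best_text = max(
    guideline_sections.items(), key=lambda item: score(question, item[1])
)
print(f"Most relevant passage: {best_section} -> {best_text}")
```

A real system would use far better retrieval than word overlap, but the shape is the same: the clinician supplies a narrow question, and the tool returns the one passage that answers it, with its section reference attached.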
Why Document Retrieval Is Still Valuable
Document retrieval — finding the right guideline, protocol, or formulary entry — remains the foundation. Without reliable retrieval, no synthesis is possible. A clinical search tool that generates an answer without being able to show the source document is a black box. A tool that finds the right document and extracts the relevant passage is transparent and verifiable.
Finding the source is the prerequisite. Extracting the answer is the cognitive work. Both are needed. The value of clinical AI search is not replacing the source — it is making the source's content accessible faster.
Why Clinicians Need Synthesis
A useful clinical answer should go beyond pointing to a document. It should identify the relevant section ("the recommendation is in paragraph 1.4.7 of NICE NG28"), summarise the recommendation concisely and accurately, and preserve caveats ("NICE recommends X as first-line, but notes Y may be preferred in patients with Z"). It should show the source for verification, expose uncertainty where it exists ("evidence for this recommendation is graded moderate quality"), and suggest next steps where relevant ("consider using the QRISK3 calculator to quantify cardiovascular risk before initiating statin therapy"). And it should avoid pretending to replace judgment: the answer is a clinical starting point, not a final decision.
The difference between a useful clinical AI answer and a dangerous one is not accuracy alone — it is whether the clinician can verify the answer against the original source. Citation quality determines verification speed: a link to the specific NICE guideline paragraph allows verification in 10 seconds; a generic reference to "NICE guidelines" requires 3-5 minutes of manual navigation to find and confirm the relevant recommendation.
Consider two scenarios. A clinician asks "Should I start aspirin in this patient with type 2 diabetes?" A document retrieval tool returns a link to NICE NG28 — a 50-page guideline. The clinician must navigate to find the relevant cardiovascular risk assessment section, then cross-reference with the BNF aspirin monograph. A clinical synthesis tool returns: "NICE NG28 recommends assessing cardiovascular risk using QRISK3. Aspirin is not routinely recommended for primary prevention in type 2 diabetes unless cardiovascular risk is elevated and the risk-benefit discussion favours treatment. See paragraph 1.9.2." The clinician verifies by clicking the citation. Same authoritative source. Dramatically different retrieval experience.
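One way to picture that synthesised answer is as a structured object rather than free text. The sketch below is illustrative only: the field names are assumptions, and the values paraphrase the aspirin scenario above rather than quoting iatroX output or NICE verbatim.

```python
from dataclasses import dataclass, field

# Illustrative sketch of a structured, verifiable clinical answer.
# Field names are assumptions chosen to mirror the components described
# above; values paraphrase the aspirin scenario in the text.

@dataclass
class ClinicalAnswer:
    question: str        # the clinician's question, not the whole document
    summary: str         # concise, accurate restatement of the recommendation
    source_section: str  # specific section, e.g. "NICE NG28, paragraph 1.9.2"
    caveats: list[str] = field(default_factory=list)     # preserved qualifiers
    evidence_quality: str | None = None                   # exposed uncertainty
    next_steps: list[str] = field(default_factory=list)   # e.g. calculators

answer = ClinicalAnswer(
    question="Should I start aspirin in this patient with type 2 diabetes?",
    summary=("Aspirin is not routinely recommended for primary prevention in "
             "type 2 diabetes unless cardiovascular risk is elevated and the "
             "risk-benefit discussion favours treatment."),
    source_section="NICE NG28, paragraph 1.9.2",
    caveats=["Assess cardiovascular risk using QRISK3 first."],
    evidence_quality="moderate",
    next_steps=["Verify against the cited paragraph before acting."],
)
print(answer.summary)
print("Verify at:", answer.source_section)
```

Keeping the section, caveats, and uncertainty as separate fields is what makes the answer checkable: each piece maps back to something the clinician can confirm in the source.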
The Risk of Answer Engines Without Provenance
A fluent, confident AI answer without a visible source is clinically dangerous — not because the answer is necessarily wrong, but because it cannot be verified. The clinician has no way to distinguish a correct answer from a hallucination, an outdated recommendation from a current one, or a UK-applicable answer from a US-applicable one.
Provenance — showing where the answer came from, when the source was last updated, and what type of source it is — transforms "trust me" into "check this." In clinical practice, "check this" is always safer than "trust me."
The provenance requirement extends beyond simply showing a citation. Good provenance includes: the specific source document (not just the publisher), the relevant section or paragraph, the publication or last-review date, the source type (national guideline, local pathway, SmPC, peer-reviewed evidence), and any caveats about evidence quality. A citation that says "source: NICE" is less useful than one that says "source: NICE NG28, section 1.4.7, last reviewed December 2024, evidence quality moderate." The latter allows verification in seconds and informed assessment of the recommendation's strength.
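To make that contrast concrete, here is a small sketch of the provenance fields listed above, with a formatter that produces the fuller citation string. The field names are assumptions for this sketch, not a published schema.

```python
from dataclasses import dataclass

# Illustrative provenance record mirroring the fields listed above.

@dataclass
class Provenance:
    publisher: str         # e.g. "NICE"
    document: str          # the specific source document, e.g. "NG28"
    section: str           # relevant section or paragraph
    last_reviewed: str     # publication or last-review date
    source_type: str       # national guideline, local pathway, SmPC, ...
    evidence_quality: str  # caveat about evidence strength

    def cite(self) -> str:
        """Full citation that can be verified in seconds, unlike 'source: NICE'."""
        return (f"{self.publisher} {self.document}, section {self.section}, "
                f"last reviewed {self.last_reviewed}, "
                f"evidence quality {self.evidence_quality}")

print(Provenance("NICE", "NG28", "1.4.7", "December 2024",
                 "national guideline", "moderate").cite())
# NICE NG28, section 1.4.7, last reviewed December 2024, evidence quality moderate
```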
The iatroX Answer Model
iatroX is built around the clinical question, not just the document. Start with the question. Retrieve relevant sources. Produce a structured, cited answer. Show the source for verification. Link to calculators or CPD where useful. Let the clinician continue reasoning — because the answer is a starting point, not an endpoint.
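Read as a flow, this maps onto a simple retrieve-then-synthesise pipeline. The sketch below is a schematic under that assumption; the helper functions are hypothetical placeholders, and it is not a description of iatroX's actual implementation.

```python
# Schematic question-to-answer flow: retrieve sources, synthesise a cited
# answer, keep the source visible for verification. The helpers below are
# hypothetical placeholders, not iatroX internals.

def retrieve_sources(question: str) -> list[dict]:
    """Placeholder retrieval: return candidate passages with their provenance."""
    return [{
        "citation": "NICE NG28, paragraph 1.9.2",
        "text": ("Aspirin is not routinely recommended for primary prevention "
                 "in type 2 diabetes unless cardiovascular risk is elevated."),
    }]

def synthesise(question: str, passages: list[dict]) -> dict:
    """Placeholder synthesis: summarise the top passage and keep its citation."""
    top = passages[0]
    return {
        "answer": top["text"],
        "citation": top["citation"],  # shown, never hidden
        "next_steps": ["Quantify cardiovascular risk with QRISK3."],
    }

question = "Should I start aspirin in this patient with type 2 diabetes?"
result = synthesise(question, retrieve_sources(question))
print(result["answer"])
print("Source:", result["citation"])  # the clinician verifies, then keeps reasoning
```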
Ask iatroX for a clinical answer, not just a document link →
