Why Local NHS Guidelines Are Still Hard to Find — and What AI Can Do About It


The problem is not that local guidance does not exist. Most NHS organisations have extensive libraries of clinical guidelines, protocols, pathways, formularies, and standard operating procedures — often hundreds of documents covering dozens of specialties. The problem is discoverability, version control, context, and trust. A guideline that a clinician cannot find in 30 seconds during a consultation is functionally the same as a guideline that does not exist.

Where Local Guidance Hides

The reality of local clinical knowledge in the NHS is fragmented across multiple, disconnected repositories.

Trust intranet — the official home, but frequently slow to load on mobile devices, poorly searchable (keyword-only, no semantic understanding), and organised by department rather than by clinical question. A clinician searching for "antibiotic UTI elderly" may find nothing because the guideline is titled "Antimicrobial Formulary 2024" and filed under "Pharmacy Department."

ICB website — commissioning-level guidance on referral pathways, service specifications, and prescribing policies. Often comprehensive but not designed for point-of-care use during a 10-minute consultation.

PDF folders on shared drives — accumulated over years, with multiple versions of the same document. No clear indication of which is current. Named inconsistently. Filed arbitrarily. The clinician who saved the original has since rotated to another department or another Trust entirely.

Email attachments — guidelines circulated as attachments, saved in personal inboxes, forwarded to new starters. When the guideline changes, the email attachment does not update.

WhatsApp groups — registrar advice, consultant preferences, departmental shortcuts, passed as screenshots or verbal summaries. Clinically useful in the moment. Untraceable, unverifiable, and medically indefensible in a formal review.

Local prescribing formularies — maintained by pharmacy, accessible via separate systems, not always integrated with the clinical record or searchable during a consultation.

Service referral criteria — each service has different criteria, different referral routes, and different thresholds that change without systematic notification to referring clinicians. The criteria for mental health crisis team referral in one ICB may differ substantially from the neighbouring ICB — and neither is easily findable during a weekend on-call shift.

Why Local Guidelines Matter

Local guidelines answer questions that national guidelines cannot.

Referral pathways — NICE says "refer urgently"; the local pathway specifies to which service, via which route, with which information. A GP in Tower Hamlets and a GP in rural Devon follow the same NICE guideline but use entirely different referral mechanisms.

Local antibiotic policies — shaped by local resistance patterns, stewardship agreements, and pharmacy supply, these frequently diverge from national NICE recommendations.

Imaging access — what a GP can request directly versus what requires secondary care referral varies by Trust and ICB.

Community services — crisis teams, rehabilitation services, and social care operate with locally defined criteria that national guidelines do not specify.

Shared care — drug monitoring shared between primary and secondary care follows locally negotiated protocols.

Why Local Guidance Is Difficult to Trust

No visible review date — the clinician cannot assess whether the document reflects current practice.

Multiple versions — three PDFs with similar titles and different dates.

Conflict with national guidance — local pathways may reflect historical practice not yet updated.

Poor mobile usability — a 47-page PDF is not usable on a phone during a ward round.

No clear owner — unowned documents are unmaintained documents.

Unclear scope — does this apply to primary care, ED, community, or inpatients? A guideline written for the emergency department applied in primary care may recommend investigations or referrals that are not available in that setting.

What AI Can Improve

AI can address many of these problems — not by replacing local guidelines, but by making them findable, current, and usable at the point of care.

Semantic search across local documents. A clinician asking "what is the local antibiotic for community-acquired UTI in a patient with penicillin allergy?" should find the right document regardless of how it was titled or filed. Natural-language search that understands clinical intent — not just keyword matching — transforms discoverability.
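A minimal sketch of the idea, using a hand-written synonym table as a stand-in for a real embedding model (all document names, snippets, and synonyms here are invented for illustration):

```python
import math

# Toy synonym table standing in for a real embedding model; a production
# system would use learned vector representations, not a hand-curated map.
SYNONYMS = {
    "uti": {"urinary", "antimicrobial"},
    "antibiotic": {"antimicrobial", "formulary"},
    "elderly": {"older"},
}

def expand(text):
    """Tokenise and add synonyms, approximating clinical intent."""
    tokens = set(text.lower().split())
    for t in list(tokens):
        tokens |= SYNONYMS.get(t, set())
    return tokens

def score(query, doc_text):
    """Overlap between expanded query and expanded document, normalised."""
    q, d = expand(query), expand(doc_text)
    return len(q & d) / math.sqrt(len(q) * len(d))

# Illustrative local library: title plus a snippet of indexed content.
DOCS = {
    "Antimicrobial Formulary 2024": "antimicrobial formulary urinary tract infection dosing",
    "Chest Pain Pathway": "acute chest pain assessment troponin referral",
}

def search(query):
    return max(DOCS, key=lambda title: score(query, title + " " + DOCS[title]))

print(search("antibiotic UTI elderly"))  # Antimicrobial Formulary 2024
```

The point of the sketch: the query and the winning title share no literal keywords, which is exactly the case where keyword-only intranet search returns nothing.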

Version detection and metadata. The system can flag documents that have not been reviewed within their stated review cycle, highlight when a newer version exists, and distinguish between draft, approved, and superseded versions. This is version control that does not depend on individual clinicians checking manually.
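A sketch of what that flagging could look like; the metadata fields and status values are assumptions for illustration, not a real NHS document schema:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical metadata record; field names are illustrative only.
@dataclass
class GuidelineDoc:
    title: str
    status: str        # "draft", "approved", or "superseded"
    review_due: date

def currency_flags(doc, today):
    """Warnings a search result should carry alongside the document."""
    flags = []
    if doc.status != "approved":
        flags.append(f"status: {doc.status}")
    if today > doc.review_due:
        flags.append(f"review overdue since {doc.review_due.isoformat()}")
    return flags

doc = GuidelineDoc("Antimicrobial Formulary", "approved", date(2024, 1, 31))
print(currency_flags(doc, today=date(2025, 6, 1)))  # ['review overdue since 2024-01-31']
```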

Summarisation with source links. A clinician needing a specific recommendation from a long protocol should receive a short answer with a link to the full source document — not a 47-page PDF to scroll through on a mobile device during a ward round.

Query analytics. Which guidelines are most searched for? Which are never found despite being frequently needed? Where are the gaps — topics where clinicians search but no local guidance exists? These analytics help organisations understand what their clinicians need and where the knowledge infrastructure is failing.
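Given a query log, the gap analysis is straightforward to sketch (the log entries below are invented for illustration):

```python
from collections import Counter

# Illustrative query log: (query, matched document, or None if nothing was found)
LOG = [
    ("dka pathway", "DKA Protocol v2"),
    ("dka pathway", "DKA Protocol v2"),
    ("crisis team referral", None),
    ("crisis team referral", None),
    ("crisis team referral", None),
]

most_searched = Counter(q for q, _ in LOG)
# Gaps: questions clinicians keep asking that never resolve to a local document.
gaps = Counter(q for q, hit in LOG if hit is None)

print(most_searched.most_common(1))  # [('crisis team referral', 3)]
print(gaps.most_common())            # [('crisis team referral', 3)]
```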

Change alerts. When a guideline is updated, clinicians who previously accessed it should be notified. This closes the version-control gap that makes downloaded PDFs dangerous — the clinician learns that the document they relied on has been superseded.
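One simple way to close that gap is to treat every access as an implicit subscription. A sketch, with invented identifiers:

```python
from collections import defaultdict

# Everyone who has opened a guideline is implicitly subscribed to its updates.
access_log = defaultdict(set)

def record_access(doc_id, clinician):
    access_log[doc_id].add(clinician)

def notify_update(doc_id, new_version):
    """Return the alerts to send when a document is superseded."""
    return [(c, f"{doc_id} has been superseded by {new_version}")
            for c in sorted(access_log[doc_id])]

record_access("abx-formulary", "dr.khan")
record_access("abx-formulary", "dr.osei")
print(notify_update("abx-formulary", "v4"))
```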

Local vs national comparison. When a local guideline differs from the corresponding national NICE recommendation, the system should flag the divergence — not hide it. Clinicians need to know when local practice differs from national evidence and why.
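The flagging itself need not be sophisticated; a sketch of the response shape (field names and recommendation strings are illustrative):

```python
def compare_recommendations(local, national):
    """Surface divergence rather than silently answering with one source."""
    if local == national:
        return {"answer": local, "divergence": False}
    return {
        "answer": local,
        "divergence": True,
        "note": f"Local variation; national (NICE) recommendation is: {national}",
    }

result = compare_recommendations("local first-line option", "national first-line option")
print(result["divergence"])  # True
```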

What AI Must Not Obscure

AI that makes guidelines easier to find must not make their provenance harder to verify.

The source document must remain visible — not hidden behind a confident AI summary.

The date must be shown — not masked by a polished answer that gives no indication of currency.

The author/owner must be identifiable — so the clinician can escalate questions to the responsible person.

The scope must be clear — does this apply to primary care, secondary care, community, emergency, or inpatient settings?

Divergence from national guidance must be flagged — a local answer that contradicts NICE should be visible as a local variation, not presented as the definitive clinical position.

Good clinical AI makes sources more accessible. Dangerous clinical AI makes sources invisible.

Where iatroX Fits

iatroX starts from the clinical question and shows the source behind the answer. National guidance (NICE, CKS, BNF, SmPC), calculators, exam learning, and CPD sit in one place — with local pathway context as the next logical layer. The iatroX principle: an answer is only clinically useful if the clinician can see where it came from.

