Autonomous AI Triage in the NHS: Is It Safe?


Rapid Health's Smart Triage autonomously assesses patients, determines urgency, identifies care pathways, and books appointments — with 91% of appointments allocated without staff intervention. The published evidence shows a 73% reduction in waiting times and near-elimination of the 8am rush.

The access benefits are clear and well-documented. The safety question is equally clear and deserves honest examination: when an AI makes routing decisions for thousands of patients without human review, what happens when it gets one wrong?

What the Evidence Shows About Safety

The independent evaluation at The Groves Medical Centre (funded by Health Innovation Kent Surrey Sussex, conducted by Unity Insights) assessed acceptability, implementation, effectiveness, and impact on health inequalities over a five-month period. The evaluation found no safety concerns. The system correctly identified and escalated genuinely urgent presentations while appropriately routing non-urgent requests to planned appointments.

Two architectural features underpin this safety profile. First, the clinical questioning is structured and rule-based, not free-text generative AI. Patients answer specific questions about their symptoms, and the routing logic follows clinically defined decision pathways. This makes the system deterministic and auditable — the same combination of answers always produces the same routing decision, unlike LLM-based systems where outputs can vary.

Second, the system is designed with explicit escalation pathways for emergency presentations. Red flag symptoms and safety-critical combinations trigger immediate escalation to same-day assessment, urgent care, or emergency services (including advising 999).
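To make the distinction concrete, a deterministic rule-based router can be sketched in a few lines. This is an illustrative sketch only: the question set, red-flag list, and pathway names below are invented for the example and do not reflect Rapid Health's actual clinical rules.

```python
# Minimal sketch of deterministic, rule-based triage routing.
# All symptom names, red flags, and pathways are hypothetical.

RED_FLAGS = {"chest_pain", "severe_breathlessness", "heavy_bleeding"}

def route(answers: dict) -> str:
    """Map a fixed set of symptom answers to a routing decision.

    The same answers always yield the same decision, which is what
    makes a rule-based system auditable, unlike a generative model
    whose outputs can vary between runs.
    """
    reported = {symptom for symptom, present in answers.items() if present}
    if reported & RED_FLAGS:
        # Red flag symptoms trigger immediate escalation.
        return "escalate_same_day"
    if answers.get("symptoms_over_2_weeks"):
        return "routine_gp_appointment"
    return "self_care_advice"
```

Because the logic is a fixed lookup over explicit answers, every routing decision can be replayed and audited: `route({"chest_pain": True})` escalates every time, with no variation between runs.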

The Health Research Authority has registered a formal controlled study evaluating AI triage impact across 40 GP practices (20 using AI triage, 20 controls), measuring delays in urgent care delivery, staff workload, and equity across patient demographics. This ongoing research will provide the large-scale, controlled evidence the field needs.

Where the Risks Live

Under-triage is the most serious risk: a patient whose symptoms are assessed as non-urgent when they are actually clinically urgent. This could delay time-critical treatment for conditions like acute coronary syndrome, sepsis, or ectopic pregnancy. Structured questioning covering red flag symptoms mitigates this, but no triage system — human or AI — achieves zero false negatives.

The critical comparison is not "AI versus perfect triage" but "AI versus the system it replaces." In most GP practices, the previous system is a receptionist answering a phone at 8am with a queue of 40+ callers, making rapid urgency decisions with limited clinical training, no structured questioning framework, no defined escalation pathways, and no audit trail. The receptionist might ask "is it urgent?" and the patient might say "not really" while describing chest pain. By comparison, Smart Triage asks specific symptom questions that are designed to identify clinical urgency regardless of how the patient frames their request. The structured, consistent, auditable AI approach may represent a genuine safety improvement over the status quo.

Over-triage is less dangerous but operationally significant: patients assessed as urgent when they could safely wait, consuming same-day capacity and displacing genuinely urgent patients. The evaluation data is encouraging — urgent requests fell from 62% to 19%, suggesting Smart Triage is substantially more specific (less over-triage) than the previous system, where there was a strong incentive to classify requests as urgent to avoid complaints about delayed access.

Atypical presentations pose a challenge for any structured triage system. Elderly patients with minimal symptoms masking serious pathology, patients with communication difficulties, and patients who minimise their symptoms may not trigger the appropriate urgency pathway through structured questions alone. The system needs robust safety-netting for these cases, and practice teams must remain alert to the possibility of under-assessed cases presenting through other channels or re-presenting with worsening symptoms.

Digital exclusion is the main equity risk. If triage quality differs by access channel (better online than by phone, for example), equity is compromised. Smart Triage's multi-channel approach (online, phone-assisted, in-person tablets) mitigates this, but the consistency and quality of triage should be audited across all channels, with particular attention to outcomes for patients using the non-digital routes.

What Robust Clinical Governance Looks Like

Any practice deploying autonomous AI triage should implement the following governance framework — not as a one-off exercise, but as an ongoing operational commitment.

Regular triage audit. Review a random sample of AI-triaged cases monthly, checking whether urgency assessment and care pathway allocation were clinically appropriate. Include both urgent and non-urgent classifications in the sample. Track audit findings over time to identify patterns or deterioration.
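A monthly audit sample of this kind can be drawn reproducibly in a few lines. The sketch below is hypothetical (the field names and stratification are invented, not a prescribed audit method); it simply shows how to guarantee that both urgent and non-urgent classifications appear in every month's review.

```python
import random

# Hypothetical sketch: draw a monthly audit sample of AI-triaged
# cases, stratified by urgency classification. Field names invented.

def draw_audit_sample(cases: list[dict], per_stratum: int, seed: int = 0) -> list[dict]:
    """Randomly sample cases so the review covers both urgent and
    non-urgent classifications, as the audit guidance above requires."""
    # A fixed seed makes the draw reproducible for the audit trail.
    rng = random.Random(seed)
    sample = []
    for urgency in ("urgent", "non_urgent"):
        stratum = [c for c in cases if c["urgency"] == urgency]
        sample.extend(rng.sample(stratum, min(per_stratum, len(stratum))))
    return sample
```

Stratifying rather than sampling the whole month's caseload uniformly matters because non-urgent classifications dominate the volume; a uniform sample could leave urgent routings under-audited.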

Escalation protocols. Clear, documented, tested processes for cases where the AI identifies potential emergency presentations. Every staff member must know what happens when the system escalates, and the pathway from AI escalation to clinical assessment must be immediate and unambiguous.

Safety incident reporting. All AI triage-related safety concerns should be reported through LFPSE (the Learn from Patient Safety Events service), as required by the 2025/26 contract. Include near-misses where the AI under-triaged but the issue was caught downstream — these are learning opportunities, not just reportable incidents.

Clinical oversight. A named clinical lead should oversee the system's configuration, review audit findings, approve any changes to triage pathways or urgency thresholds, and be accountable for the clinical safety of the autonomous triage process.

Patient feedback. Actively collect and review patient feedback about the triage experience — particularly from patients who felt their urgency was under-assessed, who found the process difficult to navigate, or who re-presented with worsening symptoms after being triaged as non-urgent.

Safety metrics. Track rates of same-day re-presentations (patients triaged as non-urgent who returned the same day with worsening symptoms), A&E attendances within 24 hours of non-urgent triage, and complaints related to triage accuracy. These are the leading indicators of triage safety.
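Computing these leading indicators from triage records is straightforward. The sketch below is hypothetical (record field names and thresholds are invented for illustration, not a specified reporting format); it shows the two rate calculations described above.

```python
from datetime import datetime, timedelta

# Hypothetical sketch: same-day re-presentation rate and 24-hour A&E
# attendance rate among non-urgent triage decisions. Fields invented.

def safety_metrics(records: list[dict]) -> dict:
    """Compute leading-indicator rates over non-urgent triage records."""
    non_urgent = [r for r in records if r["triage"] == "non_urgent"]
    if not non_urgent:
        return {"same_day_represent_rate": 0.0, "ae_within_24h_rate": 0.0}
    # Patients triaged non-urgent who returned the same calendar day.
    same_day = sum(
        1 for r in non_urgent
        if r.get("represented_at")
        and r["represented_at"].date() == r["triaged_at"].date()
    )
    # Patients triaged non-urgent who attended A&E within 24 hours.
    ae_24h = sum(
        1 for r in non_urgent
        if r.get("ae_attended_at")
        and r["ae_attended_at"] - r["triaged_at"] <= timedelta(hours=24)
    )
    n = len(non_urgent)
    return {
        "same_day_represent_rate": same_day / n,
        "ae_within_24h_rate": ae_24h / n,
    }
```

Tracked month on month, a rising value in either rate is exactly the kind of deterioration signal the regular audit should investigate.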

The Honest Assessment

The evidence supports Smart Triage's safety. The structured, rule-based approach is more consistent and auditable than human reception-led triage. The independent evaluation found no safety concerns. The system correctly identified and escalated urgent presentations while dramatically reducing false urgency.

But "no safety concerns in a single-site five-month evaluation" is not the same as "proven safe at scale across all patient populations and demographics." The ongoing HRA-registered research across 40 practices will provide stronger evidence. In the meantime, practices adopting Smart Triage should treat it as a powerful tool that requires active governance — not a set-and-forget solution.

The access improvements are genuine and substantial. The safety evidence is positive but still maturing. The governance requirements are non-negotiable. Practices that adopt Smart Triage with robust audit, clear escalation, and active clinical oversight will realise the access benefits while managing the safety risks appropriately.

And regardless of how patients reach the consultation, the clinical quality of that consultation depends on the clinician's knowledge and tools. iatroX provides the guideline-grounded reference that supports safe clinical decisions — the layer that matters most once the patient is in the room. Free, instant, and always available.
