The Clinical Safety Case For and Against AI Pre-Visit Triage: What Regulators, Insurers, and Clinicians Need to Consider

AI pre-visit triage sits at the intersection of clinical safety and operational efficiency. The promise is straightforward: an AI system contacts patients before their appointment, assesses urgency, gathers clinical information, and routes them to the appropriate pathway. The risk is equally straightforward: if the AI gets triage wrong, patients may be harmed.

This is not a theoretical concern. Triage is the clinical function where errors have the most immediate consequences — a missed red flag, an inappropriate reassurance, or an incorrect routing decision can delay life-saving treatment by hours or days.

The Case For

Pre-visit AI triage addresses genuine safety problems. Currently, most GP practices operate with no pre-visit assessment at all. Patients are booked into appointment slots based on availability, not urgency. A patient with chest pain and a patient requesting a fit note may be given the same morning slot.

Online consultation tools and total triage models attempt to address this, but they depend on the patient initiating contact and the practice team processing the triage queue — both of which introduce delay and variability.

An AI pre-visit triage system that contacts patients proactively, identifies urgency, and escalates appropriately could catch the patient whose symptoms have worsened since booking but who would not have contacted the practice again.

The Case Against

The risks are specific and well documented in analogous technologies.

False reassurance. An AI system that assesses a patient's symptoms and determines they are non-urgent, when they are actually urgent, creates a false safety net. The patient believes they have been triaged. The practice believes the AI has flagged everything important. The gap between belief and reality is where harm occurs.

Sensitivity-specificity trade-off. If the AI is calibrated to be highly sensitive (catching every possible urgent presentation), it will over-triage — flagging too many patients as urgent and overwhelming the practice's acute capacity. If calibrated for specificity, it will miss genuine emergencies. The calibration is a clinical decision, not a technical one, and it must be made explicitly.
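
To make the trade-off concrete, here is a minimal sketch in Python of how moving the urgency threshold shifts sensitivity against specificity. The risk scores, labels, and thresholds are entirely illustrative assumptions, not figures from any real triage system.

```python
# Minimal sketch: how the urgency threshold trades sensitivity against
# specificity. All data and threshold values are illustrative only.

def sensitivity_specificity(scores, labels, threshold):
    """Classify a case as urgent when its risk score meets the threshold,
    then compare against clinician-adjudicated labels (1 = urgent)."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    tn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 0)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    sensitivity = tp / (tp + fn)   # urgent cases correctly flagged
    specificity = tn / (tn + fp)   # non-urgent cases correctly passed through
    return sensitivity, specificity

# Hypothetical validation set: model risk scores and adjudicated labels.
scores = [0.95, 0.80, 0.72, 0.60, 0.55, 0.40, 0.35, 0.20, 0.15, 0.05]
labels = [1,    1,    0,    1,    0,    0,    1,    0,    0,    0]

for threshold in (0.3, 0.5, 0.7):
    sens, spec = sensitivity_specificity(scores, labels, threshold)
    print(f"threshold={threshold:.1f}  sensitivity={sens:.2f}  specificity={spec:.2f}")
```

On this toy data, the loosest threshold catches every urgent case but flags half of the non-urgent patients as urgent; the strictest improves specificity while genuine emergencies slip through. Choosing the operating point on that curve is exactly the clinical decision the paragraph above describes.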

Communication failures. Patients may not understand the AI's assessment. They may interpret a non-urgent classification as medical clearance. They may assume the AI has "checked" them and feel no need to seek further help if symptoms worsen. Clear safety-netting language is essential but harder to deliver through an AI interaction than a human one.

Vulnerable populations. Patients with cognitive impairment, limited English, hearing difficulties, anxiety, or distrust of technology may interact poorly with an AI triage system. Their responses may be unreliable, their symptoms may be under-reported, and the AI's assessment may be based on incomplete information.

The Regulatory Landscape

UK: MHRA and CQC. Any AI system performing clinical triage in the UK is likely to be classified as a medical device under the UK Medical Devices Regulations (overseen by the MHRA), requiring UKCA marking. The CQC expects AI deployed in GP services to be governed, overseen by competent staff, and subject to incident reporting and learning. DTAC (the Digital Technology Assessment Criteria) applies as the assurance framework for digital health technologies used in NHS care.

US: FDA. The FDA regulates AI/ML-enabled software as a medical device (SaMD) when it is intended to diagnose, treat, mitigate, or prevent disease. Clinical triage functions would likely fall within this scope, though the regulatory pathway depends on the specific claims and risk classification.

In both jurisdictions, the key principle is that the clinical responsibility remains with the human clinician who oversees the system. The AI does not hold clinical accountability. The practice that deploys it does.

What Clinicians Should Demand

Before accepting any AI pre-visit triage system, clinicians should require:

- published evidence of sensitivity and specificity for the conditions it claims to triage;
- a clearly documented escalation pathway for urgent and emergency presentations;
- robust handling of vulnerable patients, including language, cognition, and communication needs;
- audit trails showing every triage decision and its outcome (a sketch of such a record follows this list);
- regular calibration reviews with clinical input;
- clear patient-facing communication about what the AI assessment does and does not mean; and
- a human fallback route accessible at any point.
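
As a concrete illustration of the audit-trail requirement, here is a minimal sketch of what a per-decision record might capture, written to an append-only JSONL log. The field names, schema, and file format are assumptions for illustration, not a mandated standard.

```python
# Minimal sketch of an auditable triage decision record, assuming a simple
# append-only JSONL log. Field names are illustrative, not a mandated schema.

from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class TriageDecisionRecord:
    patient_ref: str           # pseudonymised patient identifier
    reported_symptoms: list    # structured symptoms captured in the contact
    urgency: str               # e.g. "emergency", "urgent", "routine"
    rationale: str             # guideline or rule that drove the decision
    model_version: str         # exact system version, for calibration review
    escalated_to_human: bool   # whether a clinician reviewed the case
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def append_to_audit_log(record: TriageDecisionRecord, path="triage_audit.jsonl"):
    """Append one decision per line so every triage outcome can be replayed."""
    with open(path, "a") as log:
        log.write(json.dumps(asdict(record)) + "\n")

append_to_audit_log(TriageDecisionRecord(
    patient_ref="pt-4821",
    reported_symptoms=["chest pain", "breathlessness"],
    urgency="emergency",
    rationale="cardiac-sounding chest pain meets urgent-assessment criteria",
    model_version="triage-model 2.3.1",
    escalated_to_human=True,
))
```

The essential property is that every decision, its rationale, and the exact system version can be replayed later, which is what makes the calibration reviews demanded above possible in practice.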

The Knowledge Layer in Triage Safety

AI triage systems need clinical guidelines to determine urgency thresholds. When a patient reports symptoms during a pre-visit call, the AI needs to know whether those symptoms meet NICE criteria for urgent assessment, two-week-wait referral, or routine management.
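
As a minimal sketch of that guideline layer, the fragment below maps structured symptom reports to pathways through a rule table. The rules shown are placeholders to illustrate the structure only; real thresholds must be drawn from current NICE/CKS guidance, not from this example.

```python
# Minimal sketch of a guideline-derived urgency lookup. The rules below are
# placeholders showing the structure only, not actual NICE criteria.

URGENCY_RULES = [
    # (predicate over the structured symptom report, resulting pathway)
    (lambda s: "chest pain" in s and "breathlessness" in s, "emergency"),
    (lambda s: "rectal bleeding" in s and s.get("age", 0) >= 50,
     "two-week-wait referral"),
    (lambda s: "fever" in s and s.get("duration_days", 0) > 5,
     "urgent assessment"),
]

def triage(symptoms: dict) -> str:
    """Return the first matching pathway; default to routine with safety-netting."""
    for matches, pathway in URGENCY_RULES:
        if matches(symptoms):
            return pathway
    return "routine (with safety-netting advice)"

print(triage({"chest pain": True, "breathlessness": True}))  # emergency
print(triage({"rectal bleeding": True, "age": 62}))          # two-week-wait referral
```

Keeping the thresholds as explicit, reviewable rules rather than burying them in model weights is one way to make the urgency logic auditable and updatable when guidance changes.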

iatroX, grounded in NICE, CKS, SIGN, and BNF content, provides the guideline layer that any triage AI should reference. Whether the triage system is built by a scribe vendor expanding upstream or a dedicated triage platform, the accuracy of its clinical recommendations depends on the quality of the knowledge base underneath.

Conclusion

AI pre-visit triage is neither inherently safe nor inherently dangerous. It is a clinical tool that requires clinical governance. The safety case depends on the system's accuracy, the robustness of its escalation pathways, the clarity of its patient communication, and the oversight of the clinicians who deploy it.

The technology is ready. The governance frameworks exist. The question is whether practices will implement AI triage with the same rigour they would apply to any clinical intervention — or whether the operational appeal will outrun the safety diligence.
