Every clinical AI tool you use during patient care makes a claim about clinical knowledge. Whether it is answering a question about drug interactions, suggesting a management pathway, or calculating a clinical score — the tool is providing information that may influence clinical decisions. The question is: who ensures that information is safe, accurate, and accountable?
For most AI tools used in clinical settings, including ChatGPT, Gemini, Copilot, and other general-purpose LLMs, the answer is: nobody. They are not medical devices. They are not subject to clinical safety regulation. They have no obligation to be safe, transparent, or accountable when used for clinical purposes. They are general-purpose language models that happen to be used for clinical work, without the regulatory framework that clinical use demands.
iatroX is different. It is UKCA-marked and MHRA-registered as a Class I medical device.
What UKCA Marking Means
The UKCA (UK Conformity Assessed) mark indicates that a product conforms to the UK Medical Devices Regulations 2002 (UK MDR 2002, as amended). For iatroX, this means compliance with specific requirements.
Clinical safety. The platform must demonstrate clinical safety for its intended use — clinical decision support for healthcare professionals. This includes hazard identification, risk assessment, and risk mitigation across all clinical AI outputs.
Quality management system. A documented QMS governs the development, deployment, and maintenance of the platform — including change control, incident management, and continuous improvement processes.
Technical documentation. The manufacturer must maintain comprehensive technical documentation describing the device's intended purpose, design, development methodology, risk management, clinical evaluation, and post-market surveillance plan.
Risk management. A formal risk management process (aligned to ISO 14971) identifying potential hazards from the device's clinical outputs and implementing controls to mitigate those risks.
Declaration of conformity. A formal declaration by the manufacturer (iatroX / Dr Kolawole Tytler) that the device meets all applicable requirements of the UK MDR 2002.
What MHRA Registration Means
The MHRA (Medicines and Healthcare products Regulatory Agency) is the UK's medical device regulator. MHRA registration means the device is on the regulator's register of medical devices marketed in the UK — with associated obligations.
Post-market surveillance. The manufacturer must actively monitor the device's performance once it is placed on the market, collecting and analysing data on clinical safety, user feedback, and adverse events.
Adverse event reporting. Any serious incident or near-miss involving the device must be reported to the MHRA. This creates an accountability mechanism that unregulated tools do not have.
Regulatory oversight. The MHRA can inspect, audit, and take enforcement action against registered medical devices. This regulatory backstop ensures ongoing compliance — not just compliance at the point of registration.
Why This Matters for Users
Clinical governance. NHS trusts, ICBs, and GP practices implementing clinical AI tools face governance questions: is this tool safe? Is it accountable? Is it transparent? For UKCA-marked devices, the answer is documented and verifiable. For unregulated tools, the answer is: we do not know, and neither does anyone else.
Professional accountability. When a clinician uses a clinical AI tool during patient care and the tool provides incorrect information, the clinician remains professionally accountable. Using a regulated medical device — with transparent reasoning, source citations, and documented clinical safety — is a defensible clinical decision. Using an unregulated general-purpose chatbot for clinical decision support is harder to defend in a complaint, claim, or fitness-to-practise investigation.
Institutional procurement. NHS organisations increasingly require evidence of regulatory compliance for clinical software procurement. UKCA marking and MHRA registration satisfy these requirements. Unregulated tools may face procurement barriers as NHS digital governance frameworks mature.
The "Not a Black Box" Principle
iatroX's clinical AI (Ask iatroX) operates on a principle of transparent reasoning. Clinical outputs include source citations (NICE, CKS, BNF, SIGN references), confidence indicators, and explicit acknowledgement of limitations. The user can verify the AI's reasoning by checking the cited sources — the tool does not ask you to trust it blindly.
This transparency is both a design principle and a regulatory requirement. A Class I medical device providing clinical decision support must be transparent in its reasoning — the user must be able to understand why the device is recommending a particular course of action.
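To make the principle concrete, here is a minimal sketch of what a citation-backed, confidence-scored output could look like as a data structure. It is illustrative only: the field names, types, and three-level confidence scale are assumptions made for this example, not iatroX's actual schema or API.

```python
# Illustrative sketch only: a hypothetical structure for a transparent
# clinical AI answer. It does not represent iatroX's actual output schema.
from dataclasses import dataclass, field
from enum import Enum


class Confidence(Enum):
    HIGH = "high"          # directly supported by the cited guidance
    MODERATE = "moderate"  # inferred from guidance; clinician review advised
    LOW = "low"            # limited or conflicting source material


@dataclass
class Citation:
    source: str     # e.g. "NICE", "CKS", "BNF", "SIGN"
    reference: str  # guideline identifier, e.g. "NG136"
    url: str        # link the clinician can follow to verify the claim


@dataclass
class ClinicalAnswer:
    answer: str
    citations: list[Citation] = field(default_factory=list)
    confidence: Confidence = Confidence.LOW
    limitations: list[str] = field(default_factory=list)

    def is_verifiable(self) -> bool:
        """A transparent answer must let the user trace it back to sources."""
        return bool(self.citations)


# Example: an answer the clinician can check against the cited guideline.
answer = ClinicalAnswer(
    answer="Offer lifestyle advice before starting antihypertensive treatment.",
    citations=[Citation("NICE", "NG136", "https://www.nice.org.uk/guidance/ng136")],
    confidence=Confidence.HIGH,
    limitations=["Does not cover hypertension in pregnancy."],
)
assert answer.is_verifiable()
```

The point of such a structure is that verifiability is a first-class property of the output rather than an afterthought: an answer with no citations fails the check and cannot be presented as guideline-backed.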
General-purpose LLMs (ChatGPT, Gemini, etc.) do not provide this transparency. They generate plausible-sounding text without source citations, without confidence markers, and without any regulatory obligation to be accurate. They are impressive technology. They are not medical devices.
What This Means for Clinicians
If you use AI tools in clinical practice — and most clinicians now do — the regulatory status of those tools matters. An MHRA-registered, UKCA-marked clinical AI tool provides documented clinical safety, transparent reasoning, regulatory accountability, and defensible clinical governance.
An unregulated general-purpose chatbot provides none of these.
The practical test. If a patient complained that your clinical decision was influenced by an AI tool, could you defend that decision to the GMC? If the tool is an MHRA-registered medical device with transparent reasoning and source citations, you can demonstrate that you used a regulated clinical tool and verified its output against cited guidelines. If the tool is ChatGPT — an unregulated general-purpose language model with no source citations, no clinical safety documentation, and no regulatory accountability — the defence is significantly weaker.
This is not hypothetical. As clinical AI adoption increases, GMC fitness-to-practise cases involving AI-influenced clinical decisions are inevitable. The regulatory status of the tool you used will be a relevant consideration.
What This Means for NHS Organisations
NHS trusts, ICBs, and primary care networks implementing clinical AI tools face specific governance obligations.
Clinical safety standards. DCB0129 (clinical risk management applied to the manufacture of health IT systems) and DCB0160 (clinical risk management applied to the deployment and use of health IT systems) apply to clinical AI tools used in NHS settings. UKCA-marked medical devices have clinical safety documentation aligned to these standards. Unregulated tools do not.
Information governance. Clinical AI tools process clinical data — patient symptoms, diagnoses, management queries. For regulated medical devices, data handling is governed by the quality management system documented in the technical file. For unregulated tools (particularly those operated by US companies under US data privacy law), data governance is less clear.
Procurement. NHS digital governance frameworks increasingly require evidence of regulatory compliance for clinical software procurement. UKCA marking and MHRA registration provide this evidence. Organisations adopting unregulated clinical AI tools may face governance challenges as regulatory expectations mature — particularly if a clinical incident involves an unregulated tool.
The trajectory. The MHRA's regulatory approach to clinical AI is evolving — the Software and AI as a Medical Device Change Programme is actively developing guidance on AI-specific regulatory requirements. Early registration (as iatroX has achieved) demonstrates commitment to regulatory compliance and positions the platform favourably as requirements evolve. Platforms that have avoided regulation may face retrospective compliance challenges as the regulatory framework tightens.
The Bottom Line
Clinical AI is here. The question is not whether clinicians will use it — they already do. The question is whether the tools they use are regulated, transparent, and accountable. iatroX is. Most alternatives are not. In a regulated profession, using regulated tools is not optional — it is the standard of care.
Learn more about iatroX's clinical AI standards at iatrox.com.
