The AMA reports that two in three US physicians now use health AI, up 78% from 2023. But interest is tempered by fear. What if the AI gives the wrong recommendation? Who is liable? What about HIPAA? What does the FDA require?
These questions are legitimate. The answers are more straightforward than most physicians expect.
HIPAA Compliance
Any AI tool that processes protected health information (PHI) — patient names, dates of birth, medical record numbers, clinical notes containing identifiable information — must be HIPAA-compliant. This requires a signed Business Associate Agreement (BAA) between the physician/practice and the AI vendor, data encryption in transit and at rest, access controls and audit trails, breach notification procedures, and documented data handling policies.
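Viewed as a due-diligence exercise, those requirements reduce to a short checklist a practice can run before any PHI reaches a vendor. Here is a minimal sketch of that check in Python; the field names (baa_signed, audit_trails, and so on) are illustrative inventions, not terms from HIPAA or any vendor's paperwork:

```python
from dataclasses import dataclass, fields

@dataclass
class VendorHipaaChecklist:
    """Hypothetical due-diligence record for an AI vendor that will handle PHI."""
    baa_signed: bool            # executed Business Associate Agreement
    encrypts_in_transit: bool   # e.g. TLS for PHI in motion
    encrypts_at_rest: bool      # e.g. AES-256 for stored PHI
    access_controls: bool       # role-based access with unique user IDs
    audit_trails: bool          # logs of who accessed what, and when
    breach_notification: bool   # documented notification procedure
    data_handling_policy: bool  # written retention and disposal policies

def gaps(checklist: VendorHipaaChecklist) -> list[str]:
    """Return the names of any unmet requirements."""
    return [f.name for f in fields(checklist) if not getattr(checklist, f.name)]

vendor = VendorHipaaChecklist(
    baa_signed=True, encrypts_in_transit=True, encrypts_at_rest=True,
    access_controls=True, audit_trails=False, breach_notification=True,
    data_handling_policy=True,
)
print(gaps(vendor))  # ['audit_trails'] -> hold off on PHI until resolved
```

The shape of the check is the point: every item must be true before PHI flows, and a single gap is a reason to pause, not a rounding error.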
Ambient scribes like Abridge and Nuance DAX record patient conversations, so they process PHI and must be HIPAA-compliant with signed BAAs. OpenEvidence processes clinical notes through its Visits feature and maintains HIPAA compliance for that functionality.
AI tools that do not process PHI are not subject to HIPAA requirements. Clinical reference tools where you type a general clinical question — "what is the first-line treatment for gout?" — without patient identifiers fall outside HIPAA scope. iatroX, for example, is a clinical reference tool where you ask guideline-grounded questions without inputting patient data. This distinction matters for risk assessment and compliance.
Malpractice and Liability
The legal standard is clear and has not changed with the arrival of AI: the physician is responsible for clinical decisions regardless of which tools were used.
Following an AI recommendation does not shift liability to the vendor if the patient is harmed: the physician made the decision; the AI provided input. Conversely, declining an AI recommendation does not by itself create liability, provided the physician's independent decision was reasonable.
The practical implications for day-to-day practice: treat AI-generated recommendations as input to your clinical judgement, not as instructions. Document your reasoning — especially when the AI's suggestion differs from your decision. Verify recommendations against authoritative sources. Use AI as a second opinion, not a first authority. And maintain the clinical knowledge to evaluate AI outputs critically.
The scenario physicians fear most — "the AI told me to do X and I did it and the patient was harmed" — is legally no different from "my colleague suggested X and I did it and the patient was harmed." Professional responsibility does not delegate.
Decision Support vs Diagnostic Devices
The FDA regulates AI/ML-based software as a medical device (Software as a Medical Device, or SaMD) when it is intended to diagnose, treat, mitigate, or prevent disease. But clinical decision support (CDS) software falls outside the device definition under the 21st Century Cures Act if it meets four criteria: it does not acquire, process, or analyze medical images or signals; it displays or analyzes medical information (such as clinical practice guidelines); it supports or provides recommendations to a health care professional about prevention, diagnosis, or treatment; and it enables that professional to independently review the basis for the recommendations, so that the clinician is not relying primarily on the software to make a clinical decision.
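Because all four criteria must hold at once, the test behaves like a conjunction: failing any single one pulls the software back into the device definition. A minimal illustrative sketch, with criterion names that are shorthand of this article's choosing rather than FDA terminology:

```python
# Illustrative only: a simplification of the Cures Act CDS criteria, not legal advice.
CDS_CRITERIA = {
    "no_images_or_signals": "does not acquire, process, or analyze medical images or signals",
    "displays_medical_info": "displays or analyzes medical information",
    "recommends_to_clinician": "supports or provides recommendations to a health care professional",
    "basis_independently_reviewable": "clinician can independently review the basis for the recommendation",
}

def likely_exempt_cds(tool: dict[str, bool]) -> bool:
    """All four criteria must hold; failing any one suggests device regulation may apply."""
    return all(tool.get(name, False) for name in CDS_CRITERIA)

reference_tool = {name: True for name in CDS_CRITERIA}
autonomous_triage = {**reference_tool, "basis_independently_reviewable": False}

print(likely_exempt_cds(reference_tool))     # True  -> likely non-device CDS
print(likely_exempt_cds(autonomous_triage))  # False -> may need FDA review
```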
Most clinical reference tools — including iatroX, Medscape AI, and OpenEvidence's evidence search — operate as CDS rather than diagnostic devices. They provide information for the clinician to evaluate rather than autonomous diagnostic outputs.
Tools that provide specific diagnostic classifications, treatment protocols that clinicians are expected to follow, or autonomous triage decisions fail these criteria and may require FDA clearance as medical devices. The distinction matters for both regulatory compliance and liability exposure.
Practical Risk Management for Physicians
Choose AI tools that cite their sources — so you can verify recommendations rather than trusting them blindly. iatroX provides citation-first answers linking to the specific guideline section that supports each recommendation.
Prefer tools with regulatory status where available. iatroX is UKCA-marked and MHRA-registered for its guideline retrieval functionality. While this is UK regulatory status rather than US FDA clearance, it demonstrates that the tool has been through a medical device assessment process.
Document that AI-assisted information was verified against primary sources when it informed a clinical decision. This creates a clear chain of reasoning in the medical record.
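What that might look like in practice: a hypothetical structured note fragment, with field names invented for illustration rather than drawn from any EHR standard, reusing the gout example from earlier:

```python
from datetime import date

# Hypothetical chart-note fragment recording AI-assisted reasoning.
ai_assisted_note = {
    "date": date(2025, 6, 12).isoformat(),
    "clinical_question": "first-line urate-lowering therapy for gout",
    "ai_tool": "clinical reference tool (no patient identifiers entered)",
    "ai_suggestion": "allopurinol, titrated to target serum urate",
    "verified_against": "ACR 2020 gout management guideline",
    "clinician_decision": "start allopurinol 100 mg daily",
    "rationale": "consistent with guideline; no contraindications identified",
}
print("\n".join(f"{key}: {value}" for key, value in ai_assisted_note.items()))
```

However it is recorded, the entry should make three things recoverable later: what the AI suggested, what it was checked against, and why the final decision was yours.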
Maintain your own clinical reasoning as the final authority. The AI provides evidence retrieval and synthesis; you provide the judgement, context, and accountability that patients need.
Do not rely on any AI tool — purpose-built or general-purpose — for prescribing decisions without independent pharmacological verification through authoritative drug references.
Conclusion
The liability landscape for AI in US clinical practice is less uncertain than many physicians believe. The fundamental principle — physician responsibility for clinical decisions — has not changed. AI tools are inputs to that decision-making, not substitutes for it. HIPAA applies when PHI is processed. The FDA exempts most clinical decision support. And the best risk management strategy is the simplest: verify, document, and maintain the clinical expertise to evaluate whatever the AI produces.
iatroX supports this approach with citation-first, guideline-grounded answers that make verification fast and documentation easy.
