AI pre-charting is one of the most promising innovations in clinical workflow. The AAFP's study with Navina reported a 61% reduction in visit preparation time, a 25% increase in diagnoses found, and physicians feeling fully prepared for 81% of visits, up from 54% before. The clinical and operational case is strong.
But the technology introduces risks that are genuinely new — not just recycled versions of the risks we already understand from ambient scribing. Pre-charting AI does not merely record what happens in the room. It interprets what has already happened across the patient's entire medical record and presents it to the clinician as a structured narrative. That interpretation step is where the risks live.
The Overcoding Risk
This risk is primarily US-focused but matters wherever financial incentives are tied to diagnostic coding.
Pre-charting tools like Navina surface suspected diagnoses — including Hierarchical Condition Categories (HCCs) — at the point of care. These are conditions that the AI has identified in the patient's historical data but that may not be actively coded in the current problem list. Under value-based care contracts, recapturing these diagnoses increases the practice's Risk Adjustment Factor (RAF) score, which directly affects revenue.
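To make the financial mechanics concrete, here is a deliberately simplified sketch of how recapturing an HCC diagnosis shifts a RAF score and, through it, revenue. All weights, identifiers, and the base rate below are hypothetical placeholders for illustration, not real CMS-HCC model coefficients, and the sketch ignores hierarchies, interaction terms, and normalisation factors that the actual model applies.

```python
# Illustrative only: weights are invented placeholders, NOT real CMS-HCC coefficients.
ILLUSTRATIVE_HCC_WEIGHTS = {
    "HCC111_copd": 0.34,
    "HCC85_congestive_heart_failure": 0.33,
}

def raf_score(demographic_factor: float, coded_hccs: list[str]) -> float:
    """RAF = demographic factor + sum of weights for the HCCs coded this year.
    (Simplified: real models apply hierarchies, interactions, normalisation.)"""
    return demographic_factor + sum(ILLUSTRATIVE_HCC_WEIGHTS[h] for h in coded_hccs)

before = raf_score(0.40, ["HCC111_copd"])
after = raf_score(0.40, ["HCC111_copd", "HCC85_congestive_heart_failure"])

base_monthly_rate = 900.0  # hypothetical plan base payment in dollars
print(f"RAF before recapture: {before:.2f}")
print(f"RAF after recapture:  {after:.2f}")
print(f"Monthly payment delta: ${(after - before) * base_monthly_rate:.2f}")
```

The point of the arithmetic is not the specific numbers but the structure: each accepted diagnosis adds directly to the score that scales payment, which is exactly why unverified acceptance of AI-suggested diagnoses creates both a revenue incentive and an audit exposure.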
The clinical benefit is real: surfacing a genuine but undocumented diagnosis improves care quality. The risk is equally real: if clinicians routinely accept AI-suggested diagnoses without independent verification, the system drifts from "capturing missed diagnoses" to "optimising coding for revenue." That drift may be gradual and unintentional, but it has regulatory consequences.
CMS and OIG scrutiny of AI-driven risk adjustment is increasing. Practices that show sudden increases in HCC capture after adopting an AI tool are likely to attract audit attention. The defence — "the AI surfaced it" — does not transfer liability. The clinician who codes the diagnosis is the one who must be able to justify it clinically.
What to verify: For every AI-suggested diagnosis, confirm that there is current clinical evidence supporting the condition's active status. Do not code a diagnosis simply because the AI flagged it from a historical note. The diagnosis must reflect the patient's current clinical reality.
The Omission Risk
This is the mirror image of overcoding, and it applies in every healthcare system.
A pre-chart is a summary. Summaries omit things. When the AI condenses hundreds of pages of medical records into a one-page brief, it makes decisions about what to include and what to leave out. Those decisions are algorithmic, not clinical.
If the AI omits a critical finding — a flagged concern buried in a specialist letter, a medication change noted in a hospital discharge but not coded in the problem list, an abnormal result that was filed but not actioned — and the clinician misses it because they relied on the pre-chart rather than reviewing the raw record, the patient may be harmed.
The liability framework is clear: the clinician is responsible for the decisions they make, regardless of what tools they used. But the risk profile is different from a missed finding in the raw EHR, because the clinician may reasonably argue that the AI's summary created a false sense of completeness. That argument will not eliminate liability, but it will complicate the medico-legal picture.
What to verify: For complex patients, always check the raw record for recent specialist correspondence, discharge summaries, and investigation results. Do not assume the pre-chart is comprehensive. Treat it as a starting point, not a complete picture.
The Anchoring Risk
Cognitive anchoring is one of the most studied biases in clinical decision-making — and AI pre-charting may amplify it.
When a clinician walks into the room having reviewed an AI-generated summary that highlights three problems and suggests two diagnoses, their cognitive frame is already set. They are more likely to confirm the AI's assessment than to challenge it. The pre-chart becomes a hypothesis before the clinician has assessed the patient independently.
This is particularly dangerous for presentations where the patient's story does not match the historical record — where the documented diagnosis is wrong, where a new problem is overshadowed by the pre-charted agenda, or where the patient's emotional state suggests something the data cannot capture.
What to verify: Maintain the habit of open-ended assessment. Ask the patient what brings them in today before consulting the pre-chart. Let the patient set the agenda before the AI does.
The Data Quality Risk
Pre-charting AI is only as good as the data it ingests. If the EHR contains coding errors, duplicate problem lists, unstructured scanned documents that the AI cannot parse, or conflicting information from different sources, the pre-chart inherits those errors — and presents them as a coherent narrative.
This is particularly problematic in healthcare systems with fragmented records (the US, with multiple EHR systems across providers) and in UK general practice where patient records may span decades, with legacy coding, scanned paper documents, and inconsistencies from multiple clinicians documenting over time.
What to verify: If something in the pre-chart looks inconsistent or surprising, check the source. Most pre-charting tools provide clickable links back to the original data — use them.
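The "check the source" habit corresponds to a simple data shape: each summary claim should carry references back to the records it was derived from, and a claim with no traceable source deserves extra suspicion. The following is a minimal, hypothetical sketch of that idea — the class and field names are invented for illustration and are not any vendor's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class SourceRef:
    document_id: str  # identifier of the original EHR document or note
    date: str         # date of the source record
    snippet: str      # the original text the claim was derived from

@dataclass
class PreChartItem:
    claim: str                                        # AI-generated summary statement
    sources: list[SourceRef] = field(default_factory=list)

    def is_verifiable(self) -> bool:
        # A summary claim is only checkable if it links back to at least one source.
        return len(self.sources) > 0

item = PreChartItem(
    claim="Possible CKD stage 3",
    sources=[SourceRef("doc-123", "2023-05-14", "eGFR 52 mL/min/1.73m2")],
)
print(item.is_verifiable())
```

A tool whose summary items expose this kind of provenance makes the clinician's verification step a one-click action; a tool that presents conclusions without source links forces a manual hunt through the raw record.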
The Consent and Data Scope Risk
Pre-charting tools that pull data from external sources — health information exchanges, claims databases, other providers' EHRs — raise questions about what the clinician is expected to have reviewed.
If the AI surfaces a finding from an external source that the clinician would not have seen through normal workflow, does the clinician now have a duty to act on it? If they do not act and the patient is harmed, has the AI's data integration created a new liability surface?
In the UK, this is currently a theoretical risk — NHS GP records are largely self-contained within EMIS or SystmOne, and cross-provider data sharing is limited. But as interoperability improves and pre-charting tools potentially draw on shared care records, the question will become practical.
What to verify: Understand what data sources your pre-charting tool accesses. Know whether it is pulling from your clinical system alone or from external sources. If external data is included, review it with the same diligence you would apply to any clinical correspondence.
The Clinical Knowledge Verification Layer
Every risk described above is mitigated by the same practice: the clinician verifying the AI's clinical recommendations against trusted, guideline-grounded sources.
iatroX serves this verification function. When a pre-chart suggests a diagnosis, the clinician can check the NICE-recommended diagnostic criteria via Ask iatroX in seconds. When it flags a care gap, the clinician can verify the current guideline pathway. When it suggests a medication review, the clinician can check BNF interactions and dosing via the Knowledge Centre.
The pre-chart accelerates preparation. The knowledge layer ensures the preparation is clinically sound. Both are necessary. Neither is sufficient alone.
A Pre-Visit Verification Checklist
Before every pre-charted visit, ask yourself:
- Have I reviewed the pre-chart as a starting point, not a conclusion?
- Have I checked for recent correspondence, investigations, and discharge summaries that may not appear in the summary?
- Am I prepared to let the patient set the agenda before I default to the pre-charted agenda?
- For any AI-suggested diagnoses, can I justify them with current clinical evidence?
- For any flagged care gaps, have I verified the current guideline recommendation?
- Am I aware of what data sources the pre-chart drew from?
This takes one to two minutes. It converts an AI-generated summary from a potential liability into a genuine clinical advantage.
Conclusion
AI pre-charting is a powerful tool. The evidence for time savings, improved preparation, and better diagnosis capture is strong. But the risks — overcoding, omission, anchoring, data quality, and expanded liability — are real and require active mitigation.
The mitigation is not complicated. It is the same practice that has always defined good medicine: verify before you act, think before you accept, and maintain your own clinical judgement as the final authority. Tools like iatroX make the verification step fast and reliable. The AI prepares the chart. You verify the content. The patient gets the benefit of both.
