The Bottom Line
- AI is best treated as a fast research assistant — not an authority.
- Your safety move is provenance: every key claim needs a source you can inspect.
- Use “Cite → Quote → Corroborate”: source the claim, verify the quote, cross-check elsewhere.
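The Cite → Quote → Corroborate workflow can be sketched as a simple gate: a claim earns trust only when all three checks pass. This is an illustrative model, not a real tool; the names `Claim` and `verify` are hypothetical.

```python
from dataclasses import dataclass

# Hypothetical sketch of Cite -> Quote -> Corroborate as three boolean gates.
@dataclass
class Claim:
    text: str
    cited: bool = False          # Cite: a source exists and you opened it
    quote_matches: bool = False  # Quote: the quoted passage appears in the source
    corroborated: bool = False   # Corroborate: an independent source agrees

def verify(claim: Claim) -> str:
    """Return 'trusted' only when every check passes, in order."""
    if not claim.cited:
        return "unverified: no inspectable source"
    if not claim.quote_matches:
        return "unverified: quote does not match source"
    if not claim.corroborated:
        return "unverified: no independent corroboration"
    return "trusted"
```

The order matters: there is no point checking a quote against a source you cannot open, and corroboration only counts once the primary claim is pinned down.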
High-risk failure mode
If an AI gives confident specificity without traceable sources (names, numbers, thresholds, “rules”), assume it may be wrong until verified.
Hallucination tells (pattern recognition)
- Over-precise numbers without a source.
- Citations that don’t match the claim when opened.
- Named organisations/documents that don’t exist.
- Confident language + zero uncertainty in an uncertain area.
- “Policy-like” statements with no official link.
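The first tell above (a number with no source) is mechanical enough to sketch as a heuristic. This is a toy filter under loose assumptions: the regexes and the function name `flag_unsourced_numbers` are illustrative, and a flagged sentence still needs human verification.

```python
import re

# Toy heuristic for one hallucination tell: a sentence that states a
# number but carries no visible citation marker (URL, [n], or "see ...").
NUMBER = re.compile(r"\b\d+(?:\.\d+)?%?")
CITATION_MARKER = re.compile(r"https?://|\[\d+\]|\(see ", re.IGNORECASE)

def flag_unsourced_numbers(sentence: str) -> bool:
    """True when the sentence contains a figure but no citation marker."""
    return bool(NUMBER.search(sentence)) and not CITATION_MARKER.search(sentence)
```

A heuristic like this only surfaces candidates for checking; it cannot tell a correct unsourced number from an invented one.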
Sources
- NIST AI Risk Management Framework (official): governance and risk thinking
- BMJ: How to read a paper (use it to verify claims)