The Bottom Line
- Read the privacy policy and terms before using any external AI tool for clinical work.
- Check retention, secondary use, and whether data can be used to improve services.
- If you need enterprise-grade controls, look for formal agreements and security commitments.
Governance is not optional. Even if you never enter identifiable patient data, you may still input sensitive operational information. The right stance is to understand what is collected, how it is used, what is shared, how deletion works, and what commitments the company makes publicly.
Check 1 — What is collected?
Expect account identifiers, queries, usage telemetry, and any uploaded content. Make sure you know what counts as 'user content' under the terms.
Check 2 — Retention and deletion
Look for clear retention periods and deletion controls. If it’s vague, assume longer retention.
Check 3 — Secondary use
Does the policy allow aggregated or de-identified use of your data for analytics or service improvement? Understand the exact wording and your organisation's stance on it.
Check 4 — Security commitments
Look for a security page/trust centre that describes commitments and governance structure.
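The four checks above can be sketched as a simple review record that fails closed: approval is withheld until every question has an explicit answer. This is a minimal illustration, not an official tool; the class and field names are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ToolReview:
    """Hypothetical record for reviewing a third-party AI tool's policies."""
    tool_name: str
    collection_reviewed: bool = False           # Check 1: what is collected?
    retention_days: Optional[int] = None        # Check 2: None = vague, assume longer
    secondary_use_known: Optional[bool] = None  # Check 3: is secondary use permitted?
    has_security_page: bool = False             # Check 4: published security commitments

    def approved(self) -> bool:
        # Fail closed: any unanswered or missing item blocks approval.
        return (
            self.collection_reviewed
            and self.retention_days is not None
            and self.secondary_use_known is not None
            and self.has_security_page
        )

review = ToolReview(tool_name="ExampleAI")
print(review.approved())  # False: no check has been answered yet
```

The fail-closed default mirrors the guidance in Check 2: if a policy is vague, treat it as the worse case rather than assuming the answer.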
Default safe rule
If you wouldn’t put it in a public email, don’t paste it into a third-party AI tool without explicit organisational approval and a clear governance basis.
Sources
- OpenEvidence: Privacy policy
- OpenEvidence: Terms of use
- OpenEvidence: Security and compliance
- OpenEvidence: Business Associate Agreement (where relevant)