The first wave of medical AI was about speed and fluency — generating answers that sounded clinically competent, faster than manual search. The next wave is about verification: can the answer be reviewed, attributed, governed, and trusted? Doximity's PeerCheck represents one of the most visible moves in this direction.
What Is Doximity Ask?
Doximity Ask (formerly DoxGPT) is positioned as Doximity's physician-verified clinical AI assistant, part of the Doximity Clinical AI Suite alongside Scribe (ambient note generation) and Dialer (HIPAA-compliant calling). Doximity is the largest professional medical network in the United States — over 3 million registered members representing approximately 85% of US physicians, with 720,000 unique clinicians using workflow tools in the most recent quarter.
The August 2025 acquisition of Pathway Medical for $63 million significantly expanded Ask's clinical capabilities, adding 3,200+ drug monographs and structured medical content spanning guidelines, peer-reviewed evidence, and landmark trials. Doximity's own FAQ describes Ask as providing referenced clinical answers, chart-note templates, patient education material, translation, and literacy-level adjustment — a clinical workflow tool, not just a search box.
What Is PeerCheck?
PeerCheck is Doximity's physician-led review layer. Over 10,000 physician experts, drawn from the authors of published medical research and from domain specialists, review AI-generated clinical answers for accuracy, evidence strength, and potential bias. Eric Topol (cardiologist, founder of the Scripps Research Translational Institute) and former US Surgeon General Regina Benjamin serve as co-editors-in-chief.
PeerCheck-certified answers are labelled within Ask with certification badges and links to reviewing physicians' Doximity profiles. The clinician using the tool can see which specific physician reviewed the answer they are reading — creating a visible chain of accountability from AI generation to physician verification.
In a company-published evaluation in which more than 1,300 physicians compared Ask (then DoxGPT) against OpenEvidence, UpToDate, and ChatGPT, Ask was preferred as the best clinical answer at more than twice the rate of the nearest competitor (61% vs 26% for OpenEvidence). This is company-published data, not independently validated, but the sample size and comparative design are meaningful signals.
Why Physician Validation Matters
AI can generate fluent, plausible clinical answers that are subtly incomplete, overconfident, outdated, or biased. A Stanford-Harvard study found that AI can cause clinical harm in up to 22% of real patient cases. The specific risks include omission (missing a critical consideration), outdated evidence (citing superseded recommendations), false certainty (presenting uncertain evidence as definitive), jurisdictional misalignment (US guidelines applied to non-US contexts), and hallucination (fabricating plausible-sounding but incorrect clinical claims).
Physician review addresses these risks by adding domain expertise to the verification layer. A specialist reviewing an AI answer in their field can detect nuances that automated systems miss — clinical subtleties, evidence-hierarchy issues, practice-pattern considerations, and patient-safety concerns that require human judgement to identify.
Doximity's CEO told investors: "AI is fast, but [clinicians] want textbook-trusted and AI fast. That's where PeerCheck is an incredible opportunity — these 10,000 noted authors are putting their name at the top of that, and that name up there, that's trust."
What This Means for UK Clinicians
Doximity is a US platform serving US physicians. PeerCheck's reviewers are US specialists reviewing answers grounded in US clinical practice — US guidelines, US prescribing norms, US drug labels, US referral pathways. The model is important, but UK clinicians need UK-specific trust infrastructure.
UK clinical AI requires alignment with NICE and CKS guidance, MHRA-approved medicines information (SmPCs, as published on the eMC), awareness of UK-specific drug safety alerts, and fit with UK referral pathways, formularies, and professional workflows.
A physician-validated answer from a US specialist — however expert — may not reflect UK prescribing practice, UK-licensed indications, or UK guideline recommendations. The trust architecture must match the clinical jurisdiction.
Where iatroX Fits
iatroX approaches the same trust problem from a UK clinical workflow perspective. It is built for clinicians and healthcare professionals, not as a consumer symptom checker. Its clinical AI layer is designed to maximise fidelity to the underlying research, guidelines, and medicines information it retrieves — using source-grounded retrieval from curated UK clinical sources, algorithmic fidelity controls, provenance display, fail-safe behaviour (narrowing, abstaining, or escalating when confidence is inadequate), and feedback mechanisms allowing clinicians to flag unclear or inaccurate outputs.
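The fail-safe behaviour described above (narrowing, abstaining, or escalating when confidence is inadequate) can be pictured as a simple dispatch over a grounding score. The sketch below is illustrative only: the `RetrievedAnswer` type, the thresholds, and the action names are assumptions for explanation, not iatroX's actual implementation.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class Action(Enum):
    ANSWER = auto()    # confidence adequate: return the grounded answer with provenance
    NARROW = auto()    # moderate confidence: restrict to the best-supported sub-question
    ABSTAIN = auto()   # low confidence or no sources: decline and point to primary sources
    ESCALATE = auto()  # conflicting evidence: flag for human review

@dataclass
class RetrievedAnswer:
    text: str
    sources: list[str]        # provenance, e.g. NICE guideline IDs or SmPC sections
    confidence: float         # 0.0-1.0 grounding score from the retrieval layer
    conflicting: bool = False # set when retrieved sources disagree

def fail_safe(ans: RetrievedAnswer,
              answer_min: float = 0.8,
              narrow_min: float = 0.5) -> Action:
    """Map a retrieved answer to a fail-safe action. Thresholds are illustrative."""
    if ans.conflicting:
        return Action.ESCALATE
    if not ans.sources:
        return Action.ABSTAIN          # never answer without provenance
    if ans.confidence >= answer_min:
        return Action.ANSWER
    if ans.confidence >= narrow_min:
        return Action.NARROW
    return Action.ABSTAIN
```

The design choice this illustrates is that abstention is the default: an answer is only surfaced when both provenance and confidence clear explicit bars, rather than treating a fluent generation as presumptively correct.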
Doximity's PeerCheck demonstrates the value of visible physician review as trust infrastructure for US clinical AI. iatroX addresses the same trust problem for the UK professional context, grounding answers in UK sources and workflows so that the trust architecture matches the clinical jurisdiction.
