Clinical brainstorming is not diagnosis. It is the structured thinking that precedes and supports diagnosis — considering differentials, reviewing red flags, planning investigations, weighing management options, and thinking through clinical uncertainty. Doctors already brainstorm: with colleagues in corridors, with seniors on ward rounds, with registrars during handover, and with themselves while reviewing notes between patients. AI can support this process — without replacing the clinician's judgment.
What Clinical Brainstorming Means
Brainstorming is the process of generating and evaluating possibilities before committing to a decision. In clinical practice, this includes:
- structuring a differential diagnosis for an unfamiliar or complex presentation
- reviewing red flags that might change the urgency or trajectory of management
- considering investigation options and their diagnostic yield in the specific clinical context
- thinking through management approaches and their trade-offs for the individual patient
- evaluating referral thresholds and the criteria that should trigger escalation
- preparing to explain a clinical plan to the patient in language they can understand
This type of thinking is fundamentally different from looking up a specific fact. "What is the dose of ramipril in renal impairment?" is an information retrieval question — it has one correct answer. "How should I think through this patient with progressive fatigue and weight loss?" is a brainstorming question — it has multiple valid reasoning pathways that need to be considered, prioritised, and contextualised.
Why Doctors Already Brainstorm with Colleagues, Guidelines and Search
Brainstorming is not new to clinical practice. It happens constantly. Every corridor conversation with a colleague — "I've got a patient with X, what do you think?" — is brainstorming. Every MDT discussion where specialists share perspectives on a complex case is structured brainstorming. Every time a clinician opens NICE CKS and reads through a management pathway they already know, they are using the guideline as a brainstorming scaffold — reminding themselves of considerations they might otherwise overlook.
The limitation is availability and timing. Colleagues are not always free — they have their own patients, their own clinics, their own time pressures. Seniors are not always accessible — particularly for trainees working nights, weekends, or in understaffed departments. Guideline pages are available but do not respond to follow-up questions — they present static content regardless of the clinician's specific uncertainty. The brainstorming happens when it can, not always when it is needed.
This creates a gap: the moment of maximum clinical uncertainty is often the moment of minimum colleague availability. The 2am ward round, the solo GP managing a complex patient in a time-pressured surgery, the foundation doctor encountering a presentation they have never seen before — these are the moments when brainstorming support would be most valuable, and when it is most difficult to access.
How AI Can Support Structured Thinking
AI can serve as an always-available brainstorming partner — not replacing colleagues or seniors, but supplementing them in the moments when human support is unavailable.
The key distinction: AI brainstorming should support the clinician's thinking process, not short-circuit it. The AI provides structure, prompts consideration of possibilities, retrieves relevant information, and helps organise reasoning. The clinician evaluates, contextualises, applies patient-specific factors, and makes the decision.
Practical examples of responsible use:
"Help me structure a differential for painless jaundice." The AI generates a structured differential organised by mechanism — obstructive causes (pancreatic head mass, cholangiocarcinoma, choledocholithiasis), hepatocellular causes (cirrhosis, viral hepatitis, drug-induced), and haemolytic causes (haemolytic anaemias, ineffective erythropoiesis) — with key distinguishing clinical features for each category. The clinician evaluates which categories fit the patient's specific presentation, age, risk factors, and examination findings.
"What red flags should I consider in acute headache?" The AI lists red flags with their clinical significance: thunderclap onset (subarachnoid haemorrhage), papilloedema (raised intracranial pressure), focal neurology (space-occupying lesion, stroke), fever with neck stiffness (meningitis/encephalitis), new headache in a patient over 50 (giant cell arteritis, malignancy), history of malignancy (brain metastases), anticoagulation (intracranial haemorrhage), and recent head trauma. The clinician assesses which apply to their specific patient and acts accordingly.
"Which risk score applies in suspected pulmonary embolism?" The AI explains the YEARS criteria, Wells score, and revised Geneva score — when each is recommended by guidelines, what the thresholds mean, and how they inform the decision between D-dimer testing, CTPA, and clinical observation. The clinician selects and applies the appropriate score.
"How should I think through recurrent syncope?" The AI structures the approach by mechanism — cardiac (arrhythmia, structural heart disease, aortic stenosis), neurally mediated (vasovagal, situational, carotid sinus), orthostatic (hypovolaemia, autonomic failure, medication-related), and neurological (seizure masquerading as syncope) — with investigation pathway for each category. The clinician integrates with the patient's history, examination, and prior workup.
"What are the key management considerations for gout?" The AI retrieves guideline-informed management: acute treatment (NSAIDs, colchicine, or corticosteroids depending on contraindications), prophylaxis during urate-lowering therapy initiation (colchicine cover), indications for urate-lowering therapy (recurrent attacks, tophi, renal impairment, urate nephropathy), target serum urate levels, and monitoring requirements. The clinician applies to the specific patient considering their comorbidities, concurrent medications, and preferences.
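The risk scores mentioned above are, computationally, simple weighted checklists. As a minimal illustrative sketch (not a clinical tool, and to be verified against current guidance before any real use), the widely published two-tier Wells score for pulmonary embolism can be expressed as a sum of item weights with a single decision threshold. The item names below are our own identifiers, not official terminology:

```python
# Illustrative sketch of the two-tier Wells score for suspected PE.
# NOT for clinical use; weights follow commonly published versions of
# the score and should be checked against current local guidance.

WELLS_PE_ITEMS = {
    "clinical_signs_of_dvt": 3.0,        # clinical signs/symptoms of DVT
    "pe_most_likely_diagnosis": 3.0,     # alternative diagnosis less likely than PE
    "heart_rate_over_100": 1.5,
    "recent_immobilisation_or_surgery": 1.5,  # ≥3 days' immobilisation or surgery in past 4 weeks
    "previous_dvt_or_pe": 1.5,
    "haemoptysis": 1.0,
    "active_malignancy": 1.0,
}


def wells_pe_score(positive_findings: set[str]) -> float:
    """Sum the weights of the findings marked positive by the clinician."""
    return sum(WELLS_PE_ITEMS[item] for item in positive_findings)


def two_tier_category(score: float) -> str:
    """Two-tier interpretation: a score above 4 suggests 'PE likely'."""
    return "PE likely" if score > 4 else "PE unlikely"


if __name__ == "__main__":
    findings = {"heart_rate_over_100", "previous_dvt_or_pe", "haemoptysis"}
    score = wells_pe_score(findings)  # 1.5 + 1.5 + 1.0 = 4.0
    print(score, "->", two_tier_category(score))
```

The point of the sketch is the division of labour described above: the arithmetic is trivial and safely delegated to a calculator, while choosing which score applies, judging each item (is an alternative diagnosis really less likely than PE?), and acting on the category remain clinical judgments.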
What AI Should Not Do
AI should not make clinical decisions. It should not override clinical judgment. It should not be used as the sole basis for diagnosis or treatment without verification against authoritative sources. It should not be trusted blindly on drug doses, rare conditions, or complex management pathways. It should not be asked to replace the human factors that define good clinical care — empathy, contextual awareness, patient preferences, clinical intuition developed over years of experience.
The responsible use model is clear: AI retrieves information and structures reasoning. The clinician interprets, verifies, contextualises, and decides. The decision is always the clinician's — and documentation should reflect that the clinical decision was made by the professional using their judgment, supported by verified authoritative sources.
How iatroX Supports Brainstorming and Retrieval
iatroX supports both of these workflows: information retrieval and clinical brainstorming. Ask iatroX handles specific lookups ("What does NICE recommend for X?") as well as open-ended brainstorming ("Help me think through Y"). Calculators support the quantitative dimension of clinical reasoning. Exam Q-banks reinforce the knowledge base that makes brainstorming effective, because the quality of clinical brainstorming depends directly on the depth and breadth of the clinician's underlying knowledge.
The Clinician Remains the Decision-Maker
Clinical AI is a tool, not an oracle. It retrieves, structures, and suggests. The clinician assesses, verifies, contextualises, and decides. The value is in the combination — AI speed and breadth combined with human judgment, experience, and contextual awareness. Neither alone is sufficient for the complexity of clinical practice. Together, they are more effective than either.
The next phase of clinical AI is not about AI replacing clinicians. It is about AI supporting clinicians in the moments when they need to think clearly, retrieve information quickly, and make good decisions under time pressure. That is what brainstorming support means in practice — helping doctors think, not thinking for them.
