From Hickam’s Dictum to Bayes’ Rule: Decision-Making Frameworks for Difficult Diagnoses


In medical school, we are taught that if we listen long enough, the patient will tell us the diagnosis. In General Practice, we listen, examine, and are often left with three competing probabilities and a patient who needs an answer in six minutes.

The modern "difficult diagnosis" is rarely a single rare pathogen (the "House M.D." model). It is a 78-year-old on twelve medications presenting with "tiredness and dizzy spells."

To navigate this safely, we need better mental models. We need to move beyond "gut feeling" and operationalise the frameworks of Hickam, Occam, and Bayes.

Why “difficult diagnoses” are rising

The complexity of primary care is compounding. We face a "low signal, high noise" environment:

  1. Multi-morbidity: Patients often have three chronic conditions flaring simultaneously.
  2. Polypharmacy: Side effects mimic pathology.
  3. Low Prevalence: Unlike in a specialist clinic, the "base rate" of serious disease in GP is low, making false positives dangerous.

In this environment, the old debate between "Keep it simple" (Occam) and "It’s complicated" (Hickam) is not just philosophical—it dictates your safety netting strategy.

Hickam vs Occam — the useful version

Most clinicians know the memes, but few use the heuristics effectively.

Occam’s Razor: "Entities should not be multiplied without necessity"

  • The Rule: Look for the single unifying diagnosis that explains all symptoms.
  • When to use: Young, previously healthy patients with acute presentations.
  • The Trap: Trying to force a "unifying theory" on a geriatric patient, leading to the hunt for a rare vasculitis when they actually just have heart failure and a UTI.

Hickam’s Dictum: "A patient can have as many diseases as they damn well please"

  • The Rule: Common diseases occur commonly, and often together.
  • When to use: Elderly, multi-morbid patients, or when the "unifying diagnosis" is vanishingly rare.
  • The Heuristic: "Is it one zebra, or three horses?"

Mini-Case Vignette: The Tired Breathless Patient

A 70-year-old male presents with 2 months of fatigue, breathlessness, and 3kg weight loss.

  • Occam says: "Lung Cancer." (One diagnosis explains all three).
  • Hickam says: "COPD exacerbation (breathlessness) + Depression (fatigue) + Poor dentition (weight loss)."

The Lesson: Occam generates your "Must-Not-Miss" list. Hickam generates your "Most Likely" list. You need both.

Bayes’ Rule in plain clinical English

You do not need a calculator to be a Bayesian clinician. You just need to accept one truth: Tests do not give answers; they only shift probabilities.

Pre-test probability ≠ “gut feel”

This is your starting estimate before you order the bloods. It is composed of:

  1. Base Rate: How common is this disease in my population? (e.g., Giant Cell Arteritis is rare in 30-year-olds).
  2. Risk Factors: Does this patient have specific features that raise the risk?

Likelihood ratios: the clinician-friendly shortcut

Forget sensitivity and specificity. Think in Likelihood Ratios (LR).

  • LR+ (Positive Likelihood Ratio): How much does a positive test boost the probability?
    • LR > 10: Massive shift (Rule In).
    • LR 2–5: Moderate shift (Need more evidence).
  • LR- (Negative Likelihood Ratio): How much does a negative test lower the probability?
    • LR < 0.1: Massive drop (Rule Out).
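The odds arithmetic behind likelihood ratios is short enough to sketch in a few lines of Python (the function name and the example numbers are illustrative, not from any specific test):

```python
def post_test_probability(pre_test_prob: float, lr: float) -> float:
    """Apply a likelihood ratio via odds: convert probability to odds,
    multiply by the LR, then convert back to a probability."""
    pre_odds = pre_test_prob / (1 - pre_test_prob)
    post_odds = pre_odds * lr
    return post_odds / (1 + post_odds)

# A 30% pre-test probability plus a strong positive test (LR+ = 10):
print(round(post_test_probability(0.30, 10), 2))   # ≈ 0.81 — probably rule in
# The same 30% with a strong negative test (LR- = 0.1):
print(round(post_test_probability(0.30, 0.1), 2))  # ≈ 0.04 — probably rule out
```

Notice that the same test shifts a 30% suspicion dramatically in either direction — the LR describes the power of the test, not the answer.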

“Why your negative test didn’t rule it out”

If you strongly suspect a DVT (pre-test probability 80%) and the D-dimer is negative (LR- 0.1), the probability drops — but only to around 30%. That is still far too high to send the patient home.

  • Bayesian Lesson: If the story is convincing, a negative test (unless it is a gold standard) may not be enough to stop.
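You can verify the DVT numbers with the same odds arithmetic (a minimal sketch; the function name is illustrative):

```python
def post_test_probability(pre_test_prob: float, lr: float) -> float:
    """Probability -> odds -> multiply by LR -> back to probability."""
    pre_odds = pre_test_prob / (1 - pre_test_prob)
    post_odds = pre_odds * lr
    return post_odds / (1 + post_odds)

# Strong clinical suspicion of DVT (80%), negative D-dimer (LR- = 0.1):
print(round(post_test_probability(0.80, 0.1), 2))  # ≈ 0.29
```

A roughly 1-in-3 residual probability of DVT after a "negative" test is exactly why the convincing story trumps the single result.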

A 7-step workflow for uncertainty

Use this loop when you feel stuck.

  1. Problem Representation: Summarise the case in one sentence. "75F with acute-on-chronic confusion and new urinary incontinence."
  2. Rule out “can’t miss” diagnoses: Apply Occam. What single thing kills them by tomorrow? (Sepsis, subdural).
  3. Decide if the base rate is high enough to test: Is the condition plausible? If the pre-test prob is <1%, testing will mostly yield false positives.
  4. Choose tests that shift probability: Don't order a "fishing expedition." Ask: "Will a positive result change my management?"
  5. Treat/observe vs investigate: Sometimes, Time is the best diagnostic test.
  6. Safety net + follow-up triggers: Define the specific "failure of therapy" that triggers a re-think.
  7. Reflection + learning capture: Did the probability shift as you expected?

Common failure modes

  • Premature Closure: You stop at the first "Hickam" explanation (e.g., "just a virus") without checking the "Occam" red flag.
  • Over-trusting a test: Believing a negative Chest X-ray rules out all lung cancer (it has a false negative rate; if haemoptysis persists, you must CT).
  • Base-rate neglect: Diagnosing a rare tropical disease in a patient who hasn't travelled, just because the symptoms "fit."

How AI fits (without becoming a diagnostic oracle)

AI is not a replacement for Bayesian reasoning; it is a tool to help you estimate the variables.

  • AI as Differential Expander: It helps you find the "Occam" unifiers you forgot.
  • AI as “Bayes Coach”: It can remind you of the "Base Rate" risk factors you might overlook.
  • Guardrails: Always verify. AI models are prone to "hallucinating zebras"—suggesting rare diagnoses without respecting the low base rate.

Where iatroX fits

We built iatroX to support this exact reasoning loop.

  • Brainstorm: Use this as your Structured Reasoning Workspace.
    • Input: "65M, back pain, weight loss."
    • Output: It prompts you to consider both the "Occam" (Myeloma/Mets) and the "Hickam" (Mechanical pain + Depression). It forces you to list the discriminating questions.
  • Ask iatroX: Use this to check the Pre-test Probability data. "What is the prevalence of GCA in a 50-year-old?"
  • CPD: Turn your difficult diagnosis into a reflection case. "I used a Bayesian approach to rule out PE despite a borderline D-dimer."

FAQ

Is Bayes' rule practical in a 10-minute consultation? You don't need the maths. You just need the concept: "My suspicion is low/medium/high. Will this test shift it enough to cross the treatment threshold?" If the answer is no, don't order the test.
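That threshold concept can be made concrete in a small sketch (the function `worth_testing` and all thresholds are illustrative assumptions, not validated decision rules):

```python
def post_test_probability(pre_test_prob: float, lr: float) -> float:
    """Probability -> odds -> multiply by LR -> back to probability."""
    odds = pre_test_prob / (1 - pre_test_prob) * lr
    return odds / (1 + odds)

def worth_testing(pre_test_prob: float, lr_pos: float, lr_neg: float,
                  treat_threshold: float, discharge_threshold: float) -> bool:
    """A test is only worth ordering if at least one possible result
    would push the probability across a decision threshold."""
    crosses_up = post_test_probability(pre_test_prob, lr_pos) >= treat_threshold
    crosses_down = post_test_probability(pre_test_prob, lr_neg) <= discharge_threshold
    return crosses_up or crosses_down

# Strong test at 30% suspicion: a positive result would cross the treat line.
print(worth_testing(0.30, lr_pos=10, lr_neg=0.1,
                    treat_threshold=0.80, discharge_threshold=0.05))  # True
# Weak test at the same suspicion: neither result changes management.
print(worth_testing(0.30, lr_pos=2, lr_neg=0.5,
                    treat_threshold=0.80, discharge_threshold=0.05))  # False
```

If neither a positive nor a negative result would change what you do, the test is a fishing expedition.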

What’s the difference between pre-test probability and likelihood? Pre-test probability is your estimate before the test (based on history). Likelihood (Ratio) is the power of the test to change that estimate.

Why do false positives dominate in low-prevalence settings? If a disease affects 1 in 1000 people, even a test with 95% specificity will generate 50 false positives for every 1 true case. This is why "screening" asymptomatic people is dangerous in General Practice.
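The arithmetic above is worth working through once (a sketch assuming, for simplicity, a test that catches every true case):

```python
population = 100_000
prevalence = 1 / 1000   # 1 in 1000 -> 100 true cases in this population
sensitivity = 1.0       # simplifying assumption: no missed cases
specificity = 0.95      # 5% of healthy people test positive

diseased = population * prevalence            # 100
healthy = population - diseased               # 99,900
true_positives = diseased * sensitivity       # 100
false_positives = healthy * (1 - specificity) # 4,995

print(round(false_positives / true_positives))  # ≈ 50 false positives per true case
ppv = true_positives / (true_positives + false_positives)
print(round(ppv, 2))  # ≈ 0.02 — a positive result is still ~98% likely to be wrong
```

This is the base-rate problem in one screen: with a 1-in-1000 prevalence, even a "95% accurate" test leaves a positive result far more likely false than true.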


Struggling with a complex case? Use Brainstorm to map out your Occam and Hickam lists in seconds.
