MRCGP AKT & SCA q-banks: what actually moves the needle (and how adaptive learning helps)


If you’re preparing for MRCGP, you already know the problem: you can spend hundreds of hours “doing questions” and still feel underprepared for the parts that actually determine pass/fail — especially the jump from AKT knowledge application to SCA consultation performance.

This article is a practical guide to choosing (and using) an MRCGP AKT qbank and SCA practice resources in a way that reliably improves outcomes — with a clear explanation of what “adaptive learning” is supposed to do for GP trainees, and how to use AI tools safely during revision.

Not medical, legal, or exam policy advice. Always check the latest RCGP exam pages and regulations before your sitting.


What the AKT and SCA assess (and common failure modes)

The AKT and SCA are core MRCGP assessments set out by the RCGP.

  • The applied knowledge test (AKT) assesses the knowledge base underpinning independent general practice in the UK, within the NHS context.
    See RCGP overview: MRCGP: applied knowledge test (AKT)

  • The simulated consultation assessment (SCA) assesses a candidate’s ability to integrate and apply clinical, professional, and communication skills in GP consultations.
    See RCGP overview: MRCGP: simulated consultation assessment (SCA)

What the AKT really tests (and how people commonly miss marks)

According to the RCGP exam regulations, the AKT covers three broad areas:

  • clinical knowledge
  • critical appraisal / evidence-based practice
  • organisation, management, and administrative issues

See: RCGP regulations: examination structure and content

Common AKT failure modes (what tends to “bleed marks”):

  • pattern recognition without reasoning (spotting a diagnosis but not applying best next step)
  • weak exam technique (misreading single best answer (SBA) stems, anchoring early, not eliminating options)
  • evidence-based practice gaps (stats/interpretation feels “optional” until it isn’t)
  • organisation / ethics / law under-revision (often ignored, reliably tested)
  • not training under timed conditions (knowing the content but losing marks through poor pacing)

Also note that the AKT is not purely one question style. The RCGP describes the AKT as containing mainly SBAs, but also other formats. See: Preparing for the AKT

What the SCA really tests (and why good trainees still struggle)

The SCA is built around three performance domains and assessed across multiple consultations, which means consistency matters as much as “brilliance” on one case.

The RCGP describes the three domains as:

  • data gathering and diagnosis (DG&D)
  • clinical management and medical complexity (CM&C)
  • relating to others (RTO)

See: RCGP: marking and results for the SCA

Common SCA failure modes:

  • data gathering that doesn’t converge (history-taking without hypothesis testing; missing rule-outs for serious illness)
  • management that isn’t GP-realistic (over-investigating, under-safety-netting, unclear follow-up)
  • weak structure and time control (running out of time before management and safety-netting)
  • communication that is “nice” but not clinically effective (rapport without clarity, shared plan, or safeguarding)
  • not practising out loud (reading cases silently is not the same skill as performing a consultation)

RCGP’s consultation toolkit highlights the importance of structure and timing (including how a 12-minute consultation typically flows through domains). See: SCA consultation toolkit overview


Q-bank selection criteria (AKT vs SCA: different tools, different standards)

It is a mistake to pick one resource and force it to do both jobs.

AKT: what your qbank must deliver

A strong MRCGP AKT qbank should do four things well:

  1. breadth across the RCGP-tested areas
    You want systematic coverage across clinical knowledge, evidence-based practice, and organisation/management.

  2. applied decision-making (not textbook recall)
    The AKT rewards choosing the best next action in real GP context, not just naming conditions.

  3. high-quality explanations
    The explanation should teach:

    • why the right answer is right
    • why the distractors are wrong
    • what the exam trap was

  4. timed exam mode that feels realistic
    If the platform doesn’t train pacing and SBA discipline, you’re leaving marks on the table.

Practical “green flags” for AKT qbanks:

  • mixed-question sessions (not only topic blocks)
  • “incorrects” and “flagged” review queues
  • performance analytics by category (including evidence-based practice and organisation)

SCA: what “SCA practice” resources must deliver

For SCA, you are not buying a question bank. You are buying performance training.

Strong SCA resources do this:

  • provide scenario variety (acute, chronic, mental health, safeguarding, uncertainty, multimorbidity)
  • train structure (opening, agenda, ideas/concerns/expectations (ICE), focused data gathering, convergence, plan, safety-netting)
  • include communication behaviours mapped to the RCGP domains
  • force you to practise out loud, under time pressure, with feedback

A quick self-test:

  • if you can “revise for hours” without ever speaking out loud or getting feedback, your SCA prep is probably underpowered.

Adaptive learning for GP training (weak-area targeting + repetition scheduling)

Adaptive learning is most useful when it changes what you practise next and when you revisit it, based on your performance.

In MRCGP prep, adaptivity should drive:

  • weak-area targeting (e.g., antibiotics, paeds fever, derm, stats, safeguarding)
  • error clustering (e.g., repeated mistakes in safety-netting; misinterpreting NICE-style thresholds)
  • spaced repetition (your weak topics return at planned intervals until they stabilise)
  • mixed practice (because real performance requires switching contexts)

What meaningful adaptivity looks like

A platform claiming adaptivity should allow you to:

  • see a live weakness dashboard (not just “overall percent correct”)
  • generate sessions from weakness clusters (topic + error pattern)
  • schedule review automatically (missed questions resurface strategically)
  • balance breadth (coverage) with depth (fixing repeats)
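
To make the weak-area-targeting and spaced-repetition ideas above concrete, here is a minimal sketch in Python of how a planner could decide what resurfaces and when. The interval ladder, the 80% accuracy threshold, and the topic names are illustrative assumptions for this sketch only, not how any particular qbank (or iatroX) implements adaptivity.

from dataclasses import dataclass, field
from datetime import date, timedelta

# Illustrative interval ladder (days) and accuracy threshold; both are
# assumptions for this sketch, not figures from any platform or the RCGP.
INTERVALS = [1, 3, 7, 14, 30]
THRESHOLD = 0.8


@dataclass
class TopicRecord:
    attempted: int = 0
    correct: int = 0
    interval_index: int = 0                      # rung on the interval ladder
    next_review: date = field(default_factory=date.today)

    @property
    def accuracy(self) -> float:
        return self.correct / self.attempted if self.attempted else 0.0


class AdaptivePlanner:
    """Tracks per-topic performance, surfaces the weakest topics,
    and pushes review dates out as a topic stabilises."""

    def __init__(self) -> None:
        self.topics: dict[str, TopicRecord] = {}

    def record_session(self, topic: str, attempted: int, correct: int,
                       today: date | None = None) -> None:
        today = today or date.today()
        rec = self.topics.setdefault(topic, TopicRecord())
        rec.attempted += attempted
        rec.correct += correct
        # Spaced repetition: widen the gap when the session meets the
        # threshold, otherwise reset it so the topic comes back soon.
        if correct / attempted >= THRESHOLD:
            rec.interval_index = min(rec.interval_index + 1, len(INTERVALS) - 1)
        else:
            rec.interval_index = 0
        rec.next_review = today + timedelta(days=INTERVALS[rec.interval_index])

    def due_topics(self, today: date | None = None) -> list[str]:
        today = today or date.today()
        return [t for t, r in self.topics.items() if r.next_review <= today]

    def weakest(self, n: int = 3) -> list[str]:
        # Weak-area targeting: lowest cumulative accuracy first.
        return sorted(self.topics, key=lambda t: self.topics[t].accuracy)[:n]


# Example: log two blocks (one back-dated), then ask what to practise next.
planner = AdaptivePlanner()
planner.record_session("paediatric fever", attempted=20, correct=12,
                       today=date.today() - timedelta(days=2))
planner.record_session("statistics / critical appraisal", attempted=20, correct=18)
print("Weakest areas:", planner.weakest(2))
print("Due for review today:", planner.due_topics())

The design choice that matters is the reset-on-failure behaviour: a topic only earns longer gaps between reviews once you have demonstrated it is stable.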

What to treat as “adaptive in name only”

  • a random question generator
  • “adaptive” that only repeats incorrect questions without pattern recognition
  • analytics that don’t translate into a review plan

AI tools: safe uses for revision (and unsafe uses)

AI can genuinely help with MRCGP preparation — but only if you keep it inside safe boundaries.

Safe uses for AI during revision

Used responsibly, AI tools can help you:

  • explain answers in plain English after you’ve attempted the question
  • generate feedback prompts (“what did I miss?”, “what alternative diagnoses mattered?”)
  • create micro-revision notes from your own errors (“one rule I will apply next time”)
  • rehearse SCA phrasing for common tasks (safety-netting lines, shared decision-making, uncertainty scripts)
  • support reflection (turning a practice case into a structured learning point)

Unsafe uses (and why they are risky)

Avoid using AI to:

  • reconstruct or share exam content (this can breach exam rules and confidentiality)
  • “memorise leaked questions” or simulate “exact exam stations”
  • outsource clinical judgement by accepting outputs without verification
  • paste identifiable patient information into a tool that isn’t approved for that use

The RCGP explicitly treats exam material as confidential and has misconduct policies governing the AKT and SCA.

A simple rule:

  • use AI to strengthen your thinking and communication, not to bypass the assessment process.

Where iatroX fits (adaptive loops + question bank workflow; UK context)

iatroX is built to sit between “doing questions” and “closing the gap”.

In GP training workflows, iatroX is most useful when you use it to create adaptive loops:

  1. do an AKT block (timed, mixed)
  2. review incorrects and identify the error pattern (not just the topic)
  3. use iatroX to clarify the underlying concept (and the common traps)
  4. re-test with targeted questions until the weakness stabilises
  5. convert repeated errors into short “rules” you apply under time pressure

For SCA, iatroX can support:

  • quick learning around consultation structure and safe management patterns
  • rehearsing explanation language, safety-netting scripts, and shared-plan phrasing
  • structured reflection after mock cases (what went well, what would I change)

The point is not “more content”. It is higher-quality repetition.


Sample weekly schedule (AKT + SCA split)

Below is a realistic template for a busy GP trainee. Adjust volume to your rota, but keep the structure.

Weekly structure (6–10 weeks out)

Monday

  • AKT: 1 timed mixed block (25–50 questions)
  • Review incorrects (30–45 minutes)

Tuesday

  • SCA: 2 cases out loud (timed) + feedback (partner/trainer/record yourself)
  • Build a “phrasing bank” (5–10 key lines)

Wednesday

  • AKT: targeted weakness block (20–40 questions)
  • iatroX loop: clarify 2–3 recurring error patterns

Thursday

  • SCA: 2 cases (focus on different domains: one DG&D-heavy, one CM&C-heavy)
  • Reflect: 3 bullet points (what to keep / stop / start)

Friday

  • AKT: timed block + short review
  • Mini-mock: 10-question sprint to train pace

Saturday

  • SCA: 4-case mini-circuit (timed, with strict structure and safety-netting)
  • Review: identify one behavioural change to implement

Sunday

  • Consolidation day:
    • revisit flagged AKT questions
    • rewrite your top 10 “rules”
    • plan next week based on analytics

The discipline that matters most

  • for AKT: review is where the marks are
  • for SCA: speaking out loud + feedback is where the marks are

FAQ (real queries)

“What is the best MRCGP AKT qbank?”

The best qbank is the one that:

  • covers the RCGP-tested areas consistently
  • teaches applied decision-making with strong explanations
  • gives analytics that translate into an action plan
  • supports timed mixed sessions and review loops

If a qbank has poor explanations or weak error correction, it becomes a false-confidence machine.

“How should I split time between AKT questions and SCA practice?”

A sensible default in the final 6–10 weeks:

  • 3–4 days/week: AKT timed blocks + review
  • 2–3 days/week: SCA cases out loud + feedback

Increase SCA intensity closer to the exam if consultation performance is your risk area.

“How many AKT questions should I do per week?”

There is no magic number. A strong target is:

  • 150–300 questions/week (depending on rota)
  • with high-quality review of incorrects and repeated mistakes

Doing fewer questions with structured review often beats high-volume, shallow practice.

“How do I improve SCA quickly?”

Fast wins come from:

  • strict structure (opening → agenda → focused data gathering → convergence → management → safety-netting)
  • rehearsed safety-netting and uncertainty phrases
  • recording yourself and correcting 1–2 behaviours per week
  • practising cases that make you uncomfortable (multimorbidity, safeguarding, mental health, complexity)

“Can I use AI tools for MRCGP revision?”

Yes, provided you use it safely. Use AI to:

  • explain answers after you have attempted the questions
  • rehearse SCA phrasing (safety-netting, shared decision-making, uncertainty scripts)
  • turn your own errors into short rules and structured reflection

Avoid using it to reconstruct or share exam content, or to outsource clinical judgement without verification.