If you’re revising for the UK medical licensing assessment (MLA / UKMLA), you’ll quickly discover two truths:
- there are plenty of question banks, and
- not all “UKMLA qbanks” are built to train what the exam is actually testing.
This guide is designed to help you choose a UKMLA revision question bank intelligently in 2026 — with a clear focus on exam realism, reliability, and what “adaptive learning” really means (and doesn’t mean) in medical exam prep.
What UKMLA/MLA is (who it’s for, why it exists, and the two parts)
The General Medical Council (GMC) introduced the medical licensing assessment (MLA) to set a consistent threshold of core knowledge, skills, and behaviours for doctors who want to practise in the UK.
In practical terms, the MLA is a two-part assessment:
- AKT (applied knowledge test)
- CPSA (clinical and professional skills assessment)
A key nuance (especially relevant for IMGs and UK students) is delivery:
- For UK medical students, MLA components are set/delivered by UK medical schools (with the GMC quality-assuring against common requirements).
- For international medical graduates (IMGs) who need to demonstrate knowledge/skills via the MLA, the GMC delivers the MLA route (through the PLAB exams).
Useful primary sources you can cite:
- GMC overview of the MLA: https://www.gmc-uk.org/education/medical-licensing-assessment
- GMC page describing CPSA as one of two MLA components (AKT + CPSA) and that UK medical schools set and deliver CPSA for students: https://www.gmc-uk.org/education/medical-licensing-assessment/uk-medical-schools-guide-to-the-mla/clinical-and-professional-skills-assessment-cpsa
- Medical Schools Council overview (two-part MLA; administered by your medical school for UK students): https://www.medschools.ac.uk/for-students/medical-licensing-assessment/
- GMC MLA content map (explicitly describes “set and delivered by UK medical schools … and the GMC for IMGs”): https://www.gmc-uk.org/cdn/documents/mla-content-map-_pdf-85707770.pdf
What a UKMLA q-bank must train (and what it must not)
A good MLA AKT question bank should train applied knowledge and clinical reasoning in exam-style scenarios — not “textbook recall”.
In fact, official MLA-related materials repeatedly emphasise that the AKT is about applied knowledge rather than pure factual recall, and that questions are designed around the MLA content expectations and exam sampling.
What this means in practice:
A UKMLA qbank should train
- knowledge application (what you do with facts, not just what you can recite)
- breadth across the MLA content map (system-wide, across specialties, presentations, professionalism)
- SBA exam stem skill (recognising the best answer in imperfect clinical scenarios)
- safe decision-making framing (risk, escalation, uncertainty, red flags, governance, professionalism)
A UKMLA qbank should avoid over-indexing on
- obscure factoids with no clinical utility
- dated stems that don’t resemble modern MLA-style vignettes
- “trick” questions that reward memorisation over judgement
- answer keys with minimal explanation (you don’t learn from these)
Primary source to cite for “applied knowledge rather than factual recall” framing:
- Medical Schools Council AKT student handbook 2026 (PDF): https://www.medschools.ac.uk/wp-content/uploads/2025/10/MS-AKT-Student-Handbook-2026.pdf
Adaptive learning: what it means in exam prep (and what to look for)
“Adaptive learning qbank UK” is now common marketing language, but not all adaptive systems are meaningfully adaptive.
In exam prep, adaptive learning typically means a platform uses your performance to improve what you see next and when you see it again, using concepts like:
- spaced repetition (revisiting topics over time so learning persists)
- retrieval practice (forcing recall to strengthen memory and application)
- weakness targeting (prioritising topics where you’re underperforming)
- error clustering (spotting repeated patterns: e.g., ECG interpretation, antibiotic choices, red flags)
- difficulty calibration (moving you towards exam-level complexity as competence increases)
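To make the concepts above concrete, here is a minimal sketch of how a weakness-targeting selector might work. This is an illustration of the general idea, not the algorithm used by any real platform; the class name, topics, and weighting scheme are all assumptions.

```python
import random
from collections import defaultdict

class AdaptiveSelector:
    """Illustrative sketch: pick the next topic by weighting weak areas
    more heavily, while keeping a floor so strong topics still appear.
    Not the implementation of any specific qbank."""

    def __init__(self, topics):
        self.topics = list(topics)
        self.attempts = defaultdict(int)   # questions seen per topic
        self.correct = defaultdict(int)    # questions answered correctly per topic

    def record(self, topic, was_correct):
        self.attempts[topic] += 1
        if was_correct:
            self.correct[topic] += 1

    def accuracy(self, topic):
        seen = self.attempts[topic]
        # Unseen topics get a neutral prior of 0.5 so they are still sampled.
        return self.correct[topic] / seen if seen else 0.5

    def next_topic(self):
        # Weight each topic by (1 - accuracy): weaker topics surface more often,
        # but the 0.1 floor preserves breadth across the whole content map.
        weights = [max(1.0 - self.accuracy(t), 0.1) for t in self.topics]
        return random.choices(self.topics, weights=weights, k=1)[0]
```

The design choice worth noting: weighted sampling (rather than always serving the single weakest topic) mirrors what the checklist later calls mixed practice, because the real exam samples broadly rather than drilling one area.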
What to look for (practical indicators of real adaptivity)
- You can see a weakness dashboard that updates reliably after sessions
- You get topic-level and subtopic-level breakdown (not just “cardiology: 62%”)
- The platform creates a review queue (missed/flagged items resurface intentionally)
- You can generate mixed practice sets (because the real exam is mixed)
- There is a clear model for how “adaptive” selection works (even if simplified)
What to treat cautiously
- “adaptive” that is simply a randomiser
- “adaptive” that only sorts by “right/wrong” but doesn’t support targeted review loops
- platforms that claim adaptivity but don’t show how they handle errors, updates, or curriculum mapping
The reliability checklist for q-banks (the part most people skip)
When you’re choosing the best UKMLA qbank, the main risk isn’t that a bank is too hard or too easy; it’s that it is unreliable, poorly mapped, or weak on explanations.
Use this checklist before you commit.
1) curriculum mapping
- Does it map to the MLA content map (or an explicit UKMLA blueprint)?
- Can you revise by presentation, system, and professional themes (not just organ blocks)?
- Is the mapping visible and auditable, or is it vague marketing?
2) explanation quality + references
- Do explanations teach the why, not just the correct option?
- Are explanations consistent with UK practice and common pathways?
- Are there references or rationale that can be checked?
- (Even a simple “based on X guideline / Y standard practice” is better than nothing.)
3) analytics & personalisation
- Can you track performance by topic and over time?
- Can you filter by new vs incorrect vs flagged questions?
- Can you generate targeted sets from weak areas?
4) question freshness + errata process
- Does the platform publish updates, errata, or change logs?
- Can users report errors easily?
- Is there a visible process for corrections (and how quickly they happen)?
5) mobile UX and offline access
- Is the mobile experience actually usable for daily micro-sessions?
- Can you do offline practice (or at least low-data practice)?
- Are explanations readable on a phone (not tiny walls of text)?
If a qbank is weak on explanations, error correction, and mapping, it tends to produce false confidence.
Shortlist: types of tools (and how they differ)
Most revision stacks now include three categories. Each has a role — but they are not interchangeable.
1) Traditional q-banks (fixed sets)
These usually offer:
- a large fixed question library
- topic filters, mocks, and analytics
- predictable revision pathways
They can be excellent for breadth and repetition, provided explanations and updates are robust.
Examples often used by UK students include platforms such as PassMedicine, Pastest, and Quesmed (always assess them with the checklist above, because products evolve).
2) AI-adaptive q-banks (dynamic selection, error clustering)
These aim to:
- select the next question based on your performance
- cluster repeated errors
- create personalised review loops
They’re potentially strong for efficiency in a 6–10 week sprint, but only if their adaptivity is transparent and the content is high quality.
3) “Clinical knowledge copilots” (how they differ from q-banks)
These tools are not q-banks. They are designed to support:
- fast clinical knowledge lookup
- structured reasoning
- summarisation and decision support workflows
They can be valuable in parallel with qbank practice because they help you:
- understand why an option is correct
- explore differentials and next steps
- connect guidelines-style thinking to exam stems
But they should not replace repeated SBA practice if your goal is AKT performance.
Where iatroX fits (adaptive learning + country-specific exam tagging; “learning continuum” framing)
iatroX is designed to sit between “exam prep” and “clinical reasoning support”.
In the UK exam context, the value is typically:
- adaptive learning workflows that help you target weak areas efficiently
- country-specific exam tagging (so UKMLA/UK-aligned practice doesn’t get mixed with US/CA/AU content)
- a “learning continuum” approach: using exam-style practice to drive knowledge gaps, then using structured support tools to close those gaps quickly
A sensible way to use iatroX alongside a traditional qbank is:
- do timed mixed SBA blocks (qbank)
- review misses/flags
- use iatroX to rapidly clarify the underlying concept and common traps
- return to targeted questions until your performance in that area stabilises
This preserves the exam-specific muscle of SBA practice while improving the quality of review.
How to build a 6–10 week UKMLA plan (weekly cadence, mixed practice, timed blocks, review loops)
Below is a practical template that works whether you’re using a traditional qbank, an AI-adaptive qbank, or a blended approach.
Weeks 1–2: baseline + breadth
- 1–2 timed blocks per day (e.g., 25–50 SBAs)
- mixed practice from day one (avoid “comfort-topic only”)
- build a weakness list (top 10 topics + 5 recurring error types)
- start a review loop:
- incorrects reviewed within 24–48 hours
- flagged questions revisited weekly
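The review loop above can be sketched as a simple scheduler. This is a minimal illustration using the intervals suggested in the plan (incorrects within 24 hours, flags weekly); the function names and item IDs are hypothetical, not from any real platform.

```python
from datetime import datetime, timedelta

# Review intervals taken from the plan above: incorrects resurface within
# 24 hours, flagged questions weekly. Correct answers get no scheduled review.
REVIEW_DELAY = {"incorrect": timedelta(hours=24), "flagged": timedelta(days=7)}

def schedule_review(item_id, outcome, answered_at):
    """Return (item_id, due_time) for items needing review, else None."""
    delay = REVIEW_DELAY.get(outcome)
    return (item_id, answered_at + delay) if delay else None

def due_items(queue, now):
    """Items whose review time has arrived, oldest due first."""
    return sorted((i for i in queue if i[1] <= now), key=lambda i: i[1])
```

Even on paper or in a spreadsheet, the same two rules (incorrects resurface fast, flags resurface weekly) are what turn a pile of missed questions into an intentional review queue.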
Weeks 3–6: intensity + weakness targeting
- increase timed blocks (e.g., 50–100 SBAs/day depending on schedule)
- 2–3 focused sessions/week on your top weak clusters
- 1 full mock weekly (or every 10–14 days) with strict timing
- tighten your review method:
- correct answers: quick skim
- incorrect answers: deep review + “why the other options are wrong”
- repeat errors: write a one-line rule (“if X + Y, think Z”)
Weeks 7–10: exam simulation + consolidation
- shift from “learning mode” to “performance mode”
- more mocks and exam-like mixed sessions
- focus on:
- exam pacing
- reducing avoidable errors
- stamina and decision confidence under time pressure
- keep a short list of persistent weak areas and hammer them repeatedly
The one rule that matters most
Do not let your plan become “endless questions without review”.
Most score gains come from:
- analysing errors,
- changing your decision pattern,
- and then re-testing those same patterns until they stick.
FAQ (written as real queries)
“Best UKMLA qbank: what should I buy?”
Choose based on reliability, not hype. Use the checklist:
- mapping to UKMLA/MLA blueprint
- explanation quality
- analytics and review loops
- freshness and errata process
- mobile usability
If two qbanks are similar, pick the one you will use daily.
“UKMLA AKT practice questions: how many do I need to do?”
There is no magic number, but the pattern is consistent:
- fewer questions + high-quality review often beats
- many questions + shallow review
Aim for consistent timed practice plus ruthless review loops.
“MLA AKT question bank vs textbook: which is better?”
For AKT performance, a qbank is usually the core tool because it trains SBA decision-making and exam pacing. Textbooks can support understanding, but they won’t replicate exam stems.
“Adaptive learning qbank UK: is it worth it?”
It can be — if it meaningfully:
- targets weaknesses,
- schedules review,
- and clusters repeated errors.
If “adaptive” doesn’t change your revision behaviour in a measurable way, it’s just branding.
“What about postgraduate pathways like MRCGP and MRCP(UK)?”
After UK graduation, many doctors sit postgraduate exams aligned to training pathways:
- MRCGP includes components such as the AKT and SCA (set out by the RCGP): https://www.rcgp.org.uk/mrcgp-exams/regulations-examination-structure
- MRCP(UK) is a shared membership exam across the UK Royal Colleges of Physicians: https://www.rcp.ac.uk/events-and-education/education-and-professional-development/exams-and-assessment/
UKMLA is not the same as these exams, but the skills you build (applied reasoning, exam discipline, structured review) transfer well.
