How iatroX's Adaptive Engine Works: The Algorithm Behind Your Quiz Sessions

Every medical Q-bank claims to be "adaptive" or "AI-powered." Most are not. A platform that lets you filter by topic and difficulty is not adaptive — it is filterable. A platform that randomises question order is not adaptive — it is random. Adaptive learning means the platform analyses your performance and automatically selects the next question to target your specific weakest area — without your input, on every question, in real time.

This is how iatroX actually works.

Performance Tracking

Every question you answer generates a data point. The engine tracks per-topic accuracy (your percentage correct across each clinical domain, updated after every question), difficulty level performance (how you perform on easy, medium, and hard questions within each topic — separating "strong topic, easy questions" from "strong topic, hard questions"), recency of exposure (when you last practised each topic — used to compute spacing intervals), and response time (how quickly you answered — distinguishing confident correct answers from uncertain correct guesses).

These data points are not stored as a static profile. They are recomputed in real time after every question — meaning the engine's model of your knowledge updates continuously, not at the end of each session.
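
The profile the engine maintains can be pictured as a small data structure. The sketch below is illustrative only (the classes and field names are assumptions, not iatroX's actual schema), but it shows the shape of a per-question data point and how per-topic stats can be recomputed after every answer rather than at the end of a session.

```python
from dataclasses import dataclass
from collections import defaultdict
from datetime import datetime

@dataclass
class Attempt:
    """One answered question: the per-question data point described above."""
    topic: str             # clinical domain, e.g. "endocrinology"
    difficulty: str        # "easy" | "medium" | "hard"
    correct: bool
    response_time_s: float # distinguishes confident answers from slow guesses
    answered_at: datetime  # feeds recency-of-exposure / spacing calculations

class PerformanceProfile:
    """Stats are derived from the raw attempt log on demand, so the
    engine's model of your knowledge is always current, never batched."""
    def __init__(self):
        self.attempts = defaultdict(list)  # topic -> [Attempt]

    def record(self, a: Attempt):
        self.attempts[a.topic].append(a)

    def accuracy(self, topic, difficulty=None):
        pool = [a for a in self.attempts[topic]
                if difficulty is None or a.difficulty == difficulty]
        return sum(a.correct for a in pool) / len(pool) if pool else None

    def last_practised(self, topic):
        return max((a.answered_at for a in self.attempts[topic]), default=None)
```

Note how `accuracy(topic, "hard")` separates "strong topic, easy questions" from "strong topic, hard questions", exactly the distinction drawn above.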

Adaptive Question Selection

When the engine selects the next question, it evaluates all available questions against your current performance profile and selects the question that maximises learning value.

The selection algorithm considers topic weakness (questions from your weakest clinical domains are prioritised — the topic where your accuracy is lowest gets more questions), difficulty calibration (questions are served at the difficulty level that matches your current competence in that topic: not so easy that you waste time on mastered content, not so hard that frustration replaces learning), coverage gaps (topics you have not practised recently are boosted — ensuring breadth of coverage even as the engine targets specific weaknesses), and question novelty (questions you have not previously seen are preferred over repeats — though previously answered questions re-enter the pool via spaced repetition).

The result: every question you see is the question that, based on your current performance data, will produce the most learning improvement. This is fundamentally different from random selection (which wastes time on already-strong topics) and manual topic selection (which is subject to your bias toward comfortable topics).
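
The four criteria above can be combined into a single "learning value" score, with the next question being the highest scorer. The Python sketch below is a hedged illustration, not iatroX's actual algorithm: the weights, the difficulty bands, and the 14-day coverage cap are all invented for the example.

```python
from datetime import date

# Illustrative selection sketch; all constants are assumptions.
DIFF_LEVEL = {"easy": 0, "medium": 1, "hard": 2}

def learning_value(question, profile, today):
    topic = question["topic"]
    acc = profile["accuracy"].get(topic, 0.0)   # per-topic accuracy, 0..1
    weakness = 1.0 - acc                        # weakest topics score highest
    # Difficulty calibration: target the band matching current competence,
    # penalising questions far from it.
    target = min(2, int(acc * 3))
    calibration = 1.0 - 0.5 * abs(DIFF_LEVEL[question["difficulty"]] - target)
    # Coverage gap: boost topics not practised recently (capped at 14 days).
    last = profile["last_seen"].get(topic)
    days_idle = (today - last).days if last else 14
    coverage = min(days_idle, 14) / 14
    # Novelty: prefer questions you have not seen before.
    novelty = 0.0 if question["id"] in profile["seen_ids"] else 0.5
    return 2.0 * weakness + calibration + coverage + novelty

def next_question(pool, profile, today):
    return max(pool, key=lambda q: learning_value(q, profile, today))
```

With a 90%-accuracy cardiology profile and a 30%-accuracy endocrinology profile, `next_question` picks the endocrinology item: weakness and coverage dominate, which is the behaviour the article describes.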

Spaced Repetition Scheduling

Topics you have studied are scheduled for review at intervals based on the spacing effect literature. The core principle: reviewing information at increasing intervals produces better long-term retention than reviewing at fixed intervals or cramming.

The engine computes a review interval for each topic based on your accuracy history. A topic where your accuracy is improving rapidly receives a longer interval (you are learning it well — it does not need immediate review). A topic where your accuracy is stagnant or declining receives a shorter interval (the knowledge is not consolidating — it needs reinforcement sooner).

The intervals are informed by the Cepeda et al. (2008) meta-analysis — the most comprehensive synthesis of spacing effect research, covering over 250 studies. The practical outcome: topics initially appear at short intervals (1-3 days) and extend to longer intervals (1-4 weeks) as your accuracy improves. If accuracy drops at a review point, the interval contracts. If accuracy holds, the interval extends.
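
A minimal expand/contract scheduler in this spirit might look like the following. The doubling and halving multipliers and the 70% accuracy threshold are assumptions for illustration; the article only specifies the 1-3 day to 1-4 week range, which the bounds below approximate.

```python
# Hedged sketch of interval expansion/contraction; the multipliers,
# threshold, and bounds are illustrative, not iatroX's actual parameters.
MIN_DAYS, MAX_DAYS = 1, 28  # roughly the 1-3 day to 1-4 week range

def next_interval(current_days, accuracy_at_review, threshold=0.7):
    if accuracy_at_review >= threshold:
        nxt = current_days * 2   # knowledge is consolidating: extend
    else:
        nxt = current_days / 2   # accuracy dropped: reinforce sooner
    return max(MIN_DAYS, min(MAX_DAYS, round(nxt)))
```

So a topic reviewed at 3 days with good accuracy moves to 6 days, then 12, then capped at 28; a failed review at 14 days contracts to 7.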

Study Planner Integration

The adaptive engine feeds into the study planner through a priority scoring formula that determines which topics appear in tomorrow's daily task list:

Priority = weight × (1 - mastery) + decayBonus - recencyPenalty + phaseBias

Where:

- weight reflects the topic's importance in the exam blueprint (high-yield topics get higher priority)
- mastery is your current proficiency in that topic (low mastery = high priority)
- decayBonus increases priority for topics whose last review is approaching their spacing-interval deadline (preventing knowledge decay)
- recencyPenalty decreases priority for topics you studied very recently (avoiding unnecessary repetition)
- phaseBias adjusts priority for the current preparation phase (foundation biases toward coverage breadth, application toward weak-area depth, performance toward mixed-topic practice)

The formula is evaluated for every topic in the exam curriculum, producing a ranked priority list. The study planner selects the highest-priority topics for tomorrow's tasks — constrained by your available study time.
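
The formula translates almost directly into code. In this sketch the five term values are assumed to be precomputed per topic; `plan_tomorrow` and its inputs are illustrative names, not iatroX's API.

```python
# Direct transcription of: Priority = weight x (1 - mastery)
#                                     + decayBonus - recencyPenalty + phaseBias
def priority(weight, mastery, decay_bonus, recency_penalty, phase_bias):
    return weight * (1 - mastery) + decay_bonus - recency_penalty + phase_bias

def plan_tomorrow(topics, n_slots):
    """Rank every curriculum topic by priority; take the top n for tomorrow,
    where n_slots stands in for the available-study-time constraint."""
    ranked = sorted(topics, key=lambda t: priority(**t["terms"]), reverse=True)
    return [t["name"] for t in ranked[:n_slots]]
```

A high-weight, low-mastery topic outranks a well-mastered one even if the latter has a small decay bonus, which is the intended behaviour: the planner spends tomorrow's slots where the expected gain is largest.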

Topic Normalisation

Different question banks use different tagging systems. A cardiology question might be tagged "cardiovascular" in one bank, "cardiology - heart failure" in another, and "medicine > cardiac > congestive cardiac failure" in a third. For the adaptive engine to track performance across banks, these tags must map to a unified curriculum view.

iatroX uses semantic embedding clustering (all-MiniLM-L6-v2) to normalise topic tags across all question banks. Each tag is converted to a semantic vector — a high-dimensional numerical representation of the tag's meaning — and similar vectors are clustered into unified topics. A question tagged "CCF" in one bank and one tagged "heart failure, congestive" in another therefore map to the same underlying topic, and your performance is tracked consistently across both.
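
The clustering step can be sketched as follows. A production system would obtain embeddings from all-MiniLM-L6-v2 (for example via the sentence-transformers library); here a toy character-count `embed()` stands in so the example runs standalone, and the greedy clustering with a fixed similarity threshold is an illustrative simplification of whatever iatroX actually uses.

```python
import math

def embed(tag):
    # Toy stand-in for a real sentence embedding: a bag-of-letters vector.
    # Real model embeddings capture meaning, so "CCF" and
    # "heart failure, congestive" would land close together.
    vec = [0.0] * 26
    for ch in tag.lower():
        if ch.isalpha():
            vec[ord(ch) - 97] += 1
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def cluster_tags(tags, threshold=0.8):
    """Greedy clustering: each tag joins the first cluster whose centroid
    (the first member's embedding, kept fixed for simplicity) is similar
    enough; otherwise it starts a new unified topic."""
    clusters = []  # list of (centroid, [member tags])
    for tag in tags:
        v = embed(tag)
        for centroid, members in clusters:
            if cosine(v, centroid) >= threshold:
                members.append(tag)
                break
        else:
            clusters.append((v, [tag]))
    return [members for _, members in clusters]
```

Each resulting cluster becomes one node in the unified curriculum view, and every bank's tag is replaced by its cluster before performance tracking begins.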

This normalisation is what allows the engine to build a single, coherent model of your clinical knowledge across all the exams you practise — rather than treating each exam's Q-bank as an isolated silo. If you practise cardiology questions in the MRCGP AKT bank and then switch to the MRCP bank, your cardiology proficiency carries across. The engine knows you are strong in cardiology regardless of which bank the question came from.

The practical benefit: IMGs who prepare for PLAB 1 on iatroX and then transition to MRCP or MRCGP preparation carry their entire performance history forward. Topics mastered during PLAB preparation do not need to be relearned from scratch for the next exam — the engine already knows your proficiency and adjusts accordingly. This continuity across exams is only possible because of topic normalisation — without it, each exam would be a fresh start with no performance history. No other medical Q-bank platform provides this cross-exam continuity — because no other platform has solved the topic normalisation problem at this level.

What This Means in Practice

You open iatroX. You start a quiz session. The first question is on thyroid disease — because your endocrine accuracy is the lowest of all your clinical domains. You get it right. The second question is on musculoskeletal — because your MSK accuracy is second-lowest and you have not practised MSK in 8 days. You get it wrong. The third question is another MSK question — at a slightly easier difficulty — because the engine has now identified MSK as a specific weakness requiring immediate attention.

Over 20 questions, you practise 4-5 different topics — weighted toward your weakest areas, calibrated to your current difficulty level, and scheduled to resurface at optimal spacing intervals. Every session is maximally efficient — not because you chose the right topics (you did not choose), but because the engine chose them for you based on your data.

Compare this to a non-adaptive session on a traditional Q-bank. You select "random" mode. The first question is on cardiology — your strongest topic. You get it right in 20 seconds. The second question is also cardiology. You get it right. The third is respiratory — also strong. Ten questions in, you have practised three strong topics and feel productive. But you have not touched endocrinology (your weakest domain) or musculoskeletal (your second-weakest). The session felt efficient. It was not.

Common Misconceptions About Adaptive Learning

"Adaptive just means harder questions." No. Adaptive means the right questions — which may be easier, harder, or the same difficulty depending on your current performance in that topic. If your endocrinology accuracy is 30%, the engine does not serve you a hard endocrinology question (which would produce frustration). It serves a medium-difficulty endocrinology question — challenging enough to produce learning, not so hard that it produces helplessness.

"I should still choose my own topics sometimes." You can — iatroX supports topic-filtered practice alongside adaptive mode. But the adaptive engine is better at identifying your weaknesses than you are. Human self-assessment of knowledge gaps is systematically biased: the less competent we are in a domain, the more we tend to overestimate our grasp of it (the Dunning-Kruger effect). The engine has no such bias — it measures accuracy directly.

"Adaptive learning means I'll never see easy questions." Incorrect. The engine serves questions across all difficulty levels — including easy questions in strong topics for spaced repetition maintenance. Strong topics still appear in your sessions, but at lower frequency and higher difficulty than weak topics. This prevents knowledge decay in strong areas while concentrating effort on weak areas.

"The engine will run out of questions in my weak areas." The engine manages question recycling through spacing intervals. Previously-answered questions re-enter the available pool after a computed interval — so even a finite question bank produces a continuous stream of appropriately-timed review questions. As the bank grows, question novelty increases — but the adaptive value comes from selection intelligence, not bank size.

Experience adaptive learning on iatroX at iatrox.com/quiz-landing.