These three platforms are making different promises for MSRA preparation. Understanding those differences is more useful than ranking them on a single scoreboard — because the right tool depends on your revision architecture, not just the number of questions.
MediWord: The Recall Realism Play
MediWord's core thesis is that the closer your practice questions match the actual exam, the better you will score. Its 3,300+ questions are recall-based — modelled on patterns from recent MSRA sittings — with every clinical explanation linked to NICE or GMC guidance.
This approach optimises for recognition. When you sit the real exam, the question stems feel familiar because you have practised similar patterns. For the Clinical Problem Solving (CPS) paper, this means the clinical scenarios mirror the style, length, and trap structures of the actual exam. For the Professional Dilemmas (PD) paper, the SMART framework teaches you to decode ethical scenarios using a consistent methodology aligned to GMC Good Medical Practice.
Best when: You have 4-8 weeks of focused revision, want maximum exam similarity, and are specifically targeting MSRA rather than building broader clinical knowledge.
Trade-off: Recall-based preparation optimises for a specific exam window. The knowledge may not transfer as well to different exam formats or clinical practice because it is anchored in pattern recognition rather than conceptual understanding.
Medset: The Adaptive Efficiency Play
Medset's thesis is that AI should determine what you study. Its adaptive engine analyses your performance and adjusts question selection to target your weakest areas, reducing time spent on topics you already know.
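To make the idea concrete, here is a minimal sketch of weakness-targeted question selection. It is purely illustrative: Medset does not publish its engine, and the per-topic accuracy heuristic and `pick_next_topic` function here are assumptions for demonstration, not the product's actual logic.

```python
import random

def pick_next_topic(history):
    """Pick the topic with the lowest observed accuracy.

    `history` maps topic -> list of True/False answers.
    Illustrative only; a real adaptive engine would weigh
    recency, difficulty, and confidence as well.
    """
    accuracy = {
        topic: sum(answers) / len(answers)
        for topic, answers in history.items()
        if answers
    }
    # Target the weakest area first; break ties at random.
    return min(accuracy, key=lambda t: (accuracy[t], random.random()))

history = {
    "cardiology": [True, True, True, False],  # 75% correct
    "dermatology": [False, True, False],      # 33% correct
    "paediatrics": [True, True],              # 100% correct
}
print(pick_next_topic(history))  # -> "dermatology"
```

The point of the sketch is the trade-off it makes visible: every question served comes from the candidate's weakest measured area, so time is never spent reinforcing topics already at high accuracy.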
Best when: You have limited revision time and need maximum efficiency. The algorithm ensures every question you answer targets a genuine knowledge gap rather than reinforcing what you already know.
Trade-off: An adaptive algorithm is only as good as its question pool and its targeting logic. Weak questions or a crude weakness-detection model undermine the efficiency gains.
iatroX: The Learn-and-Verify Play
iatroX makes a different promise: not just practice, but retained knowledge. Its spaced repetition algorithm ensures that incorrectly answered questions resurface at scientifically optimal intervals — the method with the strongest evidence base for long-term retention.
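For readers curious what interval-based resurfacing looks like in practice, here is a minimal Leitner-style scheduler. The box intervals and the `review` function are illustrative assumptions for demonstration, not iatroX's published algorithm.

```python
from datetime import date, timedelta

# Hypothetical review intervals in days, one per Leitner box.
INTERVALS = [1, 3, 7, 14, 30]

def review(card, correct, today):
    """Move a card between boxes and schedule its next review.

    A wrong answer sends the card back to box 0, so it resurfaces
    the next day; a right answer promotes it to a longer interval.
    """
    if correct:
        card["box"] = min(card["box"] + 1, len(INTERVALS) - 1)
    else:
        card["box"] = 0
    card["due"] = today + timedelta(days=INTERVALS[card["box"]])
    return card

card = {"box": 0, "due": date(2025, 1, 1)}
card = review(card, correct=False, today=date(2025, 1, 1))
print(card["due"])  # 2025-01-02: wrong answers resurface quickly
card = review(card, correct=True, today=date(2025, 1, 2))
print(card["due"])  # 2025-01-05: promoted to the 3-day interval
```

This captures the core promise of spaced repetition: the questions you get wrong come back fastest, while mastered material retreats to progressively longer intervals.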
The distinctive feature is the integration between practice and clarification. When you get a question wrong, Ask iatroX gives you the NICE-grounded explanation in seconds — not a generic teaching note, but the specific guideline recommendation with a citation link. Brainstorm helps you reason through complex clinical scenarios step by step.
Best when: You want to build knowledge that persists beyond the exam — for clinical practice, for future exams, and for the clinical reasoning the MSRA is actually testing.
Trade-off: iatroX is not MSRA-specific in the way MediWord is. It does not claim recall-based questions modelled on recent papers. Its strength is the learning science, not the exam pattern matching.
The Decision Framework
If your priority is exam-day recognition: MediWord. The recall-based questions are designed to feel like the real thing.
If your priority is revision efficiency: Medset. The adaptive engine minimises wasted study time.
If your priority is retained knowledge with guideline grounding: iatroX. The spaced repetition and citation-first clarification build understanding, not just recall.
The strongest candidates use more than one: MediWord or Medset as the primary exam-specific bank, plus iatroX as the adaptive, guideline-grounded complement that ensures the knowledge sticks.
Conclusion
MediWord, Medset, and iatroX represent three different theories of how to prepare for the MSRA: exam realism, adaptive efficiency, and evidence-based retention. All three have merit. The best revision architecture combines exam-specific practice with adaptive weakness targeting and guideline-grounded understanding — which is why the combination outperforms any single tool.
