Introduction
The UK’s healthcare system relies heavily on international medical graduates (IMGs), who now comprise a substantial portion of the medical workforce. In fact, over half of new GP trainees in 2023 were IMGs, up from 34% in 2019 (NHS Digital). Similarly, in 2021 nearly 13,000 foreign‐trained doctors were registered—64% of new UK registrants that year (GMC). Yet despite their crucial role, IMGs often underperform in high‐stakes clinical assessments when compared to UK‐trained graduates. Studies have documented large gaps in exam success rates. For example, non‑white IMGs have been up to 15 times more likely to fail the General Practice licensing exam on their first attempt than white UK graduates (BMJ). Even UK‑trained Black and minority ethnic (BME) candidates face about 3.5 times higher failure risk than white peers in the same exam (BMJ). These stark disparities raise pressing questions: Do these outcomes reflect bias in the exams, or do they stem from differences in training and preparedness—such as familiarity with NHS clinical guidelines and cultural communication norms?
This debate came to the forefront in 2013–2014, when the British Association of Physicians of Indian Origin (BAPIO) brought a judicial review against the Royal College of General Practitioners (RCGP) and GMC over the observed gap in pass rates. The High Court ultimately upheld the exam’s legality and found no overt discrimination, but the judge warned that if the College did not act to reduce the disparity, it could face future legal challenges (The Lancet). Since then, regulators, royal colleges, and researchers have been examining the causes of differential attainment (as these performance gaps are known) and seeking solutions. Recent changes – including the introduction of a new Simulated Consultation Assessment (SCA) for GP trainees and the upcoming UK Medical Licensing Assessment (UKMLA) for all graduates – aim to ensure a fair, consistent standard. At the same time, there is growing recognition of the need to better support IMGs through tailored training, mentorship, and innovative tools. In this article, we delve into the data on IMG versus UK graduate exam performance (covering exams like the CSA/SCA, PLAB, and UKMLA), investigate potential biases and the role of guideline interpretation, and discuss expert insights on how to bridge the gap. The goal is to rethink how assessments are designed and how candidates are prepared – ultimately ensuring that all clinicians, regardless of background, can demonstrate their true competence in providing safe patient care.
Background on UK clinical exams
UK clinical licensing and specialty exams are designed to ensure that all doctors meet a consistent standard of knowledge and skills. For UK‑trained medical students, the journey traditionally culminates in university finals and then postgraduate exams for those entering specialties. For international graduates, entry into UK practice typically requires passing the Professional and Linguistic Assessments Board (PLAB) test. The PLAB, administered by the GMC, has two parts (a written Applied Knowledge Test and an Objective Structured Clinical Examination) and historically aimed to assess whether IMGs have knowledge and skills equivalent to those of a UK doctor who has completed the first year of training (GMC).
However, evidence emerged that PLAB’s standard was set too low – one analysis published in The Lancet in 2014 found that the pass mark for PLAB Part 1 would need to rise by approximately 13%, and that for Part 2 by around 20%, to put IMGs on an equal footing with UK graduates in later exams (The Lancet). This finding, coupled with data showing many PLAB passers subsequently struggled in postgraduate exams, prompted a re-evaluation of the licensing process.
Starting in 2024, the UK Medical Licensing Assessment (UKMLA) is being introduced as a single, unified exam for all new doctors—IMGs and UK graduates (GMC; NHS Digital). The UKMLA consists of an applied knowledge test (AKT) and a clinical and professional skills assessment (CPSA), mirroring the structure of PLAB but with important differences. The UKMLA’s content places greater emphasis on ethics, communication, professionalism, and UK‑specific guidelines and protocols than the old PLAB did (The Lancet). The goal is to ensure a consistent standard of competency for all doctors entering practice in the UK, and to familiarise every candidate with core NHS guidelines and expectations from the outset.
For those pursuing specialty training (residency), each Royal College has its own membership exams. In general practice (family medicine), trainees must pass the Membership of the RCGP (MRCGP) examinations, which include:
- an AKT (multiple‑choice applied knowledge test),
- a workplace‑based assessment portfolio, and
- a clinical skills exam.
Until 2020, the clinical skills component was the Clinical Skills Assessment (CSA) – a high‑fidelity OSCE where candidates consulted with simulated patients. The CSA was conducted in English with role‑players and examiners observing in person. It tested not only clinical reasoning but also communication and adherence to best‑practice guidelines. Notably, this exam became the focus of the IMG performance debate. During the COVID‑19 pandemic, the CSA was temporarily replaced by the Recorded Consultation Assessment (RCA), which used recordings of real consultations. Most recently, in late 2023, the RCGP launched the Simulated Consultation Assessment (SCA), a new exam format that, like the CSA, uses simulated cases but is delivered remotely, with candidates consulting by video rather than travelling to a single exam centre. The SCA was developed with extensive stakeholder input to address previous concerns about fairness and realism (RCGP). Early feedback from trainees has been positive – interestingly, IMGs reported more positive views of the new SCA format than UK graduates did, perhaps reflecting hopes that it will better suit diverse candidates.
Other postgraduate exams have shown similar attainment gaps. For instance, in hospital specialties, the Membership of the Royal College of Physicians (MRCP(UK)) examinations have likewise shown differing pass rates between UK graduates and IMGs. These patterns underscore that the issue is not isolated to one college or exam, but is an overarching challenge in UK medical training. With that context in mind, we turn to the question of bias and the factors underlying these performance disparities.
Understanding the bias in exam performance
Whenever one group of doctors consistently underperforms another in assessments, it is crucial to ask whether the exams themselves are fair. A range of investigations have probed whether bias—conscious or unconscious—in exam design or scoring could be contributing to IMG failure rates. The high‑profile MRCGP case in 2013–2014 exemplified these concerns. At that time, data showed that only 36% of international candidates (and 65% of UK‑trained BME candidates) passed the CSA on the first attempt, compared to 93% of white UK graduates (BMJ). BAPIO argued that the exam format might be inherently biased against non‑UK candidates (for example, through subjective scoring or cultural barriers), amounting to indirect discrimination. Although the judicial review did not find the RCGP guilty of racism, it shone a spotlight on “differential attainment” and put pressure on the College to investigate the causes (The Lancet).
Research evidence on exam bias has been somewhat reassuring on the surface. A 2013 analysis published in the British Journal of General Practice examined CSA scoring in detail to see whether examiners were showing preferential treatment based on gender, ethnicity, or graduate origin. It concluded that “examiners show no general tendency to favour their own kind,” and that differences in scores were mainly related to candidate characteristics (such as skill and knowledge) rather than examiner demographics (BMJ). In other words, there was no clear pattern of examiners overtly giving lower marks to IMGs or BME candidates. Additionally, the persistent gap in performance on the written MRCGP AKT—a machine‑marked test—mirrors the gap seen in the clinical exams (GMC; NHS Digital). This suggests that the issue is not simply one of subjective bias in face‑to‑face exams; if it were, IMGs would be expected to perform comparably on a machine‑marked test. The fact that IMGs also score lower, on average, in the AKT indicates that other factors (such as knowledge base or exam technique) are at play.
However, the absence of overt examiner bias does not mean the playing field is truly level. Subtle, systemic biases can creep in. The independent review by Professor Aneez Esmail (commissioned by the GMC) famously reported “no evidence of discrimination” in the CSA process, yet a parallel BMJ paper cautioned, “we cannot exclude subjective bias… in the marking of the clinical skills assessment as a reason for these differential outcomes” (BMJ). Unmeasured factors—such as variations in how role‑play patients interact with an IMG candidate or how cultural differences in communication are perceived by examiners—could influence scores. Moreover, if UK‑trained BME graduates (who have gone through the same medical school system as their white peers) also exhibit lower pass rates, then factors like racial bias or stereotype threat may be contributory. As Dr Ramesh Mehta, president of BAPIO, stated, “it is damning that in this day and age the sole reason that UK educated and trained doctors are four times as likely to fail this exam seems to be… their skin colour” (The Lancet).
Beyond the exams themselves, bias may also manifest in the broader training environment. Qualitative studies have found that IMGs and minority UK graduates often perceive additional hurdles in their training pathways. They report having less supportive relationships with senior colleagues or mentors, feeling socially isolated, or experiencing subtle discrimination at work (BMJ). Such factors can impact confidence and access to informal exam preparation guidance. In national focus group studies, trainees felt that while formal exams were rigorous and standardised, aspects such as workplace‑based assessments and selection interviews were “vulnerable to bias,” potentially disadvantaging those from IMG or BME backgrounds. All these insights suggest that performance disparities arise from a complex mix of factors. While overt bias in the assessment process may be minimal, the cumulative effects of cultural differences, language issues, educational background, and systemic factors can disadvantage IMGs. Recognising this complexity is the first step to addressing it.
The role of guideline interpretation
One frequently overlooked factor in differential exam performance is the candidate’s familiarity with UK clinical guidelines and expected practices. UK medical training places heavy emphasis on national guidelines (such as NICE guidelines, NHS protocols, and British National Formulary standards) to ensure evidence‑based, uniform care. Many exam stations and questions are designed around the premise that the candidate will manage a case in line with these guidelines. For a doctor trained overseas, this can pose a challenge—not because of clinical incompetence, but due to a gap in contextual knowledge. IMGs may have been taught different approaches or may not have experienced a single national guideline system like NICE in their country of origin.
Research confirms that this is a significant issue. A cognitive interview study of GP trainees taking the AKT found that IMG candidates often struggled with questions tied to UK‑specific recommendations (BMJ). As one participant explained, “you need to be working with the NICE guidelines. … I have not been introduced to it [during training abroad]. So it is a new thing I have been picking up after my graduation while working in the UK.” Another IMG noted that in his home country, national guidance did not apply at the point of care—“back where I trained, I don’t think national guidelines apply… over here everything is very rigid. So you couldn’t get to that NICE guideline of what is expected of you” (The Lancet). These testimonies illustrate the steep learning curve that IMGs face. They might be excellent clinicians, but if they answer an exam question based on practices from their country of origin, they could be marked incorrect in the UK context.
This unfamiliarity can affect both knowledge exams and clinical scenarios. An IMG might not know the UK’s recommended screening intervals for cervical cancer, the first‑line medication for a condition per NICE, or the specific risk assessment tools used in NHS practice—all details that are commonly tested in exams like the AKT. Even nuances in drug nomenclature (brand versus generic names) or legal frameworks (such as sections of the Mental Health Act) can trip up a competent doctor who is not yet fully acclimatised to UK practice. In the CSA/SCA, guideline interpretation is crucial; an IMG who manages a case safely but not according to the preferred UK guideline might lose marks.
Crucially, guideline knowledge is teachable. Once IMGs are aware of this gap, many work hard to close it, though it requires time and exposure. Specific training on UK guidelines during GP training has been shown to help, yet some IMGs still feel they need more time to fully adapt (GMC). Even UK graduates occasionally lack detailed knowledge of less‑common guidelines, but they are generally more adept at knowing where to find and how to apply NICE guidance. For IMGs, the adjustment is often more abrupt. This points to an important area for intervention—ensuring that IMGs receive comprehensive orientation to UK guidelines early in their practice, for example through the NHS‑specific content included in the UKMLA (NHS Digital).
Analysis of SCA/CSA results
The membership exam for GPs has been under particular scrutiny for differential attainment. It is instructive to examine the outcomes of the new Simulated Consultation Assessment (SCA) compared to the old CSA, as the RCGP has recently published initial performance data. The SCA was introduced, in part, to address fairness and update the exam format. But did it make a difference?
The data show consistently higher pass rates for UK graduates than for IMGs across the written knowledge test (AKT) and the clinical assessments (the CSA, the pandemic‑era RCA, and the new SCA). Notably, the SCA has slightly improved IMG outcomes compared with the CSA, but a large gap remains (RCGP).
Historical data reveal that during 2010–2019 the CSA first‑attempt pass rate was around 90.8% for UK graduates versus only 43.0% for IMGs—less than half of international candidates passed (BMJ). This significant gap prompted intense debate and legal challenges. In late 2023, the inaugural SCA diets were held. According to RCGP reports, in the SCA’s first several sittings (from November 2023 to June 2024) 94.3% of UK graduates passed on their first try, compared to 51.5% of IMGs (RCGP). Although this represents a modest improvement for IMGs (an approximate 8.5 percentage point increase over the CSA era), the relative gap remains substantial, with UK graduates nearly twice as likely to pass as their IMG counterparts.
Further breakdowns reveal that both UK and IMG candidates saw higher pass rates in the SCA than in the CSA (UK +3.5 percentage points, IMG +8.5 percentage points), possibly due to enhancements in exam delivery or the filtering out of weaker candidates over time. Moreover, the ethnicity gap among UK graduates narrowed in the SCA: white candidates passed at 96.7% and BAME candidates at 90.8%, a difference of 5.9 percentage points, though these findings are preliminary. Gender differences in the SCA mirrored those observed in the CSA, with female candidates outperforming male candidates (82% vs 66%), and notably, male IMGs recorded some of the lowest pass rates overall (RCGP).
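As a quick sanity check on the figures quoted above, the short Python sketch below recomputes the percentage‑point changes and the relative first‑attempt pass likelihood directly from the published rates. The pass rates are those reported in the cited BMJ and RCGP sources; the script itself is purely illustrative:

```python
# First-attempt pass rates (%) quoted above: CSA era 2010-2019 (BMJ)
# and the first SCA sittings, Nov 2023 - Jun 2024 (RCGP).
csa = {"UK": 90.8, "IMG": 43.0}
sca = {"UK": 94.3, "IMG": 51.5}

# Percentage-point change for each group between the CSA and SCA eras.
for group in ("UK", "IMG"):
    change = sca[group] - csa[group]
    print(f"{group}: CSA {csa[group]}% -> SCA {sca[group]}% ({change:+.1f} points)")

# Relative likelihood of passing the SCA at the first attempt.
ratio = sca["UK"] / sca["IMG"]
print(f"UK graduates were {ratio:.2f}x as likely as IMGs to pass first time")  # ~1.83x

# Ethnicity gap among UK graduates in the SCA (white vs BAME pass rates).
white, bame = 96.7, 90.8
print(f"White-BAME gap: {white - bame:.1f} percentage points")  # 5.9
```

Running this reproduces the 3.5 and 8.5 percentage‑point improvements, the “nearly twice as likely” ratio (about 1.83x), and the 5.9‑point ethnicity gap cited above.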
While the improved performance in the SCA is promising, it does not indicate that the problem is solved. The RCGP itself acknowledges that although early SCA data are positive, “there is still much work to do” to reduce differential attainment (RCGP). Plans for a full Fairness Review of the SCA and initiatives such as recruiting a more diverse examiner pool and standardising examiner training are ongoing measures to address these issues.
Expert perspectives and research insights
Over the past decade, research has increasingly focused on the reasons behind these disparities and how to address them. Expert opinion converges on the view that multiple, interacting factors—not one single cause—are responsible for differential attainment. For instance, a qualitative study by Professor Katherine Woolf and colleagues highlighted that IMGs (and UK-born minority trainees) often felt less integrated and lacked the informal mentorship essential for exam preparation (BMJ).
Further analyses have shown that the attainment gap is evident across multiple stages—from recruitment scores to in-training assessments and final exams (The Lancet). This consistency suggests that the issue is not isolated to any one exam but is reflective of the broader challenges faced by IMGs in the UK system. Language and communication skills are frequently cited as critical factors; even when IMGs have passed formal English language tests, subtle differences in communication styles—such as consultation methods, idioms, and patient expectations—can adversely affect their performance. As a result, some Royal Colleges have introduced targeted communication workshops specifically for IMGs.
On the regulatory front, both the GMC and other medical education bodies have acknowledged the issue of “differential attainment” and are actively working to address it. The GMC’s annual training reports now track exam outcomes by ethnicity and IMG status, and it has funded research (including work by Professor Esmail) to explore the underlying causes (GMC). The British Medical Association (BMA) has also urged the RCGP to review the CSA to ensure fairness for all candidates. These initiatives highlight a shift from a defensive stance to a proactive approach, aiming to improve both exam fairness and the overall training environment.
Pathways to bridging the gap
Addressing exam performance disparities requires a multi-pronged approach. Structural changes are already underway. The introduction of the UKMLA is designed to ensure that all new doctors—whether UK‐trained or IMG—start on a level playing field with a common foundation of knowledge, including UK guidelines and ethics. By setting a uniform benchmark from the outset, the hope is that later discrepancies will diminish.
Within training programmes, early support and inclusion are paramount. Many deaneries and NHS trusts have established induction courses for IMGs that cover NHS practices, local protocols, and communication skills. Mentorship schemes pairing IMGs with experienced clinicians are increasingly common. For instance, some regions run mock clinical exam workshops specifically for IMGs (formerly for the CSA, now the SCA), recognising that tailored practice can lead to improved outcomes. The RCGP’s 2023 Fairness Review recommended “early, clear, individually tailored advice and guidance” for IMGs to help prepare them for both the AKT and SCA (RCGP).
Another key strategy is addressing the more intangible aspects of performance—namely confidence, culture, and communication. Constructive feedback and culturally sensitive mentoring are essential to help IMGs adapt to the UK system. Additionally, regular monitoring and transparent reporting of exam data by all examining bodies can help identify and rectify areas of disparity.
Technology and innovation also hold promise. AI-driven learning platforms such as iatroX offer immediate, evidence‑based clinical guidance by integrating trusted UK sources like NICE, BNF, and NICE‑CKS. By reducing cognitive overload and offering tailored quiz and brainstorming modes, such platforms help IMGs familiarise themselves with UK standards even before they step into the exam room. These digital tools not only support exam preparation but also promote continuous professional development, ensuring that clinicians remain updated on best practices.
Finally, fostering a supportive community is vital. Organisations like BAPIO and initiatives by the GMC are working to build robust mentoring and peer support networks for IMGs. Recognising and celebrating the diverse strengths that IMGs bring to the NHS can help shift the narrative from deficit to asset.
Conclusion
The persistent gap in exam performance between international and UK‑trained medical graduates is a complex challenge that demands a comprehensive response. While overt examiner bias may not be the sole cause, IMGs undeniably face hurdles such as adapting to UK guidelines, overcoming cultural and linguistic differences, and bridging gaps in contextual knowledge. Reforms such as the UKMLA, enhanced induction programmes, and targeted support measures are essential steps toward equity.
Ultimately, rethinking exam disparities is about more than just ensuring fairness on test day—it is about creating an environment where every doctor, regardless of origin, is empowered to succeed. By combining structural reforms with innovative digital tools like iatroX and by fostering a more supportive training culture, the NHS can work towards a system that truly reflects its ideals: inclusive, meritocratic, and dedicated to delivering exceptional patient care.
The journey toward parity will require continuous effort, rigorous monitoring, and an unwavering commitment to addressing systemic issues. With the integration of AI-driven learning and a renewed focus on comprehensive support, the future can hold a more equitable assessment process where every clinician’s competence is recognised and nurtured.