Clinical learning has never really been the problem. Most clinicians learn every day. They learn when they check a guideline before prescribing, when they sense-check a referral threshold, when they look up a dose, when they draft safer patient advice, and when they double-check evidence on a question they thought they already knew. The problem is that this learning is often fragmented, undocumented, and then reconstructed later under appraisal pressure.
That is why a meaningful shift is now underway. CPD is beginning to split into two models. The first is the traditional model: deliberate study, deliberate reflection, deliberate logging. The second is newer and potentially more important: workflow-derived learning, where everyday clinical questions, knowledge retrieval, and documentation tasks become captured evidence of development. Publicly, several tools are already moving in exactly that direction, including Umbil’s query-to-reflection logs, Praktiki’s automatic CPD tracking and FourteenFish sync, Heidi Evidence’s “CPD tracking, built into practice”, and ClinicalKey AI’s CME/MOC features integrated into the clinical workflow. (Umbil, Praktiki, Heidi Evidence, ClinicalKey AI)
This matters because the old split between “real work” and “professional development” increasingly looks artificial. The more interesting question now is not whether AI can generate learning logs. It clearly can. The real question is whether AI can convert real clinical work into meaningful CPD in a way that is useful, proportionate, and genuinely aligned with how UK appraisal and revalidation actually work.
The old model of CPD: separate learning, separate reflection, separate admin
The classic CPD model is familiar to every UK clinician. You attend a webinar. You read a guideline update. You complete a module. Then at some later point you try to remember what you learned, document why it mattered, assign credits or time, and work out how to present it coherently at appraisal.
That model is not wrong. Deliberate study still matters. But it is structurally inefficient for busy clinicians because it treats learning as something that happens in a separate room from practice. In reality, much of the most valuable learning occurs precisely when work is live, contextual, and memorable.
UK appraisal guidance already points in this direction. The GMC’s revalidation guidance says doctors must collect and reflect on supporting information, including CPD, rather than merely amass documentation, and the Academy of Medical Royal Colleges has been explicit that the quality of CPD matters more than simple quantity or credit-counting. In fact, the AoMRC’s updated core principles state that collecting CPD credits is, in itself, no longer expected for appraisal, and that doctors should plan and evaluate CPD continuously as needs arise in practice. (GMC supporting information for revalidation, GMC CPD guidance, AoMRC core principles for CPD 2023 update)
That is a crucial point. UK clinicians do not merely need “more CPD content”. They need easier ways to identify, capture, and reflect on the learning that is already happening in the course of good clinical work.
The new model: learning as a by-product of care
This is where the category becomes genuinely interesting.
A new CPD stack is emerging in which learning can be generated or captured from:
- clinical question-asking
- guideline lookup
- evidence checking
- note or letter drafting
- patient communication support
- follow-up and reflection on real cases
In other words, CPD is starting to become less of a separate event and more of a layer sitting on top of real work.
That can be highly valuable. If a clinician checks a prescribing question, verifies a guideline position, drafts a patient-facing explanation, and then with one click turns that moment into a reflective note or exportable log, the friction between work and learning falls dramatically. That is not merely convenient. It changes the economic logic of CPD. It makes professional maintenance feel less like an additional task and more like a natural extension of practice.
But this only works if the captured learning is real. If the system simply converts any click into bland pseudo-reflection, then the result is not better CPD. It is just more polished admin.
Why this matters in UK practice specifically
The UK context matters because appraisal and revalidation are not built around endless evidence hoarding. They are built around proportionate supporting information, reflection, and demonstration of ongoing professional development across real practice. The GMC explicitly says it is not enough simply to collect information; reflection is central. The Medical Appraisal Guide likewise stresses that the purpose of supporting information is to facilitate self-review and useful appraisal discussion, not to create exhaustive portfolios for their own sake. (GMC guidance, Medical Appraisal Guide 2022)
That makes the UK especially fertile ground for this category. Clinicians are not necessarily asking for more educational content. They are asking for less friction in proving and reflecting on the learning they are already doing.
This is one reason the market for CPD and clinical knowledge platforms is heating up. The winning tools may not be the ones with the biggest model or the flashiest brand. They may be the ones that best connect three layers:
- trusted knowledge retrieval
- workflow usefulness
- lightweight reflective capture
Where the tools fit
This is not a pure head-to-head comparison, because these tools are not identical products. But they are all helping define the new learning stack.
Umbil: workflow-derived reflection built from clinical questions
Umbil is perhaps the clearest example of the workflow-derived model. Publicly, it says that every time a clinician checks a dose or looks up a guideline, the system can identify potential CPD entries, generate a structured reflection aligned with GMC domains, save it to a log, and export the full log for annual appraisal. It frames this explicitly as turning daily work into a finished CPD portfolio. (Umbil Capture Clinical Learning)
That is an important move because it reframes learning capture from a retrospective chore into an almost ambient by-product of practice. Umbil’s broader public proposition also includes UK guidance retrieval, referral-letter drafting, safety-netting advice, and documentation support, which means it is trying to link knowledge lookup and admin outputs directly to reflective capture. (Umbil)
The strongest reading of Umbil is therefore not “an AI that does CPD”. It is a UK workflow assistant that tries to convert point-of-need knowledge work into appraisal-ready evidence. For GPs, trainees, and clinicians who dislike reconstructing learning after the fact, that is a coherent and potentially sticky proposition.
Its limits are also worth stating clearly. Publicly, Umbil is still a standalone web tool rather than a deep EMIS or SystmOne integration, and its current guidance scope is centred on national UK sources such as NICE, CKS, SIGN, and BNF rather than being a fully local-protocol layer. That does not invalidate the model, but it does define its current shape. (Umbil GP Workflow Assistant, Umbil NICE Guideline Summary Tool)
If you want the fuller product review angle, see Umbil AI review for UK clinicians.
Praktiki: deliberate microlearning that behaves like modern workflow
Praktiki sits in a slightly different place. It is not primarily built around converting live clinical work into reflective logs. Instead, it presents itself as an AI-powered microlearning platform for clinicians, with hundreds of practical case-based modules, five-minute daily learning, instant answers from UK guidance, automatic CPD tracking, and direct FourteenFish sync. Its public site says learning evidence can be stored in a portfolio, exported for appraisal, or synced directly with FourteenFish. (Praktiki)
That puts Praktiki closer to the deliberate microlearning side of the new CPD stack. But it is still part of the same shift because it tries to reduce the administrative distance between learning and recording. In older systems, the user completed a module and then separately logged it. In Praktiki’s model, the learning object, the time allocation, and the portfolio trail sit much closer together.
There is also practical design intelligence in the product’s public messaging. Five-minute, case-based learning does not pretend that clinicians have large empty spaces in the day. It acknowledges that much CPD needs to fit into brief windows on phones and desktops. In that sense, Praktiki is less about ambient learning capture and more about frictionless, appraisal-aware microlearning.
That may suit GPs, trainees, and clinicians who want concise, structured, educationally cleaner CPD rather than deriving learning from every workflow event. It is also probably easier for some supervisors and appraisers to interpret than highly automated query-derived logs, because the learning unit is more explicit.
If your broader interest is how different AI tools map to different jobs and roles, see What AI tool should a doctor actually use?.
Heidi Evidence: CPD as part of a wider care-partner workflow
Heidi Evidence is interesting because it suggests another route entirely. Publicly, its Evidence product is positioned around independent, unlimited clinical answers in the flow of care, with trusted sources, shareable outputs, and “CPD tracking, built into practice”. It sits inside a broader Heidi care-partner narrative alongside scribing and communications. (Heidi Evidence)
That means Heidi is not primarily selling CPD. It is selling a larger workflow platform in which CPD capture becomes one feature among several. Strategically, that may be powerful. If a clinician is already using the platform for note generation, follow-up, and evidence lookup, then adding CPD capture can feel natural rather than forced.
The advantage of this model is that it may integrate learning into a broader digital workflow rather than asking the clinician to adopt a separate CPD tool. The risk is that CPD becomes a minor checkbox feature rather than a deeply thought-through reflective layer. Much depends on how meaningful the capture actually is, how source-grounded the outputs remain, and whether the clinician can review and shape the reflection rather than passively accepting AI-generated prose.
So Heidi Evidence represents a third model in the category:
- not pure microlearning
- not only query-to-reflection
- but CPD as one layer inside an end-to-end clinical AI workflow
ClinicalKey AI: education credit inside an institutional evidence engine
ClinicalKey AI adds a fourth model and is useful precisely because it is different again. Publicly, Elsevier says ClinicalKey AI allows clinicians to earn, track, and claim CME and MOC credits directly in the platform, with support from American Boards across eight specialties, while also integrating into Epic and the wider clinical workflow. Elsevier’s 2025 announcements also describe ClinicalKey AI as being available in over 50 countries. (ClinicalKey AI individual clinicians, Elsevier press release, 4 March 2025)
This is a useful reminder that the category is not purely UK-specific. International evidence engines are also moving toward workflow-linked education capture. But ClinicalKey AI’s continuing-education framing is more institutional and more American in its accreditation logic. CME/MOC is not the same cultural and professional frame as UK appraisal and revalidation, even if the underlying idea is similar.
That makes ClinicalKey AI relevant to the thesis but not perfectly analogous to the UK-facing products. It shows that education capture is becoming a point-of-care feature, but its strongest resonance is still likely to be with clinicians and organisations already operating inside that international, enterprise-style evidence environment.
What is genuinely useful versus gimmicky
This is the real dividing line in the category.
AI-derived CPD is genuinely useful when it does four things well.
1) It reduces admin rather than adding another layer of it
If a tool saves the clinician from reconstructing what they learned, when they learned it, and why it mattered, that is a genuine gain. If the clinician must still heavily rewrite, recategorise, manually export, and reconcile everything, then the automation may be more theatrical than useful.
2) It is grounded in trusted, inspectable sources
Learning that comes from vague AI synthesis without visible source logic is weak. Learning that emerges from trusted guidelines, evidence, or case-based educational design is much more defensible. This is why source-grounded products are more credible in the category than generic assistants.
For clinicians who want a knowledge-first layer before any CPD layer, that is also where iatroX becomes relevant. The Clinical Q&A Library, A-Z Clinical Knowledge Centre, and How iatroX Works are all designed around practical, clinician-oriented retrieval rather than passive content consumption.
3) It helps at the point of need
The most valuable learning is often the learning that happens when a real clinical problem is in front of you. A system that can capture or support that moment has a structural advantage over one that only works in deliberate, scheduled study time.
4) It supports reflection rather than merely generating polished noise
This may be the most important criterion of all. UK revalidation culture is not supposed to reward empty verbosity. Reflection should identify the learning, the relevance to practice, and any action or change arising from it. A beautifully phrased paragraph that says little is not strong CPD; it is just elegant documentation.
That is where many AI products may fall short if their designers are not careful. It is easy to automate text. It is harder to automate meaningful professional insight.
The danger: “learning theatre”
There is a real risk in this category that deserves naming.
If every guideline lookup, quick query, or drafted document is automatically transformed into a portfolio entry, clinicians may end up with an abundance of low-value reflective fragments. That creates the illusion of educational richness without necessarily improving learning, judgment, or patient care.
One could call this learning theatre: the production of CPD-shaped artefacts that look impressive but do not meaningfully deepen practice.
This is not merely a theoretical concern. The GMC and AoMRC emphasis on reflection, relevance, and proportionate supporting information exists precisely to guard against excessive box-ticking. A good AI CPD tool should therefore help clinicians surface meaningful moments, not flood them with administrative confetti. (GMC guidance, AoMRC core principles)
The best products in this category will therefore be the ones that know when not to create a formal learning artefact.
Who benefits most from this new stack
The clinicians most likely to benefit are those whose learning is already happening in intense, fragmented, real-world contexts.
GPs and primary care clinicians
Primary care generates constant point-of-need learning: prescribing questions, pathway checks, referral decisions, safety-netting, documentation, and patient explanations. Any tool that captures useful learning from those events can meaningfully reduce friction.
Trainees
Foundation doctors, GP trainees, registrars, and many medical students already carry a dual burden of service work and portfolio maintenance. Tools that reduce the distance between those two worlds can be genuinely helpful.
IMG doctors
Clinicians adapting to a new healthcare system often learn continuously through real clinical encounters. A system that helps capture that learning in a local, structured, appraisal-aware way may be particularly valuable. For that audience, the broader Academy and the practical clinical learning routes inside iatroX can also be useful.
Supervisors and educators
There is a potential secondary value here. If learning capture becomes more native to practice, supervisors may get more realistic insight into what clinicians are actually struggling with and improving on, rather than only seeing formal course certificates.
Clinicians with appraisal friction
Some clinicians do plenty of learning but consistently struggle to document it well. This category exists largely for them.
The best future is probably not one tool, but a stack
A useful way to think about the market is that the “new learning stack” may end up having several layers:
- a knowledge layer for clinical questions and guideline retrieval
- a workflow layer for documentation, summaries, letters, and patient advice
- a capture layer for reflection, portfolio evidence, and exports
- an integration layer for appraisal systems, EPRs, or educational platforms
Different products currently own different parts of that stack.
Umbil is strong in the connection between question asking, output generation, and reflective capture.
Praktiki is strong in microlearning plus easy CPD recording and FourteenFish sync.
Heidi Evidence points toward evidence retrieval and CPD inside a broader care-partner ecosystem.
ClinicalKey AI shows how education credit can live inside an institutional evidence engine.
And that is precisely why this category matters. It is no longer just about “which AI answer engine should I use?” It is about how learning, evidence, workflow, and professional maintenance are starting to converge.
A practical framework for evaluating AI CPD tools
Before relying on any AI tool to support CPD, UK clinicians should ask:
Is the learning grounded in trusted sources?
If not, the resulting CPD trail may be weak from the start.
Is the reflection meaningful?
Does the tool help identify what changed in understanding or practice, or does it merely output generic sentences?
Does it reduce appraisal burden?
Or does it just move the burden around?
Is it proportionate?
Can you curate and refine what is captured, or does the system generate too much noise?
Does it fit your workflow?
A beautifully designed CPD feature is still a poor feature if you never use the surrounding product.
Is it aligned with UK professional reality?
That means reflection, supporting information, and practical usefulness, not just credit accumulation for its own sake.
Bottom line
Yes, AI can turn real clinical work into CPD.
But only under certain conditions.
It works when the learning is real, the sources are trustworthy, the capture is proportionate, and the reflection remains meaningful. It works when the tool reduces the distance between clinical practice and professional development instead of creating more bureaucratic performance. And it works best when the product feels native to the clinician’s actual day rather than bolted awkwardly onto it.
That is why this category is becoming strategically important. The future of clinician learning is unlikely to be just more webinars, more passive reading, or more separate admin systems. It is more likely to be a blended model in which some learning remains deliberate and some is captured directly from real work.
Umbil represents the workflow-derived reflection model.
Praktiki represents appraisal-aware microlearning.
Heidi Evidence represents CPD as part of a wider care workflow.
ClinicalKey AI represents education capture embedded inside a global evidence engine.
The winner will not be the tool with the flashiest AI.
It will be the one that makes professional maintenance feel like a natural extension of good clinical work.
For clinicians who want to start from the knowledge side of that equation, explore the Clinical Q&A Library, the A-Z Clinical Knowledge Centre, and How iatroX Works. For the broader category view, the next read should be What AI tool should a doctor actually use?.
