The RCGP has addressed this directly. The position is nuanced but clear.
The RCGP Position
AI can be a useful tool to support aspects of personal and educational development. But overuse or overreliance risks undermining the very purpose of reflective practice. The purpose of reflection is not the output (the written entry) — it is the process (the cognitive engagement that produces learning).
What Is Acceptable
- AI-generated prompts for reflection, such as "what could you have done differently?" or "how does this relate to capability X?"
- Improving the clarity and structure of your own writing.
- Identifying relevant RCGP capabilities to link.
- Using purpose-built tools (Learner+, iatroX CPD) designed for this workflow.
What Is Not Acceptable
- AI generating the reflection content itself.
- Using AI for MSF feedback or self-reflection assessments.
- Entering patient data into consumer AI tools.
- Submitting polished AI output without genuine cognitive engagement.
The Practical Framework
1. Write your reflection first, even if it is rough and incomplete.
2. Use AI to deepen it: ask probing questions about your entry and identify gaps in your analysis.
3. Edit the result to ensure the final version reflects your genuine thinking.

Never the reverse (AI writes, you edit).
Panel Reality
If every reflection reads like polished AI output with no genuine voice, panels will notice. Formulaic language, identical structures, and generic learning points are red flags. Your ES and ARCP panel may explore individual entries with you, so you need to be able to discuss the content in depth.
Declaration
Be transparent with your ES and appraiser about how you have used AI. Transparency protects you. Both the RCGP and the RCPCH recommend this.
Where iatroX Fits
iatroX's CPD module scaffolds the reflection process — mapping learning activities to professional domains — without generating the reflection content itself. The thinking remains yours.
