Executive summary
In 2025, the application of artificial intelligence in UK cancer care has moved decisively from research pilots to governed, at-scale deployments. The evidence is now compelling: prospective clinical trials show that AI can significantly reduce the mammography reading workload in population screening without harming detection rates, and may even increase the number of cancers found (PubMed).
Beyond imaging, the landscape is maturing rapidly. Digitally assisted pathology has entered regulated clinical use, and AI-powered radiotherapy auto-segmentation is delivering substantial time savings while maintaining quality (FDA Access Data, NCBI). Furthermore, a new class of AI models is demonstrating the ability to flag cancer risk months or even years earlier from routine EHR data and pre-diagnostic scans (Nature). For NHS cancer services, the key to harnessing these benefits lies in a structured approach to adoption, guided by the robust governance frameworks set out by bodies like ASCO and NICE.
Where AI fits across the oncology pathway
Screening & triage (mammography, CT)
The most mature application of AI in oncology is in population screening. AI algorithms are being used to augment the double reading of mammograms, reducing clinician reading workload and, in some studies, improving cancer detection rates (PubMed).
Diagnosis (digital pathology & radiology)
In diagnostics, AI is acting as a powerful "second set of eyes." The first FDA de novo clearance for an AI tool in pathology was granted in 2021 for a prostate cancer detection tool, signalling the clinical maturity of AI-assisted slide reading. These tools are now becoming available with UKCA/CE-IVD marking.
Treatment planning (radiotherapy)
One of the biggest efficiency gains is in radiotherapy planning. Multi-centre validations have shown that deep-learning auto-segmentation for organs-at-risk can deliver workflow time savings of 40–75% while maintaining clinically acceptable contour quality, freeing up dosimetrists and clinical oncologists to focus on more complex planning tasks (Nature, PubMed).
Precision oncology & follow-up
An exciting frontier is the use of AI for risk prediction and opportunistic early detection. Recent research has shown that AI models can analyse routine EHR data or pre-diagnostic scans to identify patients at high risk of developing cancers like pancreatic cancer, long before they become symptomatic (Nature).
Evidence highlights (what to cite in your business case)
- Mammography RCTs: A large-scale Swedish population screening study is a key piece of evidence. It found that AI-supported reading delivered a non-inferior cancer detection rate with a substantially lower workload for radiologists.
- Digital pathology (regulatory proof-point): The Paige Prostate Detect tool was the first AI in pathology to receive FDA de novo authorisation, a landmark regulatory milestone.
- Radiotherapy planning: Multiple prospective evaluations have now been published, consistently showing significant time savings from deep-learning auto-segmentation, particularly for complex cases like head-and-neck cancers.
- Early detection (pancreas): A 2023 study in Nature Medicine demonstrated that an EHR-based AI model could stratify a patient's risk of developing pancreatic cancer up to three years before a formal diagnosis.
UK lens: adoption signals & public acceptability
- Breast screening AI: The UK National Screening Committee (NSC) and NICE are actively reviewing the evidence for AI in breast screening, with a focus on ensuring cross-vendor generalisability and seamless workflow integration as key appraisal criteria.
- NHS deployments in pathology: National initiatives like PathLAKE Plus are helping to expand the use of AI-assisted pathology for breast and prostate cancer across multiple NHS Trusts.
- Public views: A UK study of screening-eligible women found conditional acceptance of AI in breast screening, provided that human oversight is retained and that communication with patients is clear and transparent.
What good looks like (technical patterns that cut risk)
- Provenance-first: Any AI tool that provides guideline-linked outputs should surface its sources, so that clinicians can verify recommendations against the underlying evidence.
- Robust external validation: The evidence base must include multi-centre, cross-vendor datasets to prove the model generalises beyond its training environment.
- Human-in-the-loop workflow: The safest and most effective models preserve the role of the expert clinician, whether in a second-reader paradigm for screening, tandem reads in pathology, or an "edit-and-approve" workflow for radiotherapy contours.
Implementation playbook (90-day → 12-month)
0–90 days — readiness & due diligence
- Select one high-impact use-case per service line (e.g., AI for breast screening; auto-segmentation for head-and-neck radiotherapy).
- Baseline your key metrics (e.g., radiologist workload minutes per case, contouring time).
- Gather all assurance artefacts from your chosen vendor, including their UKCA/CE mark, MHRA registration, DTAC compliance pack, and DCB0129 clinical safety case.
3–9 months — pilot with pre-declared endpoints
- Screening: Run a pilot to test for non-inferiority on cancer detection rate against a pre-declared margin, measure the workload reduction, and track all safety flags (a minimal statistical sketch follows this list).
- Pathology: Analyse any shifts in error types, the impact on turnaround time, and the yield from AI-prompted second looks.
- Radiotherapy: Measure the time saved per plan, the dosimetric impact, and the rate of re-work required on AI-generated contours.
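As a worked illustration of the pre-declared screening endpoint, the sketch below checks whether the one-sided confidence bound on the difference in cancer detection rates clears an agreed non-inferiority margin. The function name, the 0.25-per-1000 margin, the alpha level and the example counts are illustrative assumptions, not values from any cited trial.

```python
# Minimal sketch of a non-inferiority check on cancer detection rate (CDR),
# using a one-sided Wald confidence interval on the difference in proportions.
# Margin, alpha and counts below are illustrative assumptions only.
from math import sqrt
from statistics import NormalDist

def non_inferiority_cdr(detected_ai, screened_ai,
                        detected_std, screened_std,
                        margin_per_1000=0.25, alpha=0.05):
    """Return (lower_bound_per_1000, non_inferior) for CDR_ai - CDR_std."""
    p_ai = detected_ai / screened_ai
    p_std = detected_std / screened_std
    diff = p_ai - p_std
    se = sqrt(p_ai * (1 - p_ai) / screened_ai +
              p_std * (1 - p_std) / screened_std)
    z = NormalDist().inv_cdf(1 - alpha)        # one-sided bound
    lower = (diff - z * se) * 1000             # per 1000 women screened
    return lower, lower > -margin_per_1000

# Hypothetical pilot numbers, for illustration only.
lower, ok = non_inferiority_cdr(338, 53_000, 262, 52_000)
print(f"lower bound: {lower:+.2f} per 1000; non-inferior: {ok}")
```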
9–12 months — scale or stop
Publish your local results. Align your governance and equity monitoring with the principles from bodies like ASCO. Develop a clear plan for managing model retraining and software updates before scaling up.
Risks & Mitigations
- Generalisation gaps: Insist on seeing cross-vendor evidence from the supplier and conduct your own local shadow-mode testing before going live (see the sketch after this list).
- Automation bias: Implement explicit disagreement workflows and require a formal human sign-off for all clinical decisions.
- Equity & access: Follow ASCO's guidance on monitoring for algorithmic bias and ensure patient communication is clear and inclusive.
Tool categories & examples to watch
- Screening mammography AI: Look for clinical-use solutions with FDA/CE/UKCA marking, such as Transpara and Lunit INSIGHT.
- Digital pathology: Watch for the continued rollout of tools with regulatory clearance like Paige Prostate Detect and others undergoing NHS pilots, such as Ibex.
- Radiotherapy auto-segmentation: This is now a feature in many mainstream treatment planning systems.
- Risk/early detection: Keep an eye on the emerging research on population-level pancreatic risk models.
Measurement framework
- Screening: Cancer detection rate, recall rate, interval cancer rate, reader workload (minutes/case).
- Pathology: Sensitivity/specificity vs a reference standard, major error rate, turnaround time (TAT).
- Radiotherapy: Contouring time, geometric and dosimetric concordance, plan quality assurance.
- Early detection/risk: AUROC, positive predictive value (PPV) at clinically acceptable thresholds, and the downstream impact on time-to-diagnosis (a short computation sketch follows this list).
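As a minimal sketch of how a pilot team might compute a few of the metrics above from exported case-level data: the helper names, example arrays and the 0.5 operating threshold are illustrative assumptions, and a real evaluation would use the screening programme's own definitions (e.g., screen-detected cancers per 1000 women screened).

```python
# Illustrative metric helpers; data and threshold below are assumptions.
import numpy as np
from sklearn.metrics import roc_auc_score

def cancer_detection_rate(cancers_detected: int, women_screened: int) -> float:
    """Screen-detected cancers per 1000 women screened."""
    return 1000 * cancers_detected / women_screened

def ppv_at_threshold(y_true: np.ndarray, y_score: np.ndarray,
                     threshold: float = 0.5) -> float:
    """Positive predictive value at a chosen operating threshold."""
    flagged = y_score >= threshold
    return float(y_true[flagged].mean()) if flagged.any() else float("nan")

def dice(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Geometric concordance between two binary contour masks."""
    overlap = np.logical_and(mask_a, mask_b).sum()
    return 2 * overlap / (mask_a.sum() + mask_b.sum())

# Hypothetical case-level scores and outcomes (1 = cancer confirmed).
y_true = np.array([0, 0, 1, 0, 1, 1, 0, 0])
y_score = np.array([0.1, 0.3, 0.8, 0.2, 0.6, 0.9, 0.4, 0.05])
print("AUROC:", roc_auc_score(y_true, y_score))
print("PPV@0.5:", ppv_at_threshold(y_true, y_score))
print("CDR/1000:", cancer_detection_rate(338, 53_000))

# Toy contour masks to illustrate the geometric concordance check.
mask_a = np.zeros((4, 4), bool); mask_a[1:3, 1:3] = True
mask_b = np.roll(mask_a, 1, axis=0)
print("Dice:", dice(mask_a, mask_b))
```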
Governance & ethics (the non-negotiables)
- Adopt the six guiding principles for clinical AI from the American Society of Clinical Oncology (ASCO): safety, transparency, equity, accountability, oversight, and evidence.
- For any UK screening AI, your evaluation must align with the expectations of NICE and the UK NSC, with a strong focus on validation and clear communication about human oversight.
FAQs
- Can AI replace double reading in breast screening?
- The evidence strongly supports AI-supported reading, which can reduce workload and maintain (or potentially improve) detection rates. National screening programmes will decide on the final model of use after the current large-scale evaluations are complete.
- Is AI safe for the primary diagnosis of cancer on slides?
- AI in pathology has now cleared significant regulatory hurdles, such as the FDA de novo clearance for prostate cancer. However, local validation and a final human sign-off remain essential.
- How much time can radiotherapy auto-segmentation really save?
- Published studies consistently report time savings in the range of 40–75% for contouring organs-at-risk, while preserving clinically acceptable plan quality.
