It Appears That the Focus on Safety Has Been at the Expense of Other Aspects of Quality of Care
- SQ
- Jul 24
Updated: Jul 29
This clickbait-ish headline was one of the summary sentences found in the Review of Patient Safety Across the Health and Care Landscape report, published just days ago by the UK’s Department of Health & Social Care. You’ll find it buried at the end of the first finding, under a header that reads: “There has been a shift towards safety (vs other areas of quality of care) over the last 5 to 10 years, with considerable resources deployed, but relatively small improvements have been seen.”

Taken in isolation, without reading the entire 175-page report, one might unfairly label the lead author, NHS England Chair Dr. Penelope Dash, a skeptic or even a critic of patient safety initiatives. In fact, the review offers a thoughtful, if sobering, diagnosis of why well-meaning efforts in patient safety, despite years of substantial investment, often fail to yield meaningful, sustained improvement. Might some of these findings hit too close to home?
Safety by Addition, Paralysis by Accumulation
A recurring theme throughout the report is the tendency to improve safety by "adding more". The landscape is crowded with well-intentioned bodies, reviews, checklists, alerts, scorecards, toolkits, and training modules. Each one, taken alone, is sensible. Taken together, they often feel like bureaucratic sediment, layer upon layer, policy upon policy, until frontline teams can barely move without tripping over a new form to fill or metric to meet. The result? An ecosystem where no one is quite sure who’s in charge, what matters most, or how to tell whether patients are actually safer.
Serious incident analyses have become ritualized: templated, abstracted, stripped of local nuance. It’s a mundane administrative cycle for some, and a psychological torment for others ensnared in a trap-laden health system. And because each report must say something, the conclusions feel templated too: more checks, more training, more supervision. Recommendations pile up like unread emails, each one reasonable on its own, yet collectively impossible to absorb.
Investigations that Illuminate, not Intimidate
Incident investigations, particularly in the UK, are further handicapped by a lack of strategic coordination. Inquiries happen often, and sometimes all at once, each in their own silo, with little cross-talk and no shared clearinghouse for findings. The result is a scattergun approach to learning: disconnected reports, overlapping recommendations, and no coherent pathway to system-wide change. More troubling, some investigations, especially those commissioned reactively or led by groups without formal investigative expertise, can feel less like learning exercises and more like audits of blame. Staff describe a climate of uncertainty and quiet fear, unsure of who’s watching, what will be written down, or how their words might be interpreted later. The resulting reports are often superficial, quick to diagnose, quicker still to prescribe.
One of the quiet strengths in the UK’s safety landscape is the Health Services Safety Investigations Body (HSSIB), a body designed not to regulate or punish, but simply to understand. Staffed with investigators who actually know what they’re doing (including those with backgrounds in human factors, systems safety, and other high-hazard industries), HSSIB was meant to bring scientific discipline to the art of learning from failure. It operates independently from care providers, with a statutory duty not to assign blame or liability so as to foster learning rather than fear. When investigations belong to experts, not committees, we get real insight instead of more paperwork, genuine learning instead of finger-pointing, and the kind of clarity that actually helps systems move forward, instead of just ticking the boxes.
The User Voice, Unused by the System
Despite being a health system that sees hundreds of millions of patient interactions each year, the mechanisms for listening to those patients are surprisingly clumsy. A bewildering spread of patient advocacy bodies exists to speak on behalf of the health services user, yet they are too far removed to drive change where it matters. Most NHS boards still lack an executive for user experience, a role that would be considered basic in any airline, hotel chain, or tech company. It's one thing to conduct surveys or hold listening sessions. It’s another to actually act on what’s heard. And if the user’s voice is filtered through six different layers before reaching decision-makers, is it still a voice, or just background noise?
However, what might be missing isn't just representation, but human-centered design. Patients, caregivers, and even healthcare professionals continue to navigate services that feel unintuitive or downright confusing. When healthcare services are not usable, they are not safe. Safety doesn't just depend on what happens in theatre, at the clinic, or on the ward. It begins the moment someone tries to book an appointment or seek help. If the system frustrates, confuses, or excludes, harm is already underway.
Data-rich, Insight-poor
For a system as data-rich as the NHS, with decades of audit results, incident reports, and user feedback, the report highlights how little of it is converted into insight. Advanced analytics and AI could help surface hidden risks and identify priorities, but most of that potential remains dormant. Instead, data is gathered manually, reported narrowly, and siloed away. In a system overwhelmed by noise, better use of data could be the signal we need.
Critique is Cheap, Creation is Costly
While it is easy to unearth history for a list of shortcomings, it's just as easy to critique a report while overlooking the complexity, trade-offs, and sheer effort it took to complete the review. The report brings to light the fragmentation in the UK’s patient safety oversight landscape, but it leaves the origins of that fragmentation curiously unexamined. The proliferation of safety bodies didn’t happen by accident; it reflects a knee-jerk habit of creating new layers of oversight rather than confronting the cultural and operational roots of failure. Many incident recommendations aren’t ignored because no one’s in charge, but because no one feels safe or supported enough to act on them, or because the work-as-imagined recommendations simply don’t fit the work-as-done environment. The report also risks oversimplifying safety as a mathematical input-output problem through condensed metrics, ignoring the complex adaptive system behaviors that hold today’s messy healthcare systems together.
I also have to acknowledge the possibility of political underpinnings. Last year, Dr. Dash chaired a review of the UK's Care Quality Commission and found "significant failings". A primary-school-level version of this report can be found here.
But it still raises the question: has your well-meaning patient safety effort come at the expense of other aspects of quality of care? How can you be sure?
PS: Thank you Karen for sharing the report with me!