System Resilience In Mass Casualty Incidents: Operational Challenges
- SQ
- May 21
- 4 min read
Updated: May 21
During my time in flight school, we were required to have sections of the FAR/AIM (Federal Aviation Regulations & Aeronautical Information Manual) etched into our heads. It is a dense yet indispensable tome that lays out every legally binding "rule of the road" for flying. Buried within its bureaucratic prose is a kind of practical folklore: the so-called first law of flying, which declares that “the pilot in command of an aircraft is directly responsible for, and is the final authority as to, the operation of that aircraft.” In other words, the burden of compliance begins and ends with the one at the controls. Close behind it is the second law: “In an in-flight emergency requiring immediate action, the pilot in command may deviate from any rule of this part to the extent required to meet that emergency.”
(Actually, the oft cited “first law” appears much later, in Title 14, Chapter I, Subchapter F, Part 91.3, under “Responsibility and authority of the pilot in command.”)
As cadets, we learn that while strict compliance is the norm, meaningful deviations are tolerated when the moment demands it, because no manual, no matter how thick, can anticipate the shape of every storm cloud. Healthcare workers live a version of this reality too. Verbal orders, despite being error-prone, unverifiable, and explicitly discouraged, are nonetheless common in medical emergencies where speed is survival.
This same spirit of creative deviation likely prevented a recent “mini mass casualty” event from becoming an even greater tragedy. Just weeks ago, a fire broke out at a shophouse used for conducting children's cooking lessons and holiday camps. The blaze claimed the life of one child, left others unconscious or severely burned, and broke the hearts of many. The final casualty count stood at six adults and sixteen children, but it could have been far worse.
With flames devouring the building, others were stranded on the third-floor ledge. Some of the children contemplated jumping down to save themselves.
A group of nearby migrant construction workers rushed over with a section of scaffolding and erected it as close to the building as possible. It only reached the second floor, so they carried a step ladder up the scaffold, leaned it precariously against the burning structure, and climbed toward the children. Others anchored the bases of both the ladder and the scaffold. One by one, they helped the trapped children and teachers down.

In any other context, what they did would have been condemned as “unsafe practice”, a cascade of safety violations, each more glaring than the last. Using a ladder atop a scaffold is a well-known hazard, implicated in more than a few serious injuries. A human chain dangling off the side of that scaffold only compounds the risks: fall hazards, structural instability, overload. But in that moment, safety wasn’t the point. Survival was. They accepted short-term danger to serve a larger, more urgent imperative. And in doing so, they demonstrated something no rulebook can easily capture: discretion under pressure. Improvisation with purpose.
Healthcare, by contrast, is a world layered with policies, protocols, and performance metrics—systems designed to preempt failure through tight control. In such environments, safety born of improvisation is rarely rewarded. When something goes wrong, investigations often follow a Safety-I script: identify the deviation, correct the actor, reinforce the rule. The logic is tidy. If harm occurred, someone must have strayed.
But this view is narrow. It blinds us to the fact that day-to-day success, particularly in complex, fast-moving clinical environments, often depends on some degree of adaptation. We still hear of doctors quietly documenting patient notes retrospectively, so that precious clinic hours can be spent on patient care and interaction. Such adaptive acts, unwritten, unmeasured, and at times unsafe, are often what hold things together.
Safety managers must resist the comfort of clean narratives. To understand a system fully, it is not enough to ask how things fail. We must also ask how they succeed the majority of the time, and recognize why appropriate (or inappropriate) deviations were deemed necessary. This is the essence of Safety-II, which doesn’t replace Safety-I but complements it. They are not steps in a sequence. They are parallel perspectives, two sides of the same coin.

Just as risk management is about mitigating and monitoring risk rather than eliminating it, safety isn’t about eliminating all hazards. It encompasses taking the right risks for the right reasons at the right moments. And sometimes, that means doing exactly what the manual warns against, because the manual never met the fire.

If you've scrolled down this far, welcome to the post-credits scene. While the main commentary closed on a hopeful and actionable note, it’s worth acknowledging the quiet complexities that remain. Translating Safety-II into practice, particularly in our local healthcare context, is not without friction. What is praised in the language of Safety-II can, quite reasonably, be frowned upon through the lens of Safety-I. Yet both perspectives have their place. In a parallel universe, had that ladder slipped or the scaffolding given way, the narrative might have shifted from heroism to recklessness. The very act that saved lives could have been recast as a cautionary tale of unsafe improvisation.
This is one of the many operational challenges in our journey toward true resilience. How do we support human ingenuity without slipping into dangerous deregulation? How can we advocate for adaptation and discretion without being misunderstood as endorsing unchecked autonomy? How do we differentiate brilliance from boldness, or efficiency from impulsive, rule-breaking behavior? Should healthcare once again look to the FAR/AIM, and to the aviation industry, for reference?
Thoughts?