This week the press is buzzing about a new study just published in The British Medical Journal (linked here) on medical errors in the United States. It covers ground we High Reliability Mindset proponents already know well, but the study presents a new angle and a refreshing take on one of the most critical issues in healthcare. It has also been a topic of this blog for the past five years.
The authors, from Johns Hopkins Medical School, used a new epidemiological method that tries to identify the root causes of medically related deaths, such as those due to medical error, in order to calculate more accurate death rates. Usually, death rate statistics are based on the cause of death that doctors fill out on the death certificate, and that traditional cause of death (e.g., “cardiac arrest”) is the immediate event that led to the patient’s demise but not necessarily the deeper root cause. The authors reason, correctly I believe, that this is not really a cause of death but more realistically a marker of death itself, and therefore not truly reflective of deaths due to error. For 17 years now, statistics on death due to medical error have been discussed in terms of the famous Institute of Medicine report optimistically entitled “To Err Is Human.” The 1999 IOM study put the number of deaths due to medical error at as many as 98,000 per year in the US. Using these newer methodologies, the BMJ study puts that number at more than 250,000, which would make medical error the third leading cause of death in the US.
Of course statistics can be bludgeoned into any conclusion the statisticians want, but this new review is an honest and thoughtful study that takes a fresh look at this chronic problem and comes up with new and insightful data. As adopters of The High Reliability Mindset, we agree with its conclusions: “Human error is inevitable. Although we cannot eliminate human error, we can better measure the problem to design safer systems mitigating its frequency, visibility, and consequences. Strategies to reduce death from medical care should include three steps: making errors more visible when they occur so their effects can be intercepted; having remedies at hand to rescue patients; and making errors less frequent by following principles that take human limitations into account.”
As we have discussed from the very first post on this blog, the basic fact is that human error will occur, so we must engineer systems that trap these small missteps early and prevent ultimate catastrophe for our patients. We must also fight this problem by understanding when and how error occurs, and maintain constant vigilance to prevent these potentially fatal mistakes. Click on any link on the main page to find HRM tools that will help accomplish this aim.
Posted in High Reliability Mindset, High Reliability Organizations, Patient Safety.
– May 9, 2016
On May 31, 2014, at about 9:30 in the evening, a Gulfstream G-IV business jet bound for Atlantic City was destroyed while attempting to take off from Hanscom Field in Bedford, Massachusetts. Seven people were killed: two pilots, a flight attendant, and four passengers. The airplane hurtled down the runway and into a ravine, and just before the crash that ended their lives, the seasoned pilots can be heard on the cockpit voice recorder repeatedly saying, “I can’t stop it…I can’t stop it.” Even though the plane was going 150 mph, it never left the ground.
So, what went wrong? Like many aircraft, the G-IV is equipped with a safety feature called “flight control locks,” or “gust locks” for short, that prevent the wind from pushing the control surfaces around and damaging the plane while it is on the ground (the emphasis here is on the “on the ground” part). You can still add all the power you want to the engines, but with the locks in place you just can’t steer or take off. The National Transportation Safety Board did a thorough investigation of the accident and, as we have discussed in previous posts on this site, blamed the deadly crash on pilot error. In this case the NTSB had it right: it was totally the crew’s fault. They had left the gust locks engaged, accelerated down the runway, and couldn’t get the plane off the ground even after they passed their takeoff speed.
It turns out that not only did the crew fail to disengage the flight control locks prior to the attempted takeoff; they also didn’t run a simple pre-flight checklist that would have found the problem and avoided the accident. The same two pilots had flown hundreds of flights together and spent thousands of hours in the cockpit together, yet the investigation found that they habitually ignored standard safety procedures. The NTSB cited the “crew’s habitual noncompliance with standard operating procedures and checklists” and found that these pilots had a “long-term pattern” of noncompliant safety behavior. Its conclusion was that failure to adhere to standard operating procedures and common safety principles had caused the deadly accident. These two pilots enabled and reinforced each other’s bad habits through their mutual support of a shared disregard for standard safety practice. As adopters of the High Reliability Mindset, we are all firm believers in High Reliability Habits as key weapons against tragic mistakes.
Just like all disasters, this accident was the result of failures on many levels, as in James Reason’s famous “Swiss Cheese Model” that has been the topic of previous blog posts. With so many “holes in the cheese,” there were multiple opportunities to prevent this tragedy before it ever happened. The concept of trapping small missteps before they can grow into full-fledged disasters is a critical component of the High Reliability Mindset that I have written about in past posts. Certainly, the most direct blame lies with the crew itself, but we can’t overlook the fact that leadership had opportunities here too, and chose to leave the crew’s “habitual noncompliance” uncorrected and unpunished. The failure to develop a healthy team culture is a key component of this tragedy; a healthy culture would have had these pilots checking up on each other instead of both ignoring the same things. “Getting Them to Play Well Together” is one of the major duties we have as leaders of our teams. As we have discussed in the past, just because there are experienced operators and experts on a team does not make it an expert team. Remember: “A Team of Experts is not an Expert Team.” A critical safety habit in team operations is to check on each other, maintain good team performance, and ensure precise communication. Get yourself and your team into a frame of mind where you are always maintaining “High Reliability Habits” for maximum safety.
Another issue is the simple use of checklists, which I have rallied for so many times in the past. A healthy safety culture means each team member enforces safe habits and checklist use by the other members. Whether it’s a simple pre-op checklist in the operating room or a pre-flight checklist in the cockpit, just reading through the items stops a catastrophe before it can even start down the slippery slope. In this case the pre-flight checklist would have immediately identified that the control locks were still engaged, and the crew could have released them with a quick flick of the fingers. Instead they all died. Another integral part of teamwork is communication, as I wrote about in the post It’s My Pool, It’s Your Pool. Not only did this crew not run the checklist, the cockpit voice recorder shows they didn’t even talk to each other about the takeoff procedures.
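For readers who think in code, the checklist discipline described above can be sketched as a simple gate: the high-risk step is refused until every item has been explicitly verified. This is a hypothetical illustration only, not real avionics or hospital software; the item names and function names are invented for the example.

```python
# A minimal sketch of a checklist as an error trap: nothing proceeds
# until every item has been explicitly verified, so a single skipped
# step is caught before it can grow into a catastrophe.

PREFLIGHT_CHECKLIST = [
    "flight controls free and correct",  # this item would have caught the engaged gust lock
    "gust lock released",
    "takeoff briefing completed",
]

def run_checklist(items, verify):
    """Return the list of items that failed verification."""
    return [item for item in items if not verify(item)]

def cleared_for_takeoff(verify):
    """Gate the high-risk step: refuse clearance if any item fails."""
    failures = run_checklist(PREFLIGHT_CHECKLIST, verify)
    if failures:
        # Trap the misstep here, on the ground, where it is still cheap to fix.
        return False, failures
    return True, []
```

The point of the design is that the gate is unconditional: skipping the checklist is not an available path, which is exactly the discipline the Bedford crew had abandoned. For example, `cleared_for_takeoff(lambda item: item != "gust lock released")` refuses clearance and reports the one failed item.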
Bad habits are like viruses – they are contagious – and both pilots on this flight crew caught the fatal bug of complacency. Deeply ingrained, long-standing habits can become impossible to break; it has been said that the chains of our bad habits are too weak to be felt until they are too strong to be broken. Some physicians still push back against standard operating procedures, viewing them as limiting and restrictive. But standards generally enshrine the safest and best practice models, and they are standard for a reason: they work to prevent disaster. SOPs standardize good ideas and assure reproducibly safe outcomes. Their use should be enforced by all team leaders and embraced by all high reliability teams.
Posted in High Reliability Mindset, High Reliability Organizations, Human Factors.
– April 22, 2016