
Not Accepting the Accepted – There are No Such Things as “Normal Accidents”

Charles Perrow’s 1984 book “Normal Accidents” is a great study of system failures, but his main thesis that accidents are “normal” begs for a response from us High Reliability Mindset adopters. Perrow claims, “Our ability to organize does not match the inherent hazards of our organized activities. What appears to be a rare exception, an anomaly, a one-in-a-million accident, is actually to be expected. It is normal.” Sure, humans can and will make mistakes, and some of those mistakes can lead to disaster. BUT, and it’s a big one, as believers in the HRM safety theory we preach that accidents are not normal and certainly not the inevitable result of an error. The central tenet of HRM theory is that we can stop the out-of-control train from wrecking by practicing High Reliability Mindset skills. So this is where we part company with Mr. Perrow: he believes disaster is inevitable once a mistake or error has been made. We refuse to accept the accepted theory that nothing can be done about the outcomes of error; we believe we can interrupt that chain of events. Maybe we should write a book called “Normal Errors” and show Mr. Perrow how to avoid his prediction of ultimate calamity with our HRM skills.

 

As adopters of the HRM theory, we understand that the central HRM principle for preventing error from ending in tragedy is built around trapping little missteps while they can still be controlled, before they propagate all the way to disaster. So the primary disconnect I have with Perrow’s theory is not the inevitability of error; it is his leap from there to the conclusion that once an error is made, disaster is sure to follow.

 

If you want proof that our thesis is true, I suggest you spend a few days reading one of the most hair-raising and scariest books I know, Eric Schlosser’s 2013 book “Command and Control.” It is an awesome work of research that details the multiple missteps that characterized American nuclear power development and nuclear weapons research and deployment. His story is sobering, terrifying, and just plain amazing: it is a wonder the world survived our floundering early steps into the nuclear age. The mistakes started right at the beginning, with the Manhattan Project’s first chain reaction in a squash court under the west viewing stands of Stagg Field at the University of Chicago in December 1942. It was pretty much downhill from there in terms of incidents, errors, missteps and near disasters, including dozens of nuclear bombs falling from airplanes (fortunately not detonating) and bombs being mishandled, mislabeled and generally mistreated. But that’s the point: these were all near misses. No nuclear weapon has ever detonated by accident, despite a mountain of screw-ups that, if Mr. Perrow is to be believed, really should have ended our world in a chain of nuclear catastrophes.

 

So, what are we HRM believers to do to prevent “normal” accidents from injuring or even killing our patients and ourselves? Well, that’s what this blog is all about! Most of the previous posts on this blog deal with key elements of our defenses against disaster, but let’s pick out a few and put them into a logical format to combat Mr. Perrow’s conclusions.

 

We HRM adopters start from the acknowledged position that mistakes will happen, and that we can engineer numerous defenses into our own performance and our systems to prevent these small errors from ending in disaster. First, we know there are times when mistakes are more likely to happen; we can anticipate those times and be extra vigilant when the risk is high. My series of posts on error-producing conditions, linked here, details this. We can also use our intellect and our understanding of these conditions to imagine scenarios of bad outcomes, pre-plan ways to work out of problem situations, and prevent further error propagation. Check out “Imagine Patient Safety” for specifics. As we monitor our safety skills, we need to define our individual, team and system safety limits, as detailed in “Bingo and Other Parlor Games” posted here. We can maintain a safety culture and an environment of safe performance for ourselves and our team, as discussed in “Sterile Cockpits.” And while we’re at it, we can monitor and maximize our own individual performance, as detailed in these posts.

 

Let’s not overlook teamwork. A healthy team culture is key to checking up on each other, and “Getting Them to Play Well Together” is one of the major duties you have as leader of your team. I always tell my co-pilot in the cockpit and my residents in the OR that I’m going to miss something, and so will they, but as long as we work together we’re not going to miss the same thing. Remember that “A Team of Experts is not an Expert Team.” Check on each other; maintain good team performance and communication. Get yourself and your team into a frame of mind where you are always maintaining “High Reliability Habits” for maximum safety.

And remember that you are the last chance to prevent injury, as in Reason’s old Swiss Cheese model: “It All Comes Down to that Last Slice of Cheese.”

 

The plain fact is that accidents don’t happen by accident. They are born of errors, then nurtured and propagated to adulthood by a series of interacting events that can and must be interrupted at any number of links in the chain, by system vigilance and by individuals maintaining a High Reliability Mindset.

 

Posted in Error Producing Conditions, High Reliability Mindset, High Reliability Organizations, Patient Safety.
