
Sully, The Poster Child for The High Reliability Mindset

I think those of us who have adopted the High Reliability Mindset should vote every year for the person who uses our HRM skills and tools to achieve the year’s most amazing outcome. That goes for superstars both inside and outside of medicine. My vote this year goes to Chesley Sullenberger, the heroic pilot of US Airways Flight 1549. His story is the subject of the new movie “Sully” with Tom Hanks, and by now everyone knows the details. Sully was flying an Airbus A320, about two minutes into its flight with 150 passengers and five crew aboard, when the plane flew through a huge flock of birds. The birds were ingested into both engines and shut them down only about 3,000 feet off the ground. His airplane was now about as aerodynamic as a cinder block. With almost superhuman calm and control, Sully made the life-saving decision to ditch the airplane in the Hudson River. He resisted the temptation to try to get the aircraft back to the airport. He would have had to defy physics to fly the big A320 back, fully fueled and loaded with people and baggage (very heavy); he knew it right away and made the right call to put the plane down in the water.

If the High Reliability Mindset can be distilled into a single overarching safety principle, it is that we need to use our entire spectrum of HRM tools to get to a place in our minds where we are capable of making a quick and accurate assessment of a major problem, processing complex information, and immediately formulating a plan to do the right thing. That goes even for those times when our brains are on fire. So let’s dissect the story and find the lessons we can learn. According to the NTSB review of the accident, about 12 seconds after the bird strike Sully took control of the airplane by hand, and five seconds later he called for the QRH (Quick Reference Handbook) ‘Engine Dual Failure’ checklist from his first officer. This was the checklist that contained guidance to follow if an engine restart was not possible and if a forced landing or ditching was anticipated. With the laws of physics now ganging up on them, they had no time to complete the checklist because of the airplane’s low altitude and the limited flight time remaining as the plane descended. During those few short moments, Sully assessed his dire situation: low to the ground in a big, heavy airplane that had suddenly been turned into a glider by a flock of birds. I’m sure he knew the glide ratio of his A320 – the horizontal distance the airplane can cover for each unit of altitude it loses with no power – which is about 15:1. That meant from 3,000 feet he had, at best, roughly eight miles of glide in perfectly still air, and considerably less once he turned and configured to land – but he was already 12 miles from the airport. Not an enviable position to find yourself in. Pilots call the temptation to get back to the airport after an engine failure “stretching the glide,” and it never works, because the laws of physics grant you no waivers. If you lose your engines in the air, you have climbed up a decision tree with very few branches, so you have to climb out on the right one. Decision time.
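A quick back-of-the-envelope check of that arithmetic, assuming the roughly 15:1 figure above and ignoring wind, turns, and configuration changes:

$$\text{glide distance} \approx 15 \times 3{,}000\ \text{ft} = 45{,}000\ \text{ft} \approx 8.5\ \text{statute miles}$$

In the real world, a turn back toward the runway costs altitude and energy, so the usable range is meaningfully shorter – which is exactly why “stretching the glide” is a trap.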

First, and don’t get me wrong on this: I am a huge advocate of checklists. I’ve written about them on this site and published in peer-reviewed surgical journals on the use of checklists in surgery. Just for a refresher, go back and review the recent posts on Standard Operating Procedures and previous posts on Timeouts, Personal Minimums, and as far back as the first post on this site. HOWEVER – and it’s a big one – there are those brains-on-fire times when you just don’t have time for written checklists and have to rely on your memory. It’s something of a conundrum, but still a very important principle in patient safety and in cockpit safety. The reality is that sometimes there just isn’t time to pull out a lengthy printed checklist and scroll through the items to get you or your patient out of a jam. Furthermore, it is simply impossible to contemplate every permutation, combination and iteration of a big acute problem and then formulate a usable checklist. So for those of us who advocate the High Reliability Mindset, there has to be another way to manage emergencies and imminent disasters. Air Force pilots call these Bold Face Items – emergency-procedure memory items that must be memorized and recited back during training exercises. We even teach our children instant-recall checklist items such as “Stop, Drop and Roll” in case of a fire. So it can be done, but how?

Neuroscience offers a few ways to help us remember elements that are time-critical and have to be recalled from memory, but ALL of this depends on practicing and storing the information and skills long before they are needed in an emergency. Physicians and pilots handle complex information with different vocabularies and different equipment, but the lessons apply equally to both, because our brains handle this kind of information and time pressure in the same way. Experts practice and drill and simulate to store time-critical information in the frontal lobes for rapid access in the decision-making process. Experts make decisions reflexively, based on instant recall of this information. Amateurs store information in the temporal lobes, where critical information is short-lived and hard to find. Amateurs make deliberative, time-consuming decisions because they don’t have as much stored information and have to “figure it all out” again.

So before any tricks can be applied to store those emergency memory items, you have to turn your brain into an expert’s brain with the High Reliability Mindset: frontal-lobe storage built from multiple practice sessions and simulations. Here’s how you can commit to memory, and later recall, those hot-button, instantly accessible items.

  • One of the most important factors is the set of cues and triggers right in the environment you’re in. If the right triggers are present, you’ll retrieve the appropriate items from your memory stores and then execute them.
  • Another important factor is the number of memory items you need to call up for an emergency. One trick I use is to commit to memory the number of items on my memory checklist, then keep going back into my frontal lobes until I have done the number I memorized. Most neuroscientists think the average person can keep lists of 3 to 7 items in rapid-access storage.
  • Keep your memory checklists simple, with only one “bullet” action per line, not a complicated “if this, do that.” The best number is probably no more than 3 steps per memory line. (The short sketch after this list shows one way to keep yourself honest about these limits.)
  • When you are in an actual emergency, the situational factors of the patient’s predicament and the associated distractions will work hard to keep you from carrying out the tasks on your Memory Item Checklist. Respect this – which means practice in your head, and in simulation when available, under stressful conditions. Make it real, make it feel real, and take the drills seriously, to cement your memory items in the front of your brain where they will do you and your patient the most good.
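To make those limits concrete, here is a minimal sketch – my own illustration, not a published tool – of how you might encode a memory-item (boldface) checklist so it stays within the bounds described above:

```python
from dataclasses import dataclass

MAX_ITEMS = 7           # most people hold only about 3-7 items in rapid-access memory
MAX_STEPS_PER_ITEM = 3  # keep each memory line to a few simple, single-action steps


@dataclass
class MemoryItem:
    trigger: str            # the environmental cue that should call this item up
    steps: tuple[str, ...]  # short, one-action steps, recited in order


def build_memory_checklist(name: str, items: list[MemoryItem]) -> list[MemoryItem]:
    """Validate a memory checklist against the limits discussed above."""
    if not 1 <= len(items) <= MAX_ITEMS:
        raise ValueError(f"{name}: keep the checklist between 1 and {MAX_ITEMS} items")
    for item in items:
        if len(item.steps) > MAX_STEPS_PER_ITEM:
            raise ValueError(f"{name}: '{item.trigger}' has too many steps to recall under stress")
    return items


# A hypothetical example, loosely modeled on "Stop, Drop and Roll"
fire_drill = build_memory_checklist(
    "Clothing on fire",
    [MemoryItem(trigger="clothes ignite", steps=("stop", "drop", "roll"))],
)
```

The point of the exercise is not the code itself but the discipline it enforces: if an item won’t fit within those limits, it belongs on a written checklist, not in your head.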

The story, and the movie, have a happy ending. Responding to the captain’s report of a bird strike, controller Patrick Harten gave the flight a heading to return to LaGuardia and told him that he could land to the southeast on Runway 13. Sullenberger responded that he was unable: “We can’t do it,” and then, “We’re gonna be in the Hudson,” making clear his intention to bring the plane down on the Hudson River because of a lack of altitude. Air traffic control at LaGuardia reported seeing the aircraft pass less than 900 feet (270 m) above the George Washington Bridge. About 90 seconds before touchdown, the captain announced, “Brace for impact,” and the flight attendants instructed the passengers how to do so.

The plane ended its six-minute flight at 3:31 pm with an unpowered ditching in the middle of the North River section of the Hudson River, roughly abeam 50th Street (near the Intrepid Sea-Air-Space Museum) in Manhattan and Port Imperial in Weehawken, New Jersey. Sullenberger said in an interview on CBS television that his training prompted him to choose a ditching location near operating boats so as to maximize the chance of rescue. The location was near three boat terminals: two used by ferry operator NY Waterway on either side of the Hudson River and a third used by tour boat operator Circle Line Sightseeing Cruises. It’s a fascinating look into the emergency thinking and actions of a well-trained pilot to listen to the air traffic control voice recordings, or to read the actual transcript of the cockpit voice recorder (CVR) for the brief flight, on the NTSB web site.

Some highlights from the transcript are particularly insightful. Sully’s mic is HOT-1, his first officer is on mic HOT-2, TCAS is the traffic collision avoidance system, EGPWS is the enhanced ground proximity warning system, and US Airways flights contact air traffic control with the call sign “Cactus”…

15:27:10.4 HOT-1 birds.

15:27:11 HOT-2 whoa.

15:27:11.4 CAM [sound of thump/thud(s) followed by shuddering sound]

15:27:12 HOT-2 oh #.

15:27:13 HOT-1 oh yeah.

15:27:13 CAM [sound similar to decrease in engine noise/frequency begins]

15:27:14 HOT-2 uh oh.

15:27:18 CAM [rumbling sound begins and continues until approximately 15:28:08]

15:27:23.2 HOT-1 my aircraft.  (Sully takes over hand flying airplane)

15:27:24 HOT-2 your aircraft.  (First Officer acknowledges, remember the post “It’s my pool, It’s your pool”?)

15:27:28 HOT-1 get the QRH… [Quick Reference Handbook] loss of thrust on both engines.

15:27:30 FWC [sound of single chime begins and repeats at approximately 5.7 second intervals until 15:27:59]

15:27:32.9 RDO-1 mayday mayday mayday. uh this is uh Cactus fifteen thirty nine hit birds, we’ve lost thrust (in/on) both engines we’re turning back towards LaGuardia.

15:27:42 DEP ok uh, you need to return to LaGuardia? turn left heading of uh two two zero.

15:28:02 HOT-2 airspeed optimum relight. three hundred knots. we don’t have that.

15:28:03 FWC [sound of single chime]

15:28:05 HOT-1 we don’t.

15:28:05 DEP Cactus fifteen twenty nine, if we can get it for you do you want to try to land runway one three?

15:28:05 CAM-2 if three nineteen-

15:28:10.6 RDO-1 we’re unable. we may end up in the Hudson.

15:28:14 HOT-2 emergency electrical power… emergency generator not online.

15:28:18 CAM [sound similar to electrical noise from engine igniters ends]

15:28:30 HOT-2 distress message, transmit. we did.

15:28:31 DEP arright Cactus fifteen forty nine its gonna be left traffic for runway three one.

15:28:35 RDO-1 unable.

15:28:36 TCAS traffic traffic.

15:29:11 PA-1 this is the Captain brace for impact.

15:29:14.9 GPWS one thousand.

15:29:25 RDO-1 we can’t do it.

15:29:26 HOT-1 go ahead, try number one.

15:29:28 RDO-1 we’re gonna be in the Hudson.

15:29:33 DEP I’m sorry say again Cactus?

15:29:37 GPWS too low. terrain.

15:29:41 GPWS too low. terrain.

15:29:49 EGPWS terrain terrain. pull up. pull up.

15:29:51 DEP Cactus uh….

15:29:55 EGPWS pull up. pull up. pull up. pull up. pull up. pull up.

15:30:04 GPWS too low. terrain.

15:30:06 GPWS too low. gear.

15:30:09 CAM-2 got no power on either one? try the other one.

15:30:13 EGPWS caution terrain.

15:30:14 DEP Cactus fifteen twenty nine uh, you still on?

15:30:15 FWC [sound of continuous repetitive chime begins and continues to end of recording]

15:30:15 EGPWS caution terrain.

15:30:21 HOT-1 got any ideas?

15:30:22 DEP Cactus fifteen twenty nine if you can uh….you got uh runway uh two nine available at Newark it’ll be two o’clock and seven miles.

15:30:23 EGPWS caution terrain.

15:30:23 CAM-2 actually not.

15:30:24 EGPWS terrain terrain. pull up. pull up. [“pull up” repeats until the end of the recording]

15:30:38 HOT-1 we’re gonna brace.


In June 2005, NASA published a report that discussed the challenges of emergency and abnormal situations in aviation. The report states that “some situations may be so dire and time-critical or may unfold so quickly” that pilots must focus all of their efforts on the basics of aviation – flying and landing the airplane – with little time to consult emergency checklists. This sounds a lot like what those of us who deal with medical emergencies and trauma confront frequently. The report indicated that, although pilots are trained for emergency and abnormal situations, it is not possible to train for all possible contingencies. Further, although training provides operators with greater flexibility than guidance alone, commercial carriers face time and financial constraints that limit the “range and depth” of the emergencies trained to those that are most common and for which the checklist procedures work as expected. The report also pointed out that, although simulators have a limited ability to replicate real-world emergency and abnormal situations and demands, training for these situations still benefits pilots. The NASA report noted that a review of voluntary reports filed with the Aviation Safety Reporting System (ASRS) indicated that over 86 percent of “textbook emergencies” (those emergencies for which a good checklist exists) were handled well by flight crews, but only about 7 percent of non-textbook emergencies were handled well.

The NASA report highlights many of the principles of training in healthcare that have appeared in previous posts on this site. For example, the report noted that, although checklists and procedures cannot possibly be developed for all contingencies, checklists should be developed for emergency and abnormal situations “for all phases of flight in which they might be needed.” Further, the report stated that, when designing checklists and procedures for emergency and abnormal situations, attention should be paid to the wording, organization, and structure to ensure that they are easy to use, clear, and complete. The report also indicates that, because attention narrows during emergency and abnormal situations due to increased workload and stress, checklists and procedures should minimize the memory load on flight crews; some airlines and manufacturers have accordingly reduced the number of items that must be memorized by the flight crew (memory items). In addition, because flight crews have limited opportunities to practice abnormal situations, performing the appropriate procedures requires greater effort and concentration. Finally, flight crewmembers’ attention can narrow to the point that they become cognitively rigid, which reduces their ability to analyze and resolve the situation.

So in the end, this episode shows us the ultimate payoff of training, practice, and use of the High Reliability Mindset tools. For us, the take-away lessons are clear. Don’t rely on written checklists for every possible contingency. Develop a cogent, consistent way to process emergency information and formulate an out for your patients, or your passengers. The HRM thought process works, and it applies no matter what unique circumstances you face in the middle of the night when your brain is on fire.


Medical Error – New Statistics but the Same High Reliability Mindset Solutions

This week the press is buzzing about a new study just published in The British Medical Journal (linked here) on medical errors in the United States. It is certainly something we High Reliability Mindset proponents already know about, but the study presents a new angle and a refreshing take on one of the most critical issues in healthcare – one that has also been the topic of this blog for the past 5 years.


The authors, from Johns Hopkins Medical School, used a new epidemiological approach that tries to identify the root causes of medically related deaths, such as those due to medical error, in order to calculate more accurate death rates. Usually, death-rate statistics are based on the cause of death that docs fill out on the death certificate – the traditional, immediate cause (e.g., “cardiac arrest”). That is the immediate issue that led to the patient’s demise, but not necessarily the deeper root cause of the death. The authors reason, and correctly I believe, that this is not really a cause of death but more realistically a marker of the death itself, and therefore not truly reflective of deaths due to errors. For 17 years now, statistics on death due to medical error have been discussed in terms of the famous Institute of Medicine report optimistically entitled “To Err is Human.” The 1999 IOM study put the number of deaths due to medical error at as many as 98,000 per year in the US. Using these newer methods, the BMJ study puts that number at more than 250,000, which would make medical error the third leading cause of death in the US.
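To see the mechanics of that kind of extrapolation, here is a deliberately simplified sketch with made-up numbers (my own illustration, not the BMJ authors’ actual model): take the rate of deaths attributed to error observed in study samples and scale it to the total number of annual US hospital admissions.

```python
def estimate_annual_deaths(deaths_in_samples: int,
                           admissions_in_samples: int,
                           annual_us_admissions: int) -> float:
    """Scale an observed death-from-medical-error rate up to a national estimate.

    This is rate-based extrapolation in its simplest form; the published study
    pools several samples and adjusts for their designs, which this sketch omits.
    """
    observed_rate = deaths_in_samples / admissions_in_samples
    return observed_rate * annual_us_admissions


# Hypothetical inputs for illustration only (not the study's figures):
# 70 error-related deaths found in 10,000 reviewed admissions, scaled to
# roughly 35 million US hospital admissions per year.
print(f"{estimate_annual_deaths(70, 10_000, 35_000_000):,.0f} deaths per year")
```

The whole debate, of course, is over how accurately the observed rate captures deaths that were truly caused by error, which is exactly the question the new methodology tries to answer.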


Of course, statistics can be bludgeoned into any conclusion the statisticians want, but this new review is an honest and thoughtful study that takes a fresh look at this chronic problem and comes up with new and insightful data. As adopters of The High Reliability Mindset, we agree with the conclusions of the report. It states: “Human error is inevitable. Although we cannot eliminate human error, we can better measure the problem to design safer systems mitigating its frequency, visibility, and consequences. Strategies to reduce death from medical care should include three steps: making errors more visible when they occur so their effects can be intercepted; having remedies at hand to rescue patients; and making errors less frequent by following principles that take human limitations into account.”


As we have discussed since the first post on this blog, the basic fact is that human error will occur, so we must engineer systems that trap these small missteps early and prevent an ultimate catastrophe for our patients. We must also fight this problem by understanding when and how error occurs, and by maintaining constant vigilance against these potentially fatal mistakes. Click on any link on the main page to find HRM tools that will help accomplish this aim.
