Thanks for the summary of the incident. It was helpful to see that it took you some time to come to the decision to change your plan. Losing an engine mid-flight is not a minor issue, yet at first you still continued flying to the original destination.
Making the decision to change plans, and thereby giving up the desired outcome, is something we try to avoid; as a result we downplay the severity of the problems that are happening to us.
I encountered a related situation recently when snowfall in the mountains blocked our planned route. Instead of taking a detour or going back down into the valley, we pushed on through the snow. We ignored the serious risk of an accident until the very end, when we luckily decided to turn back as we stood in front of a very dangerous section. Even though taking that route was obviously a big risk, it still took us a long time to decide to turn around, because we were strongly attached to our plan. Inertia is hard.
The ancient Greeks, in line with this, thought of hope as a dangerous thing, because it makes us neglect the danger already around us.
That's why I think we get the Pandora's box myth wrong nowadays. Hope, at the bottom of the box, was the most dangerous of all the evils inside it, not, as many believe, the one good thing that was in there.
The main reason to land ASAP after an engine shutdown is that you don't know what caused the shutdown. If one engine went out, the other has a high likelihood of going out, too. And then yes, it will sound stupid in the NTSB report...
Yeah, I couldn't find the manual for his exact model, but the checklists I've seen all end with "land as soon as possible" for these kinds of high-stakes procedures, specifically to keep human factors out of the decision making. It's a weird blog entry, this one; I wouldn't like to fly with people who second-guess checklists.
Many years ago I worked as a sysadmin at a bank. We had an outage -- a lightning strike on our server room caused stuff to fail and shut down all power.
Upon restart, we discovered that our main Oracle database, which supported everything important at the bank, was corrupted.
No biggie, we had a backup database. It was constantly being updated from the primary and was essentially only a couple of minutes behind it.
But the Disaster Recovery Plan, prepared the previous year, clearly stated that in case of a disaster like this, the first step is to make a backup of the secondary database. That would take about 6 hours, which our management didn't want to accept: it would mean the bank could not open the next day, and nobody wanted to make that decision.
So they decided we would skip that step, copy the remaining redo logs to the backup, apply them to the database and then bring it online.
We did this, only to find that the secondary was also corrupted.
To summarise: when the lightning hit the server room, some garbage was written to the redo logs on the primary server. We had a perfectly good secondary, but when we copied the logs from the primary and applied them to the secondary, we corrupted it as well.
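To make the failure mode concrete, here is a toy sketch in Python (made-up names, not how Oracle actually works internally) of why the DRP insisted on backing up the secondary first: replaying redo is irreversible, and a standby that blindly applies whatever the primary shipped will faithfully apply the garbage too.

    import copy

    def apply_redo(database, redo_log):
        """Replay whatever the primary shipped -- including any garbage in it."""
        database.extend(redo_log)

    secondary = ["txn-1", "txn-2"]                        # perfectly good standby
    redo_from_primary = ["txn-3", "garbage-from-strike"]  # lightning damage included

    snapshot = copy.deepcopy(secondary)   # the 6-hour step management skipped
    apply_redo(secondary, redo_from_primary)

    if any("garbage" in rec for rec in secondary):        # corruption discovered later
        secondary = snapshot              # only recoverable because we snapshotted first
        print("restored standby from the pre-apply backup:", secondary)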
Now we have no primary and no secondary... let's check if we have a tape backup!
As it happened, we were transitioning from an old DDS backup method, which was taking too long during the night, to a shiny new DLT tape library. Except the tape library arrived locked down to just a couple of slots (demo mode!) and kept reusing the same 5 tapes over and over. It was impossible to fix this remotely, and we were waiting for a technician to come and unlock the library.
In the meantime, somebody had decided we didn't have enough time to run both the DLT and the DDS backups, so we would only use the new, untested backup method.
So when we tried to recover data from the DLT tapes, we found... they were also corrupted. The same tapes had been overwritten again and again without anyone ever testing a recovery, and now the only backup we had left was corrupted too.
At this point, to our horror, we realised we had not a single consistent backup of our main database.
In the end, I ended up spending two or three days almost NON-STOP on a transatlantic call (that was extremely expensive back then) with an Oracle guru who was telling me how to edit the large data files in a hexadecimal editor to manually fix the mess.
In essence, we located everything that pointed to the extents containing garbage and edited it so that it no longer pointed to the garbage.
Overall, the bank was shut down for 6 days and lost 20% of its market cap.
All that for not following the procedure.
Somebody wrote that procedure with a clear head, weighing all the pros, cons and possibilities. The time to ask "Why do we need to make a backup of the backup before we can use the backup?" was when that procedure was written, not when the actual disaster happened...
Indeed. In an emergency, don't stop thinking, but don't overthink either: the people who had all the time in the world to design the recovery process probably thought of things you haven't, and your best bet is not to optimize but to do it by the book. If you get 'creative', you may save a couple of hours or lose many days. The possible downsides usually aren't worth it.
Also: TEST YOUR BACKUPS.
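For what it's worth, a restore test doesn't have to be elaborate. A minimal sketch in Python (the paths and the "restore_tool" command are hypothetical placeholders, not a real utility): verify the artifact's checksum, then actually restore it somewhere disposable and check the result. If something like this hasn't run successfully recently, you don't really have a backup.

    import hashlib
    import subprocess
    import tempfile
    from pathlib import Path

    BACKUP = Path("/backups/latest.dump")      # hypothetical backup artifact
    EXPECTED = Path("/backups/latest.sha256")  # checksum recorded at backup time

    def restore_and_verify() -> bool:
        # 1. The artifact must still match the checksum written when it was taken.
        digest = hashlib.sha256(BACKUP.read_bytes()).hexdigest()
        if digest != EXPECTED.read_text().strip():
            return False
        # 2. Restore it into a throwaway directory and check the exit status;
        #    "restore_tool" stands in for whatever your database vendor ships.
        with tempfile.TemporaryDirectory() as scratch:
            result = subprocess.run(
                ["restore_tool", "--input", str(BACKUP), "--target", scratch],
                capture_output=True,
            )
            return result.returncode == 0

    if __name__ == "__main__":
        print("backup restorable:", restore_and_verify())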
I actually applaud the author for this post: it's clearly something that is easy to find fault with, and yet he has enough sense of responsibility to publish it anyway, knowing that a lot of people will piss on him. But it may save a life or two down the line, and that makes this kind of honesty more than worth it.
You must have to really love flying to do it recreationally. The whole thing sounds like a monumental pain in the ass. Way too much can go wrong and lots of little details to keep track of.