Aircraft Accidents and Penetrating the Cockpit

The first recorded United States hijacking occurred on May 15, 1928; it was perpetrated with a ball peen hammer.  In the skyjackings of the 1960s and 1970s, the hijacker(s) gained access to the control cabin; takeovers were executed with a wide range of weapons, e.g. knives, guns or ‘bombs’ in a shoebox.  The last US hijackings, on September 11, 2001, succeeded because coordinated terrorist teams penetrated the cockpit.

It appears that may no longer be necessary; at least, not physically.  The cockpit can be breached without lifting a finger.

Technology has not been asleep over the last two decades.  The nearly complete transition of most airliners, certainly most aircraft, from analog to digital technology has left a vacuum where physical control used to be.  In a recent issue of Motherboard, Joseph Cox examined the threats in an article titled: US Government Probes Airplane Vulnerabilities, Says Airline Hack Is ‘Only a Matter of Time’.  Ironically, even this June 6, 2018 article may already be out of date, technology-wise.

Skyjackings through the 1970s were based on controlling the pilots and thus the aircraft; the hijacker(s) wanted two things: passage to the location of their choice and their ten minutes of fame … or infamy.  The hijacker was not a pilot; he did not know how to fly the aircraft, so he depended on keeping at least one pilot alive to arrive safely; that was his leverage.  The cockpit of many airliners held a three-person crew; the aircraft were controlled by analog systems, manipulated by toggle switches, wheels, directional switches and handles.  Not only did the hijacker control the three crewmembers, but air traffic controllers and law enforcement were kept at bay, powerless to interfere with the hijacker’s demands.  The power: controlling every aspect of the hijack with the weapon of choice.

Then in April 1994, FedEx Flight 705 changed the rules forever when Auburn Calloway attempted to wrest control of a DC-10 from its crew.  The coward Calloway, a FedEx pilot himself, did not need the flight crew to operate the aircraft to its suicidal conclusion, because he would fly it.  He endeavored to kill the crew in flight and fly the aircraft into the Memphis hub.  Through a series of unexpected, albeit fortunate, circumstances, and through the flight crew’s tenacity, bravery and will to survive while saving others, Calloway’s plan fell apart.  However, it was the first time that both suicide and the hijacker’s own flying skills were central to a hijacking’s purpose.

FedEx 705’s lessons should have become the case study for every transportation and law enforcement agency; the world had changed; the rules had changed.  It is Monday-morning quarterbacking to suggest it now, but other than in the Tom Clancy novel Debt of Honor, no one seemed to give the FedEx 705 scenario any validity.  It was a cargo plane, the thinking went; that can’t happen to a passenger airliner, where the checks are far superior to overnight cargo operations.

On nine-eleven, the four hijackings proved again that the flight crews were dispensable to the terrorists after serving only one purpose: get the aircraft in the air.  The facts of the hijackings appeared random: more than one airline targeted, more than one aircraft type attacked and more than one airport of origin.  These unknowns obscured the intent.  The hijackers were trained to fly, assuring each airliner would hit its target.  The one hurdle, as in all prior hijackings, was still gaining access to the cockpit.

Shanksville, PA, Ground Zero, the Pentagon and even Fresh Kills, Staten Island, are vivid reminders of the US’s vulnerability that day.  Nearly seventeen years have passed since that infamous Tuesday morning, and already the public has largely forgotten or downplayed the events of nine-eleven.  One World Trade Center has replaced the Twin Towers and the Pentagon’s western wall has been repaired as if nothing happened.  And perhaps that is the problem; we have erased the scars and, as a result, the shock and pain.  Now there are those who aim to dismantle the very fixes put in place as a result of nine-eleven.  The Department of Homeland Security (DHS) is constantly under fire; the Transportation Security Administration (TSA) has always been a favorite target; and current political candidates suggest abolishing US Immigration and Customs Enforcement (ICE).  Just when this country must prepare for another attack, the populace is concerned only with inconvenience and political correctness.

How easy would it be to take over an airliner today?  According to Mister Cox’s article, easier than one would imagine.  The DHS and other US government agencies are already investigating the ease with which an airliner’s controls could be exploited by a cyberattack.  Demonstrations have already shown that an unmanned aerial vehicle (UAV), aka a drone, can be hijacked from halfway around the world.  This does not require firsthand manipulation of flight or engine controls.  Instead, it is simply software, programming; that is the new knife, gun or ‘bomb’ in a shoebox.

Most airliners built since the 1980s rely on digital technology, many utilizing ‘fly-by-wire’ controls.  This technology allows the manufacturer to decrease aircraft weight by eliminating thousands of pounds of cables, pulleys and quadrants; it relieves mechanics of troubleshooting and many return-to-service tests, saving money and time.  The onboard computers convert pilot input into digital signals that command the movement of flight controls, engine throttles and landing gear.  Wires have replaced mechanical devices while computers act on behalf of the pilot.  On autopilot, the aircraft operates independently; throttle movements, course corrections and altitude adjustments are made without pilot input, indeed without pilot awareness.
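To illustrate the concept only (a hypothetical sketch, not any manufacturer’s actual flight control law), fly-by-wire can be thought of as software sitting between the pilot’s input and the actuator: it digitizes the command, applies limits the pilot never sees and sends the result to the control surface.  The gain schedule and limits below are invented for illustration.

```python
# Hypothetical, simplified fly-by-wire sketch -- illustrative only.
# Real flight control laws are far more complex and run on certified hardware.

def clamp(value, low, high):
    """Keep a command within the envelope the software allows."""
    return max(low, min(high, value))

def elevator_command(column_position, airspeed_kts):
    """Convert pilot control-column input (-1.0 to 1.0) into an
    elevator deflection in degrees, with a software-imposed limit."""
    max_deflection = 25.0                 # assumed structural limit (deg)
    # Assumed gain schedule: less deflection authority at higher airspeed
    gain = max_deflection * (250.0 / max(airspeed_kts, 250.0))
    return clamp(column_position * gain, -max_deflection, max_deflection)

# The pilot pulls the column halfway back at 300 knots...
print(elevator_command(0.5, 300))   # ...the computer decides what the surface actually does
```

The point of the sketch is the middle layer; whoever controls that software controls the surface.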

Who programs and repairs these aircraft computers?  Any one of hundreds, if not thousands, of avionics repair stations worldwide.  Are there controls in place to police the avionics technicians repairing, and possibly programming, these computers?  It depends.  In the US?  Most likely; but other countries do not receive the security oversight seen in the US.

A ‘sleeper’ program, a virus that could upset an airliner’s control, could be uploaded to computers installed in airliners all over the world with no forewarning; it could ‘sleep’ for days, weeks, months or even years.  An upset could include an aerodynamic stall during takeoff where recovery is impossible.  The program could shut down all engines four hours’ flying time from land, over the deepest underwater cavern.  With plans for pilotless airliners, what chance would the passengers have?  Would the flight data recorder, if found, reveal any data?  How many airliners around the world could be upset in one day, without the hijacker (singular) ever entering the cockpits or even the airliners?

And what of UAVs?  Could they be made to fly into an airliner’s engines on the approach into a major airport?  Would the cockpit windscreen survive a direct hit from a drone?

Could any of these scenarios happen?  Perhaps.  At one time, fans of Tom Clancy were thrilled by the last chapters of Debt of Honor, certain the scenario would never come true yet unsettled that someone had thought of it.  The truth is: terrorists no longer try to land the plane and are not concerned with their own safety.

The true terror is the unknown and these scenarios provide many unknowns.  The US has been complacent about security these last seventeen years.  It is good that agencies like DHS have been working to prevent cyberattacks, but all agencies need to engage.  The US needs to remember what it was like that Tuesday in September, the day the world changed forever.

Aircraft Accidents and Routine Chaos

Complacency kills.  The line between flawless and chaotic is often so fine it is invisible.  A routine procedure, accomplished thousands of times without error, can quickly descend into irreversible turmoil, a place where no one can undo the disastrous results.  There are hours, days and weeks for regret, along with an eternity of ‘what-ifs’.

A DC-10 needed to be moved from one gate to another section of the ramp, only fifteen gates away.  A normal routine that involved three mechanics: one mechanic (the Brake Rider) rode the brakes in the cockpit, monitoring the auxiliary power unit (APU), communicating with the tower and ready to apply the brakes should the towbar snap and leave the aircraft without control; a second mechanic (the Driver) drove the pushback tug used to push and pull the aircraft; the third (the Communicator) provided communication via headset, through a long umbilical mike cord running from the Communicator’s headset to the aircraft.

This is the pivotal moment, as the tug connects to the aircraft and everyone goes to their assigned stations; the moment when everyone can set aside complacency and consider what could go wrong.  However, this is so routine, it doesn’t even bear planning.  The Driver starts the tug’s engine; the Brake Rider calls the ground tower; and the Communicator, upon being given the go-ahead from the Brake Rider, signals the Driver to push.

The aircraft was pushed back.  With the DC-10 on the taxi line, the Driver moved from one end of the tug to the other to pull the DC-10 to the other gate.  The Communicator stood on a platform on the aft end of the tug, the cord still running up to the aircraft.  Meanwhile, jets taxiing on a nearby taxiway, the DC-10’s APU running at high speed and the diesel tug chugging loudly all added generously to the noise that would drown out any shouts between the Driver and the Communicator.  The Driver’s view to the rear was reduced to a small window of what could be seen looking back … should he look back at all.  Remember, it is routine.

As the Driver accelerated to an excessive speed, the DC-10 followed obediently; the pushback crew wanted to expedite the job and get back to the air-conditioned office.  With the tug and aircraft making good speed, the Communicator’s headset cord became tangled on the nose gear and tightened in a slight turn; the cord snapped taut and pulled the Communicator off the back of the tug.  Landing hard on his side, he rolled back towards the main gear.  He tried to shout, but the Driver could not hear; the cord was damaged, so the Brake Rider could not hear.  Unable to get up from the ground, the Communicator could not avoid going under the wheels of the DC-10’s left main gear.  More than two hundred and forty thousand pounds of aircraft – 120 tons – ran over both legs.

For the Communicator, the world stopped; he passed out from shock.  The Driver, whose attention was forward, was oblivious; the Brake Rider stood by and enjoyed the ride; the radio was, as usual, silent.  The headset bounced ownerless behind the tug.

As the tug continued on, the warning came from the side: an observer had witnessed the whole episode.  The observer flagged down the tug, arms waving, running at a good clip.  The tug stopped; the Driver’s world stopped.

As people moved to the Communicator’s motionless figure lying on the tarmac, calls were made to the tower to stop traffic on that taxiway and to Maintenance Control for instructions and emergency services; silent prayers were offered to God for the fallen Communicator.  The world for all involved started to ooze slowly forward.

The Communicator lay unconscious in his own blood.  The basic weight of the aircraft was two hundred and forty thousand pounds; this, however, did not include the thousands of pounds of ramp fuel and several thousand pounds of ballast on the upper deck.  The left main gear’s outboard wheels had crushed the Communicator’s left leg from the thigh down and the right leg from the knee down.  Thankfully, he was unconscious; his left thigh bone had been obliterated, crushed to quarter-sized pieces by the left gear’s outboard wheels.  The skin, muscle, ligaments and pant legs were shredded and indistinguishable from each other.  The femoral artery was severed and, for some reason, was pinched, not flowing freely.  The left foot and lower shin, also caught under the wheels, were crushed nearly flat by the left gear’s inboard tires.  The right leg below the knee disappeared under the tires; it emerged just as destroyed.

An instructor teaching a class on the field at the time rushed over and applied a tourniquet to prevent the Communicator from bleeding out.  Time picked up, gradually caught up.  An ambulance arrived on the field; the Communicator was moved to the hospital and into many surgeries … should he survive.

First thing, readers: these were not stupid mechanics; no one got what they deserved by being careless.  They were doing what they always did, perhaps in the way they always did it.  It was a routine.  It was everyday stuff.  It was, however, complacency.

They were no more careless than the private pilot who, in his last moments on Earth, after completing the ten-thousandth preflight of his single-engine aircraft, missed the red ‘REMOVE BEFORE FLIGHT’ elevator lockout flags before taking off and hitting the forest’s trees at the end of the runway without ever gaining twenty feet of altitude.

They were no more careless than the mechanic who forgot to write up the gear pins in the maintenance log and released the aircraft for flight, only for the crew to make an air turnback because they couldn’t retract all the gear.

They were no more careless than the Alaskan bush pilot who carried several hundred pounds of freight in his Piper without strapping it down; the plane passed the runway’s point of no return, only to tear a path through the treetops at the runway’s end when the freight shifted aft, pitching the nose almost straight up before impact.  The pilot’s body was recovered in the spring when the snows melted.

The rest of the airline’s mechanics, those who worked with the DC-10 pushback crew, were all affected by the incident.  People pushed airplanes back a little slower, for a little while at least.  The Driver was forever affected, refusing to get behind the wheel of a tug for many years.  Management buzzed; front-line supervisors were like deer in the headlights, while upper management, wanting rolling heads for what happened on their watch, assigned blame.

The Communicator survived.  He was a man with a work ethic, one who was not about to retire or be forced into languishing behind a desk.  Although he would not walk again, if he was to keep working, he would do what he could from a wheelchair.  He eventually – after rehab – worked putting engines back together in the Engine Shop.  It was not what he wanted, but his attitude was that a day above ground was better than one below, which he continues to enjoy to this day.

But complacency kills.  The victim may not be a person like the Communicator, who suffered the consequences of letting his own guard down.  It may be a passenger on a flight gone bad; a pilot who rushed the preflight; a mechanic who was unaware the flaps were being extended; a flight attendant opening the entry door without disarming the slide.  Victims may be the result of a baggage loader not questioning the weight and balance load sheet or a gate agent pushed to expedite a flight.  Chaos is just that: chaos; once started, it is difficult to stop.  But a good way to prevent chaos in the first place is to remember: complacency kills.

Aircraft Accidents and the ASAP

Opened in July 1936, New York City’s Triboro Bridge was designed to connect three New York City boroughs: Manhattan, Queens and the Bronx.  Originally called, simply, the ‘Triboro Bridge’, it is now named the ‘RFK Triboro Bridge’, or, as today’s New Yorkers affectionately call it, ‘Aaaaaaaahhh! No-o-o-o-o!!’  It was funded by interest-bearing bonds, issued by the Triborough Bridge Authority itself and secured by revenues from future tolls.  The point of my history lesson?  The construction bill for the Bridge has been paid many times over, yet the tolls remain and, in fact, rise on a regular basis.  That is because, as time marched on, bureaucracies were reluctant to change anything that made money.  Politicians and bureaucrats invented ways for accepted practices to remain the same.  It was, essentially, a shell game; a flimflam.

I have been writing about the Federal Aviation Administration (FAA), its compliance philosophy (CP) and the CP’s safety programs.  Last week I wrote about the voluntary disclosure reporting program (VDRP).  Today, the Aviation Safety Action Program (ASAP) concludes this ‘series’; it draws together the elements of how the FAA has assumed a hands-off approach to regulating, relying heavily on Industry to police itself.

The ASAP was introduced as an answer to the Safety Conference of January 1995.  Originally instituted in Advisory Circular (AC) 120-66 in January 1997 (and revised twice, in March 2000 and November 2002), the ASAP brought about common-sense solutions to pressing problems.  For instance, pilots were deviating from their assigned altitudes, overshooting or undershooting them.  According to US Air’s Altitude Awareness Program, several causes of altitude deviations were identified, such as crew distractions, improper altimeter settings or Air Traffic Control (ATC) operational errors.  Common-sense solutions were introduced, e.g. assigning which pilot makes altitude adjustments; pilots verbally challenging each other on new settings; and readbacks to/from ATC.

These were excellent advances in safety; simple, yet smart.  Such ASAP successes were an example of how the program could work with the team cooperation of the certificate holder, the FAA, the workforce and, in some cases, the labor unions.  Sensible results have been implemented successfully in other air carriers’ programs.  FAA-approved ASAPs are for air carriers operating under Title 14 Code of Federal Regulations (CFR) part 121 (Domestic, Flag and Supplemental Operations), part 135 (Commuter and On Demand Operations) and domestic part 145 repair stations.  Other certificate holders can apply but are chosen on a case-by-case basis, to see if the processes can be adapted.  ASAPs are adopted voluntarily by the certificate holder, the FAA and other suitable parties.  The parties are held to a written agreement called a Memorandum of Understanding (MOU) that spells out the ASAP’s purpose, terms, procedures and who the parties are.  The MOU is the contract between all parties; it determines how the gravity of safety issues will be assessed and how the issues will be resolved.

Once adopted, the ASAP generates an Event Review Committee (ERC); this group is responsible for establishing which safety issues qualify for an ASAP investigation.  Types of reports excluded from the ASAP process include: intentional disregard of safety; criminal activity; repeated violations; or reports concerning an employee when NOT acting as an employee, e.g. an arrest while on personal time.

The ASAP report ‘will not be used to initiate or support any company disciplinary action, or as evidence for any purpose in an FAA enforcement action …’

As per AC 120-66B, ‘The ERC will:

  • Review and analyze reports submitted under the ASAP
  • Determine whether such reports qualify for inclusion
  • Identify actual or potential problems from the information contained in the reports, and
  • Propose solutions for those problems.’
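As a purely illustrative sketch (the categories come from the exclusions and ERC duties described above; the function name and data structure are hypothetical, not part of any FAA or carrier system), the ERC’s first pass over a report can be thought of as a simple triage:

```python
# Hypothetical ASAP report triage -- illustrative only, not an FAA or carrier tool.

EXCLUSIONS = {
    "intentional_disregard_of_safety",
    "criminal_activity",
    "repeated_violation",
    "not_acting_as_employee",   # e.g. an arrest while on personal time
}

def erc_triage(report):
    """Return how a hypothetical ERC might disposition an ASAP report.

    `report` is a dict such as:
    {"summary": "altitude deviation on climb-out", "flags": set()}
    """
    # Reports matching an exclusion do not qualify for ASAP handling.
    if report["flags"] & EXCLUSIONS:
        return "excluded from ASAP"
    # Otherwise the ERC reviews it, identifies actual or potential problems
    # and proposes solutions (the human part no code can replace).
    return "accepted: review, identify problems, propose solutions"

print(erc_triage({"summary": "altitude deviation on climb-out", "flags": set()}))
```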

The ASAP policy was a proactive approach to aviation safety; it took the best of both worlds – industry and regulatory – and put them in a room together to brainstorm the biggest problems, develop solutions and implement them.

Like any policy, the intent of the program does not always translate through time.  The FAA of 2018 is not the FAA of 1995; industry’s safety hurdles have changed.  The focus has shifted; in the wake of complacency, proactive has been replaced by reactive.

As mentioned in my previous articles, Aircraft Accidents and the FAA Compliance Philosophy Parts I and II, the FAA’s compliance philosophy has altered the administration’s approach to aviation safety, from a regulatory, proactive mind-frame to a behaviorist, ‘study-of-the-human-condition’ reactive mind-frame.  The new FAA philosophy premiered in 2015.  Twenty years earlier, in 1995, the Aviation Safety Action Program was introduced, when the FAA maintained a regulator-to-regulated relationship with industry.  The VDRP itself came about in 1992, leading the way for the ASAP in the quest to spark positive changes in aviation.  Between 1992 and 2015, important changes took place.

When the Aviation Safety Action Program premiered, the aviation accident rate was higher than it is today in both general and commercial aviation.  Since then, improvements made in safety promoted a false sense of security; aviation’s ‘safest years’ fed a complacency that said, “We’ve fixed safety; it is time to relax our guard.”

In the 1990s through the early 2000s, airlines were no strangers to mergers and their cultural effects, e.g. FedEx and Flying Tiger; US Air and Piedmont; American and TWA; America West and US Air.  Many operators either went bankrupt or had their routes absorbed by other airlines, e.g. Eastern, Pan Am, Frontier.  These drastic changes in corporate cultures, job actions and job losses contributed to the need for a program like the ASAP.  The clash of merging cultures demanded that programs like the ASAP and VDRP keep the air carriers on the right path.

It is important to add the terrorist attacks of 9/11.  The aviation industry was specifically targeted by the terrorists, which forced re-evaluations of how everything in the industry was done, from repairs to background checks.  Funding previously allocated to oversight by aviation regulators was now shared with Homeland Security and the Transportation Security Administration.

Soon, digital technologies replaced analog technologies, one-for-one.  New advances in composites, non-destructive testing and the use of unmanned aerial vehicles eased into the industry, eventually exploding across the aviation community.  Finally, the introduction of commercial space flight and computerized air traffic control added to the FAA’s responsibility and workload.  The aviation industry changed almost overnight.

Then, after 2015, the FAA changed in two distinct ways: the compliance philosophy, aka the FAA’s culture, was rewritten, and the FAA began a multi-year reorganization.  The core of the compliance philosophy changed: the FAA aviation safety inspector (ASI) became restricted in how they regulated.  No longer regulators, ASIs were now behaviorists.  Their mission?  Understanding bad behavior, not preventing it.  At the ERC, the FAA no longer represented the ‘Word of Authority’.  Instead, the ASI became a referee, a kindly uncle whose authority became a shadow of its former self.

Will programs like the ASAP and voluntary disclosure survive the transition into the new philosophy?  With the FAA’s reorganization, the new environment may not cultivate the programs enough for them to survive.  For instance, the reorganization further blurred the lines between Headquarters and the individual FAA Regions.  After two years, FAA Flight Standards District Offices still don’t know how to find answers or even where to look.  The air carriers’ certificate management offices, caught up in the confusion of the reorganization, are not paying necessary attention to proven programs, e.g. the ASAP.

Just as with the Triboro Bridge tolls, bureaucratic organizations are reluctant to change what should be changed, even in the midst of a complete overhaul.  Perhaps the illusion of not upsetting the balance makes transition smoother.  But for whom?  And for how long before the shell game is revealed for what it is?  A flimflam.

Aircraft Accidents and Voluntary Disclosures

In my youth, when the weather prevented baseball, basketball and bicycles, my friends and I would retire to the garage for a time-consuming board game, e.g. Risk.  We did not have fun-sucking video games back then; we relied on outmaneuvering our opponents at the speed of a dice roll.  Monopoly often made a debut at these times with its hotels and railroads … and rich Uncle Pennybags.  And, of course, the coveted ‘Get-Out-of-Jail-Free’ card, found in both the Community Chest and Chance decks, allowed its possessor to escape prison time and return to the action of the game.

The Voluntary Disclosure Reporting Program (VDRP), introduced in May 1998, was designed to encourage air carriers, Parts Approval Holders and other certificate holders to disclose safety or security information voluntarily to the Federal Aviation Administration (FAA) without fear of the information being made public or used against them.  The VDRP was a good idea; it allowed the FAA to promote safety by removing the enforcement consequence.  Unfortunately, as with many good ideas, the privilege gets abused; it is often seen as a ‘Get-Out-of-Jail-Free’ card.

When I worked on airport ramps, ramp workers feared that if you hit the aircraft, you were fired; I saw it at the regional airports as well as on the airline ramps.  But the certificate holders soon wised up to the fact that ‘accidents’ can happen; even being extra careful, new hires and seasoned folks alike will dent a fuselage or run into a flight control … stuff happens.  The new mantra became: tell someone about the damage and no one gets fired.  The operator would rather find out before launch than have a problem later, especially with a pressurized airframe.

In the same vein, the FAA wrote the VDRP policy.  Originally approved for air carriers in January 1992 and expanded in May 1998, the purpose of the VDRP was to give industry its own approved program stressing the importance of disclosing safety failures.  Per FAA Order 8000.89, the VDRP would drive the discloser to implement new programs to reduce the opportunity for recurrence without, in turn, being violated and/or the event being made public.  The FAA’s rules for implementing Title 49 United States Code (USC) section 40123 can be found in Title 14 Code of Federal Regulations (CFR) part 193.

The benefit of Voluntary Disclosure Reporting as opposed to the Compliance Philosophy – and this is important – is the VDRP’s requirement to fix the problem.  The FAA has had the means to tie the Compliance Philosophy to enforcement actions in a symbiotic relationship.  This would give the Compliance Philosophy a lasting effect, but, like a poor marksman, the FAA continues to miss the target.

When enforcement continues to be a reaction to safety violations, the lesson to the certificate holder (CH) is lost; they don’t get it; they don’t learn nuthin’.  Why?  Because the certificate holder is not held to fixing the problem.  Does the CH get fined or lose certification privileges?  Yes, but after the check is cashed, the lesson is lost.  The CH makes no plans to prevent recurrence of the bad behavior.

The correction is part of the voluntary disclosure report; the comprehensive fix guarantees the certificate holder looks at the broken program and, as part of their penance, comes up with a proactive – not reactive – means to address the problem.  That way, it does not happen again.

The benefits of safety disclosures cannot be expressed enough.  However, in past articles, the point was made that aviation people of all types, from pilots to mechanics to balloonists, outnumber the FAA aviation safety inspector (ASI) workforce overseeing them by over one hundred to one.  The air carrier world is worse, number-wise; the ASI-to-aviation-professional ratio is overwhelming, perhaps many hundreds to one.  The number of safety reports the certificate management office (CMO) ASI has to keep track of; the aircraft maintenance data that must be crunched; the risk analysis information that must be investigated; all of this takes up time the CMO ASIs do not have.  To keep track of the various voluntary disclosure reports could be a job in itself; the CMO relies on the air carrier’s integrity.

For example, on April 1st, a major freight airline (hypothetically) loads a wide-body airliner using specific pancake scales, labeled 1 through 5, to weigh the freight; the airliner departs for its destination: Boston.  After departure, the ramp loadmaster is checking equipment condition and notices that scale number 3 was due for calibration by March 31st.  Per Title 14 CFR part 135.185(b)(2) and part 121.135(b)(21), the air carrier is violating its own weight and balance manual (WBM).  The freight was incorrectly weighed; the weights are inaccurate.

The WBM was written by the air carrier and approved by the FAA.  To clarify, inaccurately weighed freight could result in a loss of control, whether the airliner carries passengers or cargo.  A single-engine feeder aircraft could become a lawn dart, while a helicopter, with its critical center of gravity, might become so much shrapnel spread around the ramp.

Per FAA Order 8000.89, the Initial Notification must include: a description of the violation (scale out of calibration); verification that the noncompliance ceased after discovery (they removed the scale from service); an investigation conducted and a report written within ten working days; and what the comprehensive fix will include – the comprehensive fix must be designed to prevent recurrence … period.  What will be done to assure that the air carrier does not miss the calibration date again?  And in 99.99% of the cases, this is where it ends.  The planets align and the Earth rotates on its axis again.
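As a minimal sketch of the recordkeeping behind this hypothetical example (the scale names, dates and field names are invented for illustration and only summarize the Order’s required elements), the calibration check and the contents of an initial notification might look like this:

```python
# Hypothetical sketch: checking scale calibration and drafting a VDRP
# initial notification. Field names and dates are invented for illustration.
from datetime import date

scales = {
    "scale_1": date(2018, 6, 15),   # calibration due dates (assumed)
    "scale_3": date(2018, 3, 31),
}

def overdue_scales(scales_by_name, today):
    """Return the scales whose calibration due date has passed."""
    return [name for name, due in scales_by_name.items() if due < today]

def initial_notification(violation, ceased, report_due, comprehensive_fix):
    """Bundle the elements FAA Order 8000.89 requires in an initial
    notification (summarized; see the Order for the actual wording)."""
    return {
        "description_of_violation": violation,
        "noncompliance_ceased": ceased,
        "investigation_report_due": report_due,      # ten working days
        "comprehensive_fix": comprehensive_fix,      # must prevent recurrence
    }

today = date(2018, 4, 1)
for name in overdue_scales(scales, today):
    print(initial_notification(
        violation=f"{name} used past its calibration due date (WBM noncompliance)",
        ceased="scale removed from service",
        report_due="within ten working days",
        comprehensive_fix="track calibration due dates; block use of overdue scales",
    ))
```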

There are other types of disclosures an air carrier must report: pilot altitude deviations, repair station audit evaluations, inflight engine shutdowns, flight diversions, incidents/accidents and so on.

The point was made earlier about air carriers having superior numbers compared to the FAA CMO’s ASIs keeping an eye on them.  This promotes an environment within the air carriers to let things slide while the ASIs are preoccupied; to water down the urgency of, e.g., an uncalibrated scale until the FAA arrives on the field … then file a voluntary disclosure report as a ‘Get-Out-of-Jail-Free’ card.  It is how the game is played.

When I worked for the FAA’s Flight Standards Division, I often enrouted; this means I flew with the flight crew and observed firsthand the loading and unloading conducted on each ramp the plane departed from or landed at.  It was a great surveillance tool because the air carrier never saw me coming.  My specialty was cargo, which includes passenger luggage loading; it is my strong point, my history, and I know how to play the game.  Suppose, while conducting surveillance, I came upon scale number 3 (the one that went out of calibration on March 31st), but the date I was on the ramp was April 20th; the air carrier’s VDRP could become a violation issue, especially if the comprehensive fix did not correct the problem.  Attempts to file a voluntary disclosure at that point are futile; the air carrier knew about the problem and ignored the consequences, putting safety in jeopardy.  But it only works if the air carrier is caught in the act.

There is more detail to the VDRP than what is mentioned here; these are the basic points of how the VDRP works, why the VDRP works and why the VDRP needs to be taken seriously.  Furthermore, there is certainly no defense of the behaviorist-psychologist – not FAA-regulator – direction of the FAA’s Compliance Philosophy.  Failures not just to discover safety problems but to correct them end up in real accidents with real people dying.  There are no second rolls of the dice, no ‘Get-Out-of-Jail-Free’ cards, no do-overs.  And there are no excuses for not playing the game … by the rules.

Aircraft Accidents and Lessons Unlearned XIV: American 191

The relationship between Maintenance and Engineering (M&E) can be pictured as two ‘friends’, standing ninety degrees apart with arms crossed over their chests, looking suspiciously at each other over their shoulders.  Unlike the pilot-mechanic relationship, which is akin to the New York Yankees versus Boston Red Sox rivalry, M&E folks are not rivals for the ‘True Airman’ prize.  M&E is a semi-symbiotic bond; each is more dependent on the other’s support than either will admit.  At the same time, theirs is an affiliation whose foundation demands trust and communication.  In this strange allies’ relationship, the tragedy of American Flight 191 was fostered.

American 191 crashed after taking off from Chicago’s O’Hare airport on May 25, 1979.  National Transportation Safety Board (NTSB) Report AAR 79/17 recorded the investigation into the accident.  The root cause of the accident was that American Airlines (AA) combined two approved maintenance procedures in an impractical way: removal and replacement (R&R) of an engine and the R&R of a pylon.

Briefly, McDonnell Douglas (MCD) issued service bulletin (SB) 54-48 in 1975 and SB 54-59 in 1978 for their DC-10s; both SBs called for replacing the two wing engines’ (#1 and #3) pylon spherical bearings, to address cracks found in the aft attach spherical bearings and lubrication problems in the forward attach spherical bearings, which could contribute to seizing.

Both SBs required the engine be removed to access the pylon, a costly undertaking in both manpower and money.  Removing and reinstalling the engine and pylon as one unit would lower maintenance costs as AA complied with the SBs across its DC-10 fleet; the timesaving procedure meant less aircraft downtime and fewer man-hours.  In order to modify the repair procedures, AA would need acceptable data from MCD for the removal and reinstallation of the combined engine/pylon, e.g. the center of gravity (CG), the combined weight (13,480 pounds) and the dimensions from the lifting device to the end of the pylon.
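As a simple illustration of why that CG data mattered (the weights are the article’s figures; the CG stations are invented purely to show the arithmetic, not McDonnell Douglas data), the combined center of gravity is a weighted average of the two components’ CGs and shifts toward the pylon, away from the engine-only CG the crews were used to hoisting:

```python
# Hypothetical worked example: combined CG of an engine/pylon unit.
# Weights are the article's figures; the CG stations (inches) are invented
# purely to show the arithmetic, not McDonnell Douglas data.

engine_weight = 11_600.0      # lb (CF6-6D, per the article)
pylon_weight  = 1_860.0       # lb (per the article)

engine_cg = 100.0             # assumed station of engine CG, inches
pylon_cg  = 160.0             # assumed station of pylon CG, inches

combined_weight = engine_weight + pylon_weight   # ~13,460 lb; the article cites 13,480
combined_cg = (engine_weight * engine_cg + pylon_weight * pylon_cg) / combined_weight

print(f"combined weight: {combined_weight:,.0f} lb")
print(f"combined CG sits {combined_cg - engine_cg:.1f} in aft of the engine-only CG")
```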

MCD strongly advised against the procedure; however, MCD could not refuse the requested data nor prevent AA from performing the modified procedures.  The two main problems with the modified maintenance procedures were: 1 – the engine/pylon’s CG moved well outside the envelope in which normal hoisting is accomplished, and 2 – the engine/pylon combination would be raised from beneath, not lifted (hoisted) from above.

Each of the DC-10’s three CF6-6D engines (11,600 pounds each) has one main purpose: to provide 40,000 pounds of thrust to move the aircraft.  The forces each engine produces include shear, torsional, compression and tensile loads.  The pylon (1,860 pounds) is what the CF6-6D engine mounts to; the pylon translates all loads – the previously mentioned four forces – from the engine into the aircraft, plus the loads, e.g. weight, landing stresses, balance, thrust and reverse thrust, produced during normal operation.  The pylon’s design transfers these forces through its entire structure, which is why the pylon’s integrity must not be compromised.

A CF6-6D engine is normally lowered and raised by four come-along hoists hung off the pylon.  There are four points: left and right front, left and right rear.  With each come-along manned, a fifth person directs the other four to slowly raise or lower the engine as it mates to its mounts; otherwise the 11,600-pound engine’s mass can cause considerable structural damage, e.g. cracks, scoring, blunt damage, bending or transmitted load damage.  The tight clearances between pylon and engine demand that someone direct the four points closely.

By itself, the 1,860-pound pylon is raised similarly, with a director controlling the ascent.  The CG of the engine or the pylon is managed within a small envelope, front to back, the CG movement being limited by the forward and aft come-along hoists.  The AA procedure that led to the accident required a CG movement that exceeded the normal limits.  Furthermore, per accident report AAR 79/17, the engine/pylon combination was raised and lowered from underneath by forklift – not from above.

The danger of this approach came from the lack of control and the resulting damage.  The tight clearances between pylon and wing allowed no room for error; damage was easily done if the engine/pylon was not raised slowly and under complete control.  Any loss of forklift hydraulic pressure would leave the pylon acting like a twenty-foot-long pry bar in its mounts.  The twenty-foot separation between forklift and pylon meant that the forklift’s movements were exaggerated, e.g. a one-inch side movement of the forklift’s back wheels translated into the pylon moving many more inches at the wing.  Furthermore, the forklift driver was blind to what was happening at the pylon’s end; the driver was solely dependent on the Director’s instructions, e.g. move forward, backward, left, right, slide the forks left, slide right, lean the forks back or forward.  Meanwhile the engine/pylon’s CG was well forward of the forklift; the forklift bounced with each movement, making the lift unsteady while repositioning.  The localized forces on the pylon were not translated through its structure; damage resulted in specific local areas, weakening the integrity of the pylon.
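To put a rough number on that exaggeration (a back-of-the-envelope sketch; the wheelbase figure is assumed and only the twenty-foot reach comes from the text), the geometry is a simple lever: a sideways shift at the rear wheels is multiplied by the ratio of the reach to the wheelbase:

```python
# Back-of-the-envelope lever-arm sketch -- the wheelbase is an assumption,
# only the ~20 ft forklift-to-pylon separation comes from the article.

reach_ft = 20.0        # distance from forklift to pylon attach point (article)
wheelbase_ft = 6.0     # assumed forklift wheelbase, pivoting about the front axle

def pylon_movement(rear_wheel_shift_in):
    """Approximate sideways movement at the pylon for a given
    sideways shift of the forklift's rear wheels (small-angle lever)."""
    return rear_wheel_shift_in * (reach_ft / wheelbase_ft)

print(f"{pylon_movement(1.0):.1f} inches at the pylon for a 1-inch shift at the wheels")
```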

The modified procedures should never have gotten this far; the exaggerated movements of the forklift were the equivalent of swinging a 13,480-pound mallet or forcing a 13,480-pound, twenty-foot pry bar against the pylon’s structure.  Any damage incurred was not recognizable.  What would one look for?  How would one identify unexpected damage in a localized area not known for that type of stress cracking?  Non-destructive techniques to locate damage were either not employed or not available to identify the injuries.

This is where communication between Engineering and Maintenance most likely broke down.  While Engineering wrote the procedures, Maintenance was on the scene; problems unforeseen during the planning stages should have been reported back.  Was Engineering part of the ongoing procedural changes; did Maintenance include them?  Was Engineering available on-site during the entire job?  Would the Maintenance group have stopped at the first sign of trouble or pushed on with a ‘can-do’ attitude, employing unsafe compensations?  Did Engineering plan for proper mount-bolt torques while taking into account the forklift’s possible hydraulic creep?  In other words, if the forks drooped, putting weight on the bolt heads, would the torque be correct when the mount bolts were tightened down?

Human Factors issues included the desire to get the job done in good time.  Was there pressure to get the engine/pylon changed out and the aircraft pushed from the hangar quickly?  This issue is affected by communication, or the lack thereof, and by simple things like inconvenient breaks, shift changes or poor shift turnovers.  Were the proper number of personnel assigned to the job?  Was any other work being performed around the wing that intruded on or interfered with the engine/pylon change?

These are issues that usually never came up as part of this or other NTSB accident investigations.  Until 2001, the NTSB did not have an accident investigator with aircraft maintenance experience.  The investigator assigned here was an engineer, unfamiliar with the maintenance environment.  In order to understand a culture, one must either have worked in it or have taken the time to become familiar with it; AA’s maintenance culture was never examined to determine where the failure of communication occurred.

To correct what was done incorrectly, the tangible details must be addressed, e.g. the modified maintenance procedures and the changes made in lifting.  However, to properly prevent an accident from recurring, the investigators must determine how and when the errors got out of hand.  That means examining factors such as: time constraints and their effects on the workforce; M&E assuring clear two-way communication; guaranteeing Quality Control was on hand throughout the altered procedures to assure safety was maintained; and whether decisions were made by consensus of those involved, not by a senior person or from off-site.  Unless the contributing factors and the root causes of the accident are properly identified and addressed, accidents will recur under similar circumstances, especially as Flight 191 is forgotten over time.