Data from the fatal Oct. 29 flight that killed 189 people, and from the prior day's flight of the same jet, raises questions about three factors that seem to have contributed to the crash.
A key instrument reading on Lion Air flight JT610 was faulty even as the pilots taxied out for takeoff. As soon as the Boeing 737 MAX was airborne, the captain’s control column began to shake, warning of a stall.
And from the moment they retracted the wing flaps at about 3,000 feet, the two pilots struggled — in a 10-minute tug of war — against a new anti-stall flight-control system that relentlessly pushed the jet’s nose down 26 times before they lost control.
Though the pilots responded to each nose-down movement by pulling the nose up again, mysteriously they didn’t do what the pilots on the previous day’s flight had done: simply switch off that flight-control system.
That detail is revealed in the data from the so-called “black box” flight recorder (which is actually orange in color) recovered from the fatal flight, along with data from the prior day’s flight of the same jet, presented last Thursday to the Indonesian Parliament by the country’s National Transportation Safety Committee (NTSC).
This data is the major basis for the preliminary crash-investigation report that was made public Wednesday in Indonesia, Tuesday evening in Seattle.
The flight-recorder data is presented as a series of line graphs that give a clear picture of what was going on with the aircraft systems as the plane taxied on the ground, took off and flew for just 11 minutes.
The data points to three factors that seem to have contributed to the disaster:
- A potential design flaw in Boeing’s new anti-stall addition to the MAX’s flight-control system and a lack of communication to airlines about the system.
- The baffling failure of the Lion Air pilots to recognize what was happening and execute a standard procedure to shut off the faulty system.
- And a Lion Air maintenance shortfall that allowed the plane to fly repeatedly without fixing the key sensor that was feeding false information to the flight computer on previous flights.
Anti-stall system triggered
Peter Lemme, a former Boeing flight-controls engineer who is now an avionics and satellite-communications consultant, analyzed the graphs minute by minute.
He said the data shows Boeing’s new system — called MCAS (Maneuvering Characteristics Augmentation System) — “was triggered persistently” as soon as the wing flaps retracted.
The data confirms that a sensor that measures the plane’s angle of attack, the angle between the wings and the air flow, was feeding a faulty reading to the flight computer. The two angle-of-attack sensors on either side of the jet’s nose differed by about 20 degrees in their measurements even during the ground taxi phase, when the plane’s pitch was level. One of those readings was clearly wrong.
On any given flight, the flight computer takes data from only one of the angle-of-attack (AOA) sensors, apparently for simplicity of design. In this case, the computer interpreted the AOA reading as much too high an angle, suggesting an imminent stall that required MCAS to kick in and save the airplane.
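That reliance on a single input can be sketched in a few lines. This is a hypothetical illustration, not Boeing’s actual logic; the threshold value and function names are invented for the example:

```python
# Hypothetical sketch, not Boeing's code: a flight computer that trusts a
# single angle-of-attack (AOA) sensor can be misled by one bad input.
# The threshold below is an illustrative assumption.

STALL_AOA_THRESHOLD_DEG = 12.0  # illustrative stall-warning angle

def mcas_commands_nose_down(selected_aoa_deg: float, flaps_retracted: bool) -> bool:
    """Command nose-down trim if the single selected AOA reading looks too high."""
    return flaps_retracted and selected_aoa_deg > STALL_AOA_THRESHOLD_DEG

true_aoa = 3.0       # roughly level flight
sensor_bias = 20.0   # the roughly 20-degree disagreement seen in the flight data
faulty_reading = true_aoa + sensor_bias

print(mcas_commands_nose_down(faulty_reading, flaps_retracted=True))  # True: spurious activation
print(mcas_commands_nose_down(true_aoa, flaps_retracted=True))        # False
```

Because the computer never consults the second sensor, one biased reading is indistinguishable, to it, from a genuine approach to stall.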
When the MCAS system pushed the nose down, the captain repeatedly pulled it back up, probably by using thumb switches on the control column. But each time, the MCAS system, as designed, kicked in to swivel the horizontal tail and push the nose back down again.
The data shows that after this cycle repeated 21 times, the captain ceded control to the first officer and MCAS then pushed the nose down twice more, this time without a pilot response.
After a few more cycles of this struggle, with the horizontal tail now close to the limit of its movement, the captain resumed control and pulled back on the control column with high force.
It was too late. The plane dived into the sea at more than 500 miles per hour.
Previous crew handled similar situation
Remarkably, the corresponding black-box-data charts from the same plane’s flight the previous day show that the pilots on that earlier flight encountered more or less exactly the same situation.
Again the AOA sensors were out of sync from the start. Again, the captain’s control column began shaking, a stall warning, at the moment of takeoff. Again, MCAS kicked in to push the nose down as soon as the flaps retracted.
Initially that crew reacted like the pilots of JT610, but after a dozen cycles of the nose pitching down and being pulled back up, they turned off MCAS using two standard cutoff switches on the control pedestal “within minutes of experiencing the automatic nose down” movements, according to the NTSC preliminary investigation report.
There were no further uncommanded nose-down movements. For the rest of the flight, they controlled the jet’s pitch manually and everything was normal. The jet continued to its destination and landed safely.
Because the cockpit voice recorder has not yet been recovered from the sea bed, it’s a mystery why the JT610 pilots didn’t recognize that it was the uncommanded horizontal tail movements pushing the nose down.
Beside their seats, a large wheel called the stabilizer trim wheel, which rotates as the horizontal tail swivels, would have been spinning fast and noisily. Such an uncommanded movement, which could be triggered by other faults besides MCAS, is called a “runaway stabilizer,” and pilots are trained to deal with it in a short, straightforward procedure that’s in the flight manual. Flicking two cutoff switches stops the movement completely.
Somehow, the pilots ignored the spinning stabilizer wheel, perhaps distracted by the shaking of the control column — called a “stick shaker” — and the warning lights on their display which would have indicated disagreement between the AOA sensors and consequent faults in the readings of airspeed and altitude.
The NTSC preliminary report confirms that, shortly after takeoff, the pilots experienced issues with altitude and airspeed data.
Still, their failure to shut off the automated tail movements is baffling.
“No one would expect a pilot to sit there and play tag with the system 25 times” before the system won out, said Lemme. “This airplane should not have crashed. There are human factors involved.”
Boeing design flaw?
However, even if the flight crew is found partly culpable, the sequence of this tragedy also points to a potential design flaw in Boeing’s MCAS system.
The sequence was triggered by a single faulty AOA sensor. A so-called “single point of failure” that could bring down an airplane is anathema in aviation safety protocols.
Lemme, who designed flight controls at Boeing, said that although the AOA malfunction is a single point of failure of the equipment — something airplanes are rigorously designed to avoid — in the safety categories used for certification it represents a “hazardous” failure, “not a single point catastrophic failure.”
The difference lies in whether the pilots have at their disposal a straightforward way out of the danger. For example, if one engine fails on an airplane, trained pilots know exactly what to do to divert and land safely. If they don’t do it, of course the engine failure will bring down the plane. But the proper pilot reaction is an expected part of the safety system.
Lemme said that, in adding MCAS to the MAX, the Boeing system design engineers must have “made the judgment that a malfunction of the AOA sensor would be a ‘hazardous’ failure mode, not catastrophic, because the pilots can throw the cutoff switches.”
In aviation systems analysis for certification purposes, a hazardous failure must have a probability of no more than one in 10 million. A catastrophic failure must have a probability of less than one in a billion, which means it should never occur in the life of an airplane.
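In rough terms, the gap between those two categories is a factor of 100. These certification thresholds are conventionally expressed per flight hour; the airframe-life figure in this back-of-the-envelope sketch is an illustrative assumption:

```python
# Illustrative arithmetic for the certification categories described above.
# The probabilities are per flight hour; the airframe-life figure is an
# assumed round number, not a specific aircraft's service history.

hazardous_max = 1e-7     # "no more than one in 10 million"
catastrophic_max = 1e-9  # "less than one in a billion"

airframe_life_hours = 1e5  # on the order of 100,000 flight hours for one jet

# Expected occurrences over one airframe's life at the catastrophic limit:
expected = catastrophic_max * airframe_life_hours
print(expected)  # about 0.0001 -> effectively never in the life of an airplane
```

At the hazardous limit, the same arithmetic gives about 0.01 expected occurrences per airframe life, which is why a hazardous failure is tolerable only when pilots have a reliable way to recover from it.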
However, aside from the system design, Boeing must also answer questions about how much information it gave to pilots about the new system for which they are assumed to provide a safety backstop.
Capt. Dennis Tajer, chairman of the communications committee of the Allied Pilots Association (APA), the union representing American Airlines pilots, said that airline pilots “proudly stand as one of the layers of safety system success,” but he’s troubled that there was nothing in the flight manual about the MCAS system.
“We are part of the safety system, yes. But you haven’t provided knowledge of the aircraft system,” Tajer said. “Boeing is counting on the pilots as a second line of safety. But to not inform them is to undermine your own philosophy.”
He contrasted the malfunction of MCAS on the Lion Air flight and the lack of knowledge about the system before the accident to what happens when an engine fails in flight.
“I have an entire engine section in my manual. I know all about the system,” Tajer said. “We have to have the information.”
He said that following the accident and the FAA airworthiness directive, “every 737 pilot in the world is now aware this system is out there.” But the crew of JT610 lacked full information.
A software fix
Lemme said the Lion Air crash will inevitably lead to a re-evaluation of the MCAS system design.
In his view, it wasn’t a case of Boeing’s design engineers ignoring the consequences of a single sensor failure. “It’s a case of overvaluing the pilots’ response.”
“I’m sure the systems designers that approved this assumed the pilot would hit the cutout switches and move on,” Lemme added.
With hindsight, he said, when a calm assessment is done by engineers, they’ll probably conclude that a single input shouldn’t be allowed to trigger the system.
He said MCAS is designed to kick in only in extreme circumstances that an airliner should basically never face: something like a high-bank, high-stress turn, experiencing many times the ordinary force of gravity and approaching stall.
It should only engage when the sensors are certain that’s the situation. “You need a second input to make that judgment,” Lemme said. Some logic could also be inserted to consider the reliability of the AOA readings when the plane is still on the ground.
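A cross-check of that kind could look something like the following. This is a minimal sketch of the idea Lemme describes, with invented names and an assumed disagreement tolerance, not a description of any actual fix:

```python
# Hypothetical sketch of the cross-check Lemme describes: require two agreeing
# AOA inputs, and a sanity check on the ground, before letting the anti-stall
# system act. The 5-degree tolerance is an assumption for illustration.

MAX_AOA_DISAGREEMENT_DEG = 5.0

def mcas_allowed(left_aoa_deg: float, right_aoa_deg: float, on_ground: bool) -> bool:
    if on_ground:
        # A high AOA reading during taxi indicates a sensor fault, not a stall.
        return False
    if abs(left_aoa_deg - right_aoa_deg) > MAX_AOA_DISAGREEMENT_DEG:
        # Sensors disagree: distrust both and leave pitch control to the pilots.
        return False
    return True

print(mcas_allowed(23.0, 3.0, on_ground=False))  # False: the ~20-degree split seen on JT610
print(mcas_allowed(4.0, 3.5, on_ground=False))   # True: sensors agree
```

Under this logic, the 20-degree disagreement recorded during the taxi phase would have inhibited the system before the jet ever left the ground.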
Such a fix is relatively easy to install, since it will involve only software changes, he said.
Boeing in a statement said it is “taking every measure to fully understand all aspects of this accident.”
“We will analyze any additional information as it becomes available,” Boeing said.
Maintenance shortfalls
A third area of intense scrutiny as a result of the flight data is Lion Air’s maintenance procedures.
The preliminary NTSC report states that the maintenance logs for the accident aircraft recorded problems related to airspeed and altitude on each of the four flights that occurred over the three days prior to Flight 610.
The logs indicate that various maintenance procedures were performed, but issues related to airspeed and altitude continued on each successive flight. The logs indicate that, among other procedures, on Oct. 27, two days prior to the accident flight, one of the airplane’s AOA sensors was replaced.
On Oct. 28, the flight immediately prior to Flight JT610, the pilot in command and the maintenance engineer discussed the maintenance that had been performed on the aircraft. The engineer informed the pilot that the AOA sensor had been replaced and tested.
However, the issue clearly wasn’t fixed. As noted above, the same problems recurred during that flight. The report also states that, after landing, the pilot on this prior flight reported some of the issues he had experienced both in the aircraft maintenance log and to engineering.
Lion Air has a very poor safety record and has been accused of skimping on maintenance to cut costs.
Lemme said that in an aviation safety analysis, timely maintenance to fix faults is required to reduce a crew’s exposure.
“This plane flew repeatedly with faults that should have been repaired,” Lemme said. “That increased the exposure of the faults to more flight crews, until it found a flight crew that wasn’t able to handle the situation.”