
Yes, I've seen the graph. They had minutes, not seconds.


During which they thought each time "it's OK now." That's not what they were trained for: a device that turns itself back on against them, again and again.


I beg to differ. If I had an intermittent failure that would nose the airplane down, you bet I'd turn it off. I wouldn't want it to happen when I was close to the ground. I would have shut it down the second time it happened.

It's like if an engine is on fire, and I shut down the engine, and the fire goes out. There's no way I'm going to think "It's OK now, I can restart the engine." No way in hell.


I think you are still observing this with the benefit of hindsight, already knowing what killed 350 people.

However:

https://news.ycombinator.com/item?id=19420964

"pilots and aviation experts say that what happened on the Lion Air flight doesn’t look like a standard stabilizer runaway, because that is defined as continuous uncommanded movement of the tail."

"On the accident flight, the tail movement wasn’t continuous; the pilots were able to counter the nose-down movement multiple times."

"In addition, the MCAS altered the control column response to the stabilizer movement. Pulling back on the column normally interrupts any stabilizer nose-down movement, but with MCAS operating that control column function was disabled."

And do you really agree with a simplified interpretation like this one:

https://news.ycombinator.com/item?id=19424761

"two pilots fell out of the sky to their deaths and it simply never occurred to them to try pulling back hard."

I still believe that the way the airplane behaved in these two cases was exactly the opposite of what the pilots had been trained for, both in their flights and in the simulators; that the behavior was counterintuitive and unexpected to them; and that this is the only reason they weren't able to save themselves.

Good UI is not trivially obvious. In certain use cases it really makes the difference between life and death. And thinking about something from the safety of your armchair is very different from having a plane that is doing exactly what you don't expect, and then you die: the plane behaving in the very way you were never trained for in the simulators.

Finally, I believe that all the issues reported here

https://www.seattletimes.com/business/boeing-aerospace/faile...

contributed to the failure mode that turned out to be deadly:

"The safety analysis:

- Understated the power of the new flight control system, which was designed to swivel the horizontal tail to push the nose of the plane down to avert a stall. When the planes later entered service, MCAS was capable of moving the tail more than four times farther than was stated in the initial safety analysis document.

- Failed to account for how the system could reset itself each time a pilot responded, thereby missing the potential impact of the system repeatedly pushing the airplane’s nose downward.

- Assessed a failure of the system as one level below “catastrophic.” But even that “hazardous” danger level should have precluded activation of the system based on input from a single sensor — and yet that’s how it was designed."

Interestingly and tangentially, software people in particular like to blame the user. I've worked in many teams where most people almost automatically dismissed any user input. But the users are, in fact, typically right, in the sense that they often have real-world problems that are simply not recognized by the designers/developers, who assume the user has infinite knowledge and infinite time.



