Friday, March 15, 2013

Vent It Out Friday: Automation Damnation

As an avid watcher of the cable television show Air Crash Investigation, I find it hard to believe that the majority of crashes happened because of simple human error—specifically, a conflict between a man and a piece of technology (one that was supposed to keep him safe) which he didn’t understand. Indeed, the aviation industry has continuously learned from these mistakes and has repeatedly amended pilot training and aircraft technology to account for somewhat faulty “human logic.” More than that, there’s a serious lesson here for automakers as well, especially those who hope to make cars safer by taking the controls away from the driver: don’t do it.

The chilling transcripts of the 2009 crash of Air France 447 make a credible case that one of the two junior officers in charge, believing he couldn’t crash the plane, crashed the Airbus A330 because he misunderstood the complex systems designed to protect it.

Junior officer Pierre-Cédric Bonin flew the doomed flight into the middle of an intense thunderstorm when, suddenly, the pitot tubes on the exterior of the plane, which measure airspeed, iced over. When this happens, the aircraft automatically switches from a more protective autopilot mode to one that puts control in the hands of the pilot.

Other than losing a sensor, the plane was completely capable of flying. What caused the crash wasn’t the failure of the sensor, but Bonin’s reaction to it. He pulled back on the side stick, which caused the plane to climb. A stall warning sounded and never stopped, but the pilots didn’t react to it.

Assuming they were in a mode that made it nearly impossible to crash the plane, the pilots reacted to the loss of sensor data not irrationally, but counter-intuitively. They tried to climb when they should have pointed the nose down to regain airspeed. In the end, they ignored the fact that they were falling towards the ocean at high speed.

This crash once again raises the disturbing possibility that the aviation industry may long be plagued by a subtle menace, one that ironically springs from the never-ending quest to make flying safer: over-reliance on automation.

While the airplane’s avionics track crucial parameters such as location, speed and heading, the human pilots can pay attention to something else. But when trouble suddenly springs up and the computer decides it can no longer cope, pilots far from land might find themselves with a very incomplete picture of the situation. They’ll wonder: What’s going on? Which instruments are reliable and which cannot be trusted? What’s the most pressing threat?

It’s the same story with modern automobiles, where automakers have begun taking away basic controls from the driver and, in the end, his situational awareness as well.

The best example of this right now is adaptive cruise control, which uses radar or laser sensors to let the driver “follow” other cars without adjusting throttle or brake inputs. Basically, the driver sets their desired speed and pulls behind another vehicle. If they select 100 km/h and pull behind a vehicle traveling below 100 km/h, the car will slow down, matching the lead car’s speed while maintaining the proper following distance.
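The basic decision the system makes can be sketched in a few lines. This is a hypothetical, heavily simplified illustration of the follow-or-resume logic described above (the function name and parameters are my own; real systems are far more sophisticated and safety-certified):

```python
def target_speed(set_speed, lead_speed, gap, safe_gap):
    """Return the speed (km/h) the car should hold.

    set_speed  -- driver-selected cruise speed
    lead_speed -- detected lead vehicle's speed, or None if the
                  sensor sees no car ahead (e.g. on a bend)
    gap        -- current distance to the lead vehicle (m)
    safe_gap   -- following distance to maintain (m)
    """
    if lead_speed is None:
        # No car detected: resume the driver-set speed.
        # This is exactly the step that surprises drivers on curves,
        # when the straight-ahead beam loses the lead car.
        return set_speed
    if gap < safe_gap:
        # Too close: drop below the lead car's speed to open the gap.
        return min(set_speed, lead_speed - 5)
    # Otherwise follow the lead car, never exceeding the set speed.
    return min(set_speed, lead_speed)
```

Note the first branch: the moment the sensor reports “no car,” the system accelerates back toward the set speed, which is the behavior drivers misread as a malfunction.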

In practice, it works quite well the majority of the time. Unfortunately, because the beams shoot straight ahead in most systems, they tend to read a bend as “no car” and leap forward. This leads people to think their adaptive cruise control is broken, and to accelerate dangerously (usually around a curve). This even happened to Apple co-founder Steve Wozniak, who believed his Toyota Prius was faulty, only to admit in the end that he didn’t know how the cruise control worked.

This raises the danger of mixing someone so technologically unaware with an advanced system he doesn’t fully understand.

I think it’s about time car companies took a step back and stopped taking basic information and control away from the driver. Though an automated system works extremely well when it’s fully in control, when it loses a key piece of information and requires input from a human—just as on the tragic Air France flight—things can go terribly wrong.

I’m not anti-technology. I think technology can enhance and improve safety. Google’s driverless cars and similar technologies are designed primarily as safety equipment. Great. But instead of worrying about whether we’ve designed an autonomous system smart enough to drive without crashing when engaged, we need to worry about whether we’ve designed an autonomous system smart enough not to make us dumb when it disengages.

Informed technology and the uninformed user can be a deadly combination.
