Straight off the bat, I want to state that the Germanwings crash of 2015 was a terrible tragedy.
What is worse is that it wasn’t an accident. It was a deliberate murder-suicide, an act made possible only by the reinforced cockpit door designed to prevent would-be hijackers from entering and taking control of the aircraft. That safety feature, introduced across the aviation industry after 9/11, was the reason the rest of the crew were unable to wrest control back from the suicidal Andreas Lubitz, and why 150 people lost their lives.
However, I don’t want to talk about the pros and cons of the door. Many others have done so, especially in the days that followed the shocking revelation of the incident’s cause. Instead, I want to talk about other safety systems in the aircraft, and in possible future mitigations of wayward pilot action, be that intentional or otherwise.
To begin with, I’ll give a bit of insight into the aircraft in question, an Airbus A320-211. Even those with only an incidental awareness of aviation will probably know that there are two large manufacturers of commercial airliners: Boeing and Airbus. Commercial aircraft from both companies now use a technology known as Fly By Wire in their control systems, whereby the pilot’s control inputs don’t directly connect to the control surfaces. Instead, the pilot’s inputs tell the computer how they wish the plane to be flown, and the computer then makes appropriate changes to the control surfaces. Prior to this, the pilot’s controls would have been physically connected (albeit with support from a hydraulic system) to the control surfaces, so any control input was transmitted directly to them, much as it is in a control system like the steering wheel of a car.
The Airbus involved in the incident was a relatively modern (in industry terms) aircraft, and could be compared, broadly speaking, to the Boeing 737. However, an important design philosophy difference emerged between Boeing and Airbus when it came to control over the aircraft in a Fly By Wire environment. Whereas Boeing chose to trust the pilot, allowing them to override the computer with further control input, Airbus chose to impose limits on a pilot’s input authority. The reasoning behind Airbus’ decision was, seemingly, quite rational: far more fatal accidents are caused by pilot error than by mechanical failure. By introducing a system of flight control laws with inbuilt protections (see here for more detail), many of the causes of previous accidents would theoretically no longer be possible: a watchful flight computer sitting between the pilot and the control surfaces would prevent or modify any pilot input that would endanger the aircraft (to avoid a stall, for example).
So how is this related to Germanwings? For all this technical wizardry, it clearly did nothing to prevent the incident.
Because it wasn’t programmed to.
Commercial aircraft, quite rightly, take a long time to develop, test, and certify as safe for operation. What this means is that manufacturers must agree on a set of features and a configuration for all flight safety systems on board an aircraft, and then “set it in stone” for testing purposes, so that the regulator (typically the FAA or EASA) can approve that system, knowing that all production systems that will carry passengers are exactly the same. While this allows for a great deal of confidence that a system has been rigorously tested, it does mean that change is slow, as all changes, however minor, must go through the whole recertification process again.
The most recent airliners to be developed include the Airbus A380 and the Boeing 787, both of which were developed more than a decade (indeed almost two decades for the 787) after the generation that the Germanwings A320 came from. Unsurprisingly, they are considerably more advanced aircraft, with a plethora of new safety features and enhancements. In recent years, too, advances in computing have lowered the barrier to entry for artificial intelligence and deep learning. So allow me to suggest a future direction for onboard computing in commercial aviation that may prevent tragedies such as Germanwings, as well as similar events that are not malicious, but rather the result of unfortunate pilot error.
Two common machine learning problem types are linear regression and classification. Linear regression seeks to take known data and use it to predict unknowns: for example, using a given postcode, property type, and square footage to predict the value of a property. Classification seeks to answer questions about what type of thing something is. When looking at a picture, is it a dog or a cat? When looking at an email, is it spam or not spam? The important thing to know, however, is that these models are not tied to any particular domain of data. A linear regression model can learn how to predict property prices, the popularity of a movie, movies you might like based on movies you already do like, median income based on gender, age, and education… it’s a data-agnostic way of making predictions. Similarly, a classification model can spot spam vs not spam, cat vs dog, man vs woman, Shakespeare vs Chaucer. My suggestion, therefore, is that we should take this forward into improving aviation safety, by building the ability to understand pilot behaviour into the aircraft’s flight control computer. Broadly speaking, this would be implemented using three types of “intelligent” systems working both in isolation and interactively:
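To make the two problem types concrete, here is a toy sketch using entirely synthetic, illustrative numbers (nothing here is real property or email data): a hand-rolled least-squares fit for regression, and a nearest-centroid rule for classification.

```python
# Toy illustration only: synthetic "floor area -> price" numbers.
area = [50.0, 75.0, 100.0, 125.0]
price = [100.0, 150.0, 200.0, 250.0]  # here price happens to be 2 * area

# Ordinary least squares for a single feature, done by hand:
# slope = cov(x, y) / var(x), intercept = mean(y) - slope * mean(x).
n = len(area)
mean_x = sum(area) / n
mean_y = sum(price) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(area, price)) \
    / sum((x - mean_x) ** 2 for x in area)
intercept = mean_y - slope * mean_x
predicted = slope * 90.0 + intercept  # regression: predict an unseen 90 m^2 property

# Classification: a nearest-centroid rule over made-up 1-D "spamminess" scores.
spam_scores = [5.0, 6.0]
ham_scores = [1.0, 0.0]

def classify(score):
    # Pick the class whose average training score is closest to the input.
    d_spam = abs(score - sum(spam_scores) / len(spam_scores))
    d_ham = abs(score - sum(ham_scores) / len(ham_scores))
    return "spam" if d_spam < d_ham else "not spam"
```

Both halves share the same shape: learn from labelled examples, then answer a question about a new input; that shared shape is what makes the approach data-agnostic.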
An Implementation of Linear Regression and Classification Algorithms to Determine “Unusual” Behaviour:
As you probably already know, linear regression is one of the ways that machine learning is currently employed to make predictions for a value based on another known value or values. A simple example would be to predict which gear a car is in, based on its speed; or to predict house price using postcode and floorspace. The essential element here is that there is a correlation between one value and another, which allows the algorithm to determine a likely value for the unknown based on the known values. Moreover, the more data the model has, the more accurate it *should* be for predictions. Gathering this data should also be relatively trivial in the airline industry, as over a hundred thousand commercial flights take place every day, providing literally billions of data points, and almost all modern airliners contain a device known as a Quick Access Recorder which would facilitate the easy collection of all that data.
With the data available for collection after every safe and successfully completed flight, it becomes possible for any given airline to quickly build up a large body of training data for an AI algorithm on what is a “normal” flight. That is to say, the algorithm will learn the underlying correlations between such variables as aircraft weight, takeoff speed, rate of climb and descent, flaps usage, angle of bank in turns, time spent in cruise, and so on. Furthermore, injecting relevant third-party data, such as atmospheric and meteorological conditions, delays from air traffic control, and so on, may help improve the algorithm further. Data on “atypical flights” can be created from analysis of flight data recorders from other crashes, or possibly recreated using simulators with pilots intentionally mishandling the aircraft, or ignoring certain safety features.
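The training set described above might look something like the following sketch; the field names and numbers are invented for illustration, not taken from any real QAR format:

```python
# Hypothetical records derived from Quick Access Recorder data.
# All field names and values are made up for illustration.
flights = [
    {"weight_kg": 64000, "takeoff_speed_kt": 145, "climb_fpm": 2400, "label": "normal"},
    {"weight_kg": 66000, "takeoff_speed_kt": 149, "climb_fpm": 2300, "label": "normal"},
    {"weight_kg": 65000, "takeoff_speed_kt": 132, "climb_fpm": 900,  "label": "atypical"},
]

# Split into feature vectors and labels, the shape most learning
# libraries expect as training input.
features = [[f["weight_kg"], f["takeoff_speed_kt"], f["climb_fpm"]] for f in flights]
labels = [f["label"] for f in flights]
```

The "atypical" rows would come from crash investigations or deliberately mishandled simulator sessions, as suggested above.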
Once sufficiently well trained, the linear regression algorithm should be able to predict the flow of the flight before the aircraft has even begun moving under its own steam, and may therefore monitor various systems as the flight progresses in order to identify any statistically significant deviations from normal operations. It is here that the classification algorithm would come into play.
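"Statistically significant deviation" can be made concrete with a standard-score check. The numbers below are hypothetical stand-ins for what the trained model would supply (an expected value for a parameter at this phase of flight, and the spread observed across normal flights):

```python
# Illustrative statistics for one monitored parameter (rate of descent,
# ft/min); in practice these would come from the trained model, not
# hard-coded constants like these.
EXPECTED_DESCENT_FPM = -1800.0   # model-predicted value at this flight phase
RESIDUAL_STDDEV_FPM = 400.0      # spread of normal flights around the prediction

def deviation_sigmas(observed, expected=EXPECTED_DESCENT_FPM, sd=RESIDUAL_STDDEV_FPM):
    """How many standard deviations the observation lies from normal."""
    return abs(observed - expected) / sd

# A -3400 ft/min descent sits 4 standard deviations from the predicted
# value, which most conventions would call statistically significant.
```

The classifier's job would then be to decide what kind of deviation this is; the standard score merely flags that one exists.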
So what then if the system identifies behaviour that endangers the aircraft and its passengers? I propose essentially a four tier system as follows:
- If monitored parameters exceed normal by x%, alert the pilot and wait for the pilot to take remedial action
- If monitored parameters exceed normal by y%, alert the pilot and wait for remedial action, and also alert the airline operations team
- If monitored parameters exceed normal by z%, alert the pilot and airline operations, and temporarily remove control from the pilot to return the aircraft to safe and stable flight
- If monitored parameters remain beyond z% after control is returned, lock out all pilot controls until landed, alert the authorities and airline operations, and have the autopilot fly to and land at the nearest suitable airport
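The escalation ladder above can be sketched as a simple threshold function. The percentage values for x, y, and z here are placeholders; real thresholds would be set and validated during certification:

```python
# Hypothetical thresholds (percent deviation from "normal"); the real
# values would be determined during certification, not guessed as here.
X_PCT, Y_PCT, Z_PCT = 10.0, 25.0, 50.0

def tier(deviation_pct, persists_after_recovery=False):
    """Map a percentage deviation onto the four escalation tiers."""
    if deviation_pct <= X_PCT:
        return 0  # within normal bounds: no action
    if deviation_pct <= Y_PCT:
        return 1  # alert the pilot, await remedial action
    if deviation_pct <= Z_PCT:
        return 2  # alert the pilot and airline operations
    # Beyond z%: temporarily remove control to restore stable flight,
    # escalating to a full lockout if the deviation persists once
    # control is handed back.
    return 4 if persists_after_recovery else 3
```

The key design property is that the system only ever escalates in response to sustained deviation, so a single transient excursion cannot jump straight to a lockout.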
Of course, any system which may take control away from the flight crew is going to be controversial. What about false positives, or problems with the algorithm? These will likely happen at some point, and would hopefully be caught in ground simulations before ever surfacing in an operational flight. For these situations, however, a means for the crew to override the system should exist. To prevent the kind of scenario we saw with Germanwings, the override should take place through the use of 2FA, and by at least two members of crew. That is, an option should exist whereby, if M of N crew members agree that the pilot should not have control returned, they can authenticate with the aircraft using 2FA and achieve this. I say this because I believe it is better for the crew to be able to outvote a rogue pilot than for hijackers to be able to coerce crew into overriding safety systems.
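The M-of-N vote reduces to a small quorum check once each crew member has passed the 2FA step (which is stubbed out here). The quorum size and crew identifiers are illustrative:

```python
# Hypothetical quorum: at least M distinct, authenticated crew members
# must agree before the override is granted.
REQUIRED_VOTES_M = 2

def override_granted(authenticated_voters):
    """Grant the override only if at least M *distinct* crew members,
    each assumed to have already passed 2FA, have voted in favour."""
    distinct_voters = set(authenticated_voters)
    return len(distinct_voters) >= REQUIRED_VOTES_M

override_granted(["captain", "purser"])   # quorum reached
override_granted(["captain", "captain"])  # one member voting twice fails
```

Deduplicating voters matters: without it, a single coerced or rogue crew member could satisfy the quorum alone by voting repeatedly.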
Ultimately, aircraft systems already know when they are being threatened (“TOO LOW – TERRAIN!”, “PULL UP!”, “TRAFFIC, TRAFFIC!”, and so on), so it isn’t hard to extend this knowledge to make decisions about when the risk to the aircraft is being deliberately introduced. Once we know that threshold has been met, we can lock out the source of danger.