
Moving from automation to autonomy in aviation

By Erin I. Rivera and Anna Dietrich | October 16, 2020


We’ve all seen that movie scene where the pilots on a passenger jet become incapacitated by food poisoning. The aircraft enters a nosedive as one of the pilots invariably slumps over the controls, pushing the aircraft’s nose down, while a warning siren grows louder in the background. Every pilot who has flown as a passenger, no matter their experience, envisions answering the flight attendant’s frantic announcement over the intercom: “Is there a pilot on board?”

Airbus recently achieved autonomous take-off and landing of an A350-1000 passenger jet, one of many projects pushing the boundaries of autonomy in aviation. Airbus Image

Truth be told, modern airliners pretty much fly themselves. Pilots still manipulate the stick and rudder pedals and perform take-offs and the majority of landings, but most of the flight is flown on autopilot. Instead of flying, pilots spend the majority of their time monitoring aircraft systems, communicating, and tinkering with switches, dials, and a trackball to adjust the aircraft’s flight plan. Yet even though modern aircraft rely on automation for just about every aspect of flight, most cannot land safely without assistance from an onboard pilot.

Developments in computer vision, voice recognition, artificial intelligence, and automation, however, will make feasible the next generation of highly automated and autonomous aircraft that can taxi, take off, and land without human assistance. Over time, and given a proven safety record, the human pilot’s role will change as aircraft are certified for optionally piloted or remotely piloted operations, and eventually for self-piloted flight, with the passenger simply inputting their desired destination. Still, humans, whether pilots or otherwise, will play an important role and remain “in-the-loop” or “on-the-loop” to varying degrees. Depending on the system’s capabilities, that role could range from highly interactive to more passive monitoring.

Automation and Autonomy

The terms “automation” and “autonomy” do not mean the same thing. Think of automation as a system’s ability to perform a specific, repetitive function without being aware of what is actually happening or being able to change that function. If you own a robotic vacuum cleaner (e.g., a Roomba), you have seen this little guy roam the floor, picking up dirt along its straight-line path until it bumps into a wall, which changes its trajectory. Likewise, current aircraft autopilots are complex automated systems capable of more or less flying an aircraft. Autopilot systems and Roombas alike require human assistance, whether when flight plan changes occur or when the little guy thinks he is stuck on a cliff.
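To make that distinction concrete, here is a minimal sketch, in Python, of automation in this sense: a fixed rule applied tick after tick with no awareness of context. The target altitude, gain value, and function name are illustrative assumptions, not any real autopilot’s design.

```python
# A minimal sketch of "automation": one fixed rule, repeated endlessly,
# with no understanding of why the rule exists. All numbers are made up.

TARGET_ALTITUDE_FT = 10_000
GAIN = 0.01  # hypothetical proportional gain (pitch degrees per foot of error)

def altitude_hold_step(current_altitude_ft):
    """One tick of a proportional altitude hold: returns a pitch command in degrees."""
    error = TARGET_ALTITUDE_FT - current_altitude_ft
    return GAIN * error  # climb if below target, descend if above; nothing more

# The loop will dutifully run forever. It cannot ask why the altitude is wrong,
# notice terrain ahead, or decide to divert. That judgment stays with the human.
for altitude in (9_500, 9_800, 10_050):
    print(f"{altitude} ft -> pitch command {altitude_hold_step(altitude):+.1f} deg")
```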

Autonomous systems, on the other hand, employ artificial intelligence (AI) to detect and adapt to changes in their environment. Today’s most advanced AI algorithms control Tesla’s autopilot, Amazon’s Alexa, and the facial recognition software on your phone that can’t recognize you with a mask on. Without general intelligence, i.e. a human’s ability to reason, apply logic, and comprehend meaning, AI algorithms, which are usually trained through machine learning on massive data sets expected to resemble the environments they will encounter, struggle with tasks that are rudimentary for humans. Identifying the blurry letters in a captcha challenge, which most of us have solved when resetting a password and which exists precisely to distinguish human users from computers, illustrates the strengths and weaknesses of current AI-powered autonomy systems.

System Programming Limitations

The performance of machine learning-driven autonomous systems is largely a function of the quality of the data they are trained on and the extent of the system’s training in responding to particular situations. Computer vision systems, for example, are trained on immense amounts of high-quality data, such as video feeds that have often been hand-tagged by humans to indicate cars, pedestrians, airplanes, birds, and other objects. With enough iteration and refinement, this type of data can produce a system capable of, for example, detecting objects on the ground beneath an aircraft and identifying suitable landing areas.
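For readers curious what “training on hand-tagged data” actually looks like, here is a minimal sketch in Python using PyTorch. The tiny network, the random tensors standing in for camera frames, and the “clear ground”/“obstacle” labels are hypothetical stand-ins for illustration only, not any manufacturer’s system.

```python
# Minimal, illustrative training loop: a small model learns to map labeled
# "images" to classes. Random tensors stand in for hand-tagged camera frames.
import torch
from torch import nn

images = torch.randn(64, 3, 32, 32)   # 64 fake 3-channel frames
labels = torch.randint(0, 2, (64,))   # 0 = clear landing area, 1 = obstacle (the human tags)

model = nn.Sequential(                # deliberately tiny stand-in network
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 2),                  # two output classes
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):                # each pass nudges the model toward the labels
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()

# The trained model only "knows" what its labeled examples taught it;
# situations absent from the training data are, to it, guesswork.
```

The closing comment is the article’s point in miniature: the system’s competence is bounded by its training data.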

Autonomous systems generally perform well only in situations that were anticipated and planned for by their engineers, as evidenced by self-driving car accidents to date. In 2016, a Tesla Model S operating with its autopilot system activated crashed into a semi-truck that was crossing the highway, killing the Tesla driver. The autopilot did not detect the semi and its 13- by 48-foot trailer as the truck crossed the highway, despite the Tesla’s forward-facing camera, radar, and ultrasonic sensors.

One explanation offered for why the Tesla autopilot did not detect the trailer is that its white color against a brightly lit sky, combined with the trailer’s high ride height, may have confused the system into identifying it as an overhead road sign. Following an investigation into the accident, the National Transportation Safety Board (NTSB) determined that Tesla’s autopilot system was simply not programmed to respond to situations involving traffic crossing a highway.

At the time, Tesla’s autopilot system was programmed for highway use only; a crossing semi-truck was a one-in-a-million scenario not envisioned by its architects.

In 2018, an Uber autonomous test vehicle operating for development purposes on a pre-planned route on public roads struck and killed a pedestrian crossing the road at night. The vehicle detected the pedestrian, but according to the NTSB accident report, the autonomous system relied on the vehicle operator to intervene “to avoid a collision or mitigate an impact.” According to the report, the autonomous system was also not programmed to alert the vehicle’s safety operator, who was distracted throughout the trip by her use of a personal cell phone.

These accidents illustrate a few important lessons that should underpin the move from automated systems to more autonomous systems in aviation. Of course, system architects should attempt to envision and train for every possible scenario and failure mode. In aviation, minor pilot errors can result in life-threatening situations; the same holds true for programming errors, so system developers must be aware of every potential safety risk, no matter how unlikely, and must capture that possibility within the system’s abilities.

In reality, however, that is simply not possible. Tesla drivers had logged more than 47 million miles on autopilot by the time the 2016 crash occurred; autonomous systems will likely be commercialized in aviation long before aircraft reach that milestone in actual flight, though simulation can help uncover edge cases.

To safely employ autonomy in aviation, systems will have to more effectively integrate the roles of machines and humans, whether that human is a pilot on the aircraft, a remote pilot on the ground, or an operator in a monitoring capacity. That also means understanding that humans are mentally ill-equipped for repetitive tasks such as continuously monitoring an autonomous vehicle’s performance.

Futuristic visions of aircraft operating in urban spaces will require the integration of autonomous systems in order to be safe. Airbus UTM Image

Autonomy systems must be able to recognize when they are in unfamiliar situations and alert their human counterparts. This is the subject of an early-stage DARPA project called Competency-Aware Machine Learning, which aims to produce machine learning systems that can communicate their confidence level in a given situation and, potentially, alert human operators when intervention may be required.
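As a purely illustrative sketch of that idea, and not a description of the DARPA program itself, the hand-off logic might look something like this in Python. The predict_with_confidence() interface and the 0.85 threshold are assumptions invented for the example.

```python
import random

def predict_with_confidence(sensor_frame):
    """Stand-in for a trained model: returns (decision, confidence between 0 and 1)."""
    # A real system would run inference on sensor data here; we fake it for illustration.
    return "continue_approach", random.random()

ALERT_THRESHOLD = 0.85  # hypothetical minimum confidence for autonomous action

def act_or_alert(sensor_frame, alert_human):
    decision, confidence = predict_with_confidence(sensor_frame)
    if confidence >= ALERT_THRESHOLD:
        return decision  # the system judges itself competent here and acts on its own
    # Unfamiliar or ambiguous situation: bring the human into the loop.
    alert_human(f"Low confidence ({confidence:.2f}): operator intervention may be required")
    return None

# Example usage with a trivial "human" callback:
act_or_alert(sensor_frame=None, alert_human=print)
```

The hard part, of course, is making that self-assessed confidence trustworthy in the first place.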

In theory, autonomy systems will prove far more capable learners than humans. Our ability to comprehend information varies from person to person, the effectiveness of our training fades over time and must be periodically refreshed, and even then, not every pilot will perform at the same level in every situation.

Again, in theory, errors encountered by autonomous systems will only need to be fixed once, and that fix will apply to all aircraft operating that system without degradation over time — so long as industry can quickly update and recertify system programming once an issue is identified. In practice, however, unique situations or failure modes can have many parameters that are not all captured by one correction or retraining.

Conclusion

For most of the aviation era and arguably still today, the most advanced equipment onboard the aircraft has been the pilot. Advancements in autonomous aircraft may one day render the human pilot obsolete, but that day is still quite a ways off. The introduction of autonomy systems promises tremendous improvements in safety over the current pilot-and-automation setup, but the transitioning years must be approached with a careful understanding of the capabilities, limitations, strengths, and weaknesses of available AI systems and humans alike.

In a not-so-distant future film, the flight attendant’s frantic announcement over the intercom may be: “Is there a computer programmer with expertise in autonomous aircraft on board?”

The attendant’s call will likely be greeted with silence, since there are far fewer autonomous aircraft programmers than airline pilots. Let’s all hope sufficient troubleshooting has been done in advance; until then, humans should be part of the loop.

Editor’s note: This is part one of the authors’ discussion of autonomy in aviation. Read part two here.

