
Certification challenges for autonomous aircraft systems

By Erin I. Rivera and Anna Dietrich | November 4, 2020


This is the second part of an opinion essay by Erin Rivera and Anna Dietrich on the path from automation to autonomy in aviation. Read the first part here.

In many respects, developing a safe autonomous ground vehicle is a far greater challenge than developing a safe autonomous aircraft. An autonomous vehicle must not only identify and interpret the actions of other nearby vehicles, traffic lights, and roadway markings, which often vary between localities, but must also understand and be able to respond to the unpredictable actions of pedestrians and wildlife crossing the vehicle’s pathway.

Wisk is taking a straight-to-autonomy approach with its Cora eVTOL, which will likely delay its approval for operations by regulators. Wisk Photo

There are fewer objects to identify and track in the air, since there are no pedestrians, cyclists, traffic signs, or roadway markings in the sky, though birds are still a problem. Autonomous aircraft will use an array of sensors, radar, and computer vision to detect airborne objects — from birds to drones and other aircraft — with far greater reliability than human sight. The speeds and distances inherent to aviation, however, are a challenge for both detection and processing capabilities.

Adding a third dimension — altitude — provides more flexibility and predictability in the environment in which autonomous aircraft can operate. This, too, is both a blessing and a curse; if a terrestrial vehicle is uncertain of its environment, it has the option to simply stop or pull over. In the sky, landing isn’t as simple a task, and the inefficiency of many eVTOL aircraft in hover will mean “stopping” is not ideal either.

The far greater challenge for autonomous aircraft is the certification of autonomous airborne systems. The software in any aircraft computer system must be developed and certified according to the impact of a system failure on aircraft, crew, and passenger safety. RTCA DO-178C, Software Considerations in Airborne Systems and Equipment Certification, defines five levels of certification, from A to E. Level A, the highest level of safety certification, is required for systems designated as "flight-critical," which applies, for example, to the autopilot, navigation, and all fly-by-wire systems in all type-certificated aircraft. Level E applies to airborne systems with no or minimal impact on safety, such as entertainment consoles. Under the current certification standards, autonomous systems and their programming would be deemed flight-critical, requiring the highest safety certification level — though FAA and EASA requirement levels for eVTOL aircraft have not yet been finalized.
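
For orientation, that mapping can be summarized in a few lines of code. In the sketch below, the severity mapping comes from the standard itself, while the objective counts are the commonly cited DO-178C totals, included only to convey how steeply the verification burden climbs from Level E to Level A.

```python
# DO-178C software levels, keyed by the severity of the failure
# condition the software can contribute to. Objective counts are the
# commonly cited DO-178C totals, shown only for relative burden.
DO178C_LEVELS = {
    "A": ("Catastrophic",     71),  # e.g. fly-by-wire, autopilot
    "B": ("Hazardous",        69),
    "C": ("Major",            62),
    "D": ("Minor",            26),
    "E": ("No safety effect",  0),  # e.g. entertainment consoles
}

for level, (severity, objectives) in DO178C_LEVELS.items():
    print(f"Level {level}: {severity} ({objectives} objectives)")
```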

The current FAA software certification framework poses significant challenges for the certification of autonomous systems that are nondeterministic. Current standards require verification of every system output to ensure that the system will not generate a command that will jeopardize safety of flight. By design, the output of a nondeterministic system cannot be predicted since the system can choose an infinite number of pathways to produce the desired output. Thus, it is impossible to test and verify that every system output complies with the current certification safety and assurance standards. 
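
To make the scale of the problem concrete, consider a rough sketch (with placeholder numbers) contrasting a small deterministic component, which can be verified input by input, with a learned perception system, whose input space defeats enumeration even at toy sizes.

```python
import math

# A deterministic component: every input maps to one checkable output,
# so the whole input space can be verified exhaustively.
gain_table = {airspeed: 0.5 + 0.01 * airspeed for airspeed in range(40, 200)}
assert all(0.0 < gain < 3.0 for gain in gain_table.values())  # 100% coverage

# A learned perception system: even a toy 64x64, 8-bit grayscale camera
# has 256**4096 distinct input images. Enumerating and verifying the
# output for each one is not physically possible.
distinct_images = 256 ** (64 * 64)
print(f"~10^{math.floor(math.log10(distinct_images))} possible input images")
```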

To deal with nondeterministic systems, most software developers using machine learning for autonomy purposes intend to “freeze” a version of their system software that meets requirements, test it relentlessly, and then put it through the certification process to ensure it is safe for passenger aircraft. This presents the challenge of updating the aircraft software as new learning is accomplished: any change in aircraft software requires recertification of the software before use. Instead of real-time learning, system updates are generated by collecting flight data from test and operational aircraft which can be used to retrain and develop an improved version of the system software. Incorporating that learning then requires a great deal of time and effort to recertify the updated system, which is why aircraft software developers don’t frequently make programming updates — even when doing so would increase system efficiencies and performance. 
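
A minimal sketch of that freeze-and-retrain cycle, using hypothetical function names rather than any developer's actual tooling, might look like the following: the certified artifact is fingerprinted, retraining happens offline from fleet data, and any change to the fingerprint triggers recertification rather than a silent update.

```python
import hashlib

def freeze(weights: bytes) -> str:
    """Pin the exact software version under test; certification evidence
    is tied to this fingerprint, not to 'the latest build'."""
    return hashlib.sha256(weights).hexdigest()

def retrain(weights: bytes, flight_data: bytes) -> bytes:
    """Offline retraining from collected fleet data (placeholder logic)."""
    return hashlib.sha256(weights + flight_data).digest()

certified = freeze(b"v1-weights")                      # the flying version
candidate = retrain(b"v1-weights", b"fleet flight logs")

# No real-time learning: the fleet keeps the frozen version until the
# retrained candidate has been verified and recertified end to end.
if freeze(candidate) != certified:
    print("software changed -> recertify before fleet-wide deployment")
```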

The speed at which autonomy systems can advance is closely linked to how rapidly developers will be able to incorporate new data, improve the system and push an updated version out to the global fleet to restart the process. Current certification systems won’t allow for this to be done rapidly, since the certification process is often measured in years and millions of dollars. This is in stark contrast to automotive software, which in some models is updated remotely every couple of weeks. Providing the ability to make safety-enhancing updates to advanced aviation software in a timely and cost-effective way will be critical to the safe rollout of this technology, as one of the main advantages of autonomy over human pilots is that once the software has been fixed, the autonomy won’t make the same mistake twice — but we can only benefit from this if we have a mechanism by which we can efficiently certify updated software that has incorporated that learning.

Because of these challenges surrounding autonomous and highly automated systems, the current certification requirements and guidance standards for aircraft systems and software are undergoing significant revisions by standards groups ASTM, SAE, and RTCA. One design and certification approach, Autonomy Design and Operations in Aviation: Terminology and Requirements Framework, developed by ASTM (AC377), evaluates the risks and benefits of automating individual aircraft systems or pilot functions rather than evaluating the vehicle by "Levels of Autonomy" — a common classification approach used for ground vehicles. The framework proposes evaluating the added safety benefit and reliability of automating a system or function, even if the system is less than perfect, so long as it increases safety overall compared to the available human options. This approach to certification allows the flexibility to increase aircraft automation/autonomy over time, leading to the certification of aircraft in which specific systems or functions are performed autonomously. For instance, the pilot could control an aircraft's heading, speed, and altitude, while the aircraft automatically performs takeoffs and landings.

Implementing this approach, however, requires developing standards against which to measure an autonomous system's performance. Measuring performance against human standards is more complicated than one might think, because human pilots' abilities vary across a wide spectrum, raising the question of what performance bar the system must meet. Like anything else, the better the system, the more expensive it is to develop and certify. If every autonomous system is certified to the highest design assurance level, then developing and building the aircraft will be too expensive for its intended use.
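
Reduced to code, the framework's function-by-function logic is simply a comparison against whatever performance bar is chosen. Every failure rate in the sketch below is invented for illustration; the point is the shape of the decision, not the numbers.

```python
# Illustrative only: a per-function automation decision in the spirit
# of the ASTM framework. Every rate below is a made-up placeholder.
HUMAN_BASELINE = {   # assumed human failure rate per flight hour
    "takeoff": 1e-4,
    "landing": 2e-4,
    "taxi":    5e-5,
}
DEMONSTRATED = {     # assumed system rate shown in flight test
    "takeoff": 3e-5,
    "landing": 1e-4,
    "taxi":    8e-5,
}

# Automate a function only where the system beats the chosen bar,
# even if the system is imperfect in absolute terms.
for function, bar in HUMAN_BASELINE.items():
    verdict = "automate" if DEMONSTRATED[function] < bar else "keep human"
    print(f"{function}: {verdict}")
```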

Fully Autonomous Operations

Traditionally, advanced aircraft systems such as autopilots have relied on human pilots to be their backup in case of a system failure. If an error occurs, the system reverts control of the aircraft back to the pilots. This is not a robust approach to safety since humans are not always reliable backups — as discussed in our earlier column — and often may be the cause of the problem in the first place. Additionally, using humans as backups increases the training burden on human pilots, who require periodic retraining since training goes stale over time. Finally, building autonomous systems to revert control back to a human when something fails defeats the end goal of autonomy in the first place — and undercuts its safety case. 

Instead, pathways must be found that allow an autonomous or highly automated system to stand on its own through high reliability and fail-functional architectures. In an off-nominal situation, such as a mechanical malfunction or degraded system performance, a fail-functional autonomous system will be capable of recognizing that it is in a degraded state and activating a limp or safe mode that safely operates or lands the aircraft.

One backup solution is to "bound" the autonomous or highly automated system with a much simpler safety monitor, which ensures that the performance and commands generated by the autonomous system are reliable (see ASTM F3269). In the event the safety monitor detects degraded system performance or a failure, the monitor takes control and reverts to a simpler, less adaptable but more deterministic mode of operation. This is commonly referred to as runtime assurance. If a computer vision system fails and is unable to verify the safety of an unimproved landing area, for example, instead of the pilot taking over, a backup system will use instrument landing procedures and navigation aids to land at a known helipad or runway.
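
A bare-bones sketch of that runtime assurance pattern, with placeholder envelope limits and controllers, might look like this: the monitor is deliberately simple enough to be verified conventionally, and it gates every command from the complex system.

```python
from dataclasses import dataclass

@dataclass
class Command:
    pitch_deg: float
    roll_deg: float

def complex_controller(state: dict) -> Command:
    """The advanced (possibly learned) system being bounded."""
    return Command(state["pitch_cmd"], state["roll_cmd"])

def backup_controller(state: dict) -> Command:
    """Simple, deterministic reversionary mode: wings level, gentle descent."""
    return Command(pitch_deg=-2.0, roll_deg=0.0)

def within_envelope(cmd: Command) -> bool:
    """The safety monitor -- small enough to verify conventionally."""
    return abs(cmd.pitch_deg) <= 15.0 and abs(cmd.roll_deg) <= 30.0

def run_time_assured(state: dict) -> Command:
    cmd = complex_controller(state)
    return cmd if within_envelope(cmd) else backup_controller(state)

print(run_time_assured({"pitch_cmd": 4.0, "roll_cmd": 55.0}))
# -> Command(pitch_deg=-2.0, roll_deg=0.0): the monitor reverted to backup
```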

Conclusion

Ongoing collaborative efforts are underway between autonomy developers and regulators to define requirements and procedures that will allow autonomy to become an ever-greater part of aviation. As with any new technology, it is crucial to proceed deliberately and with caution to ensure that our certification approaches benefit aircraft safety. If we are too restrictive, the industry will be hobbled; if we are too permissive, we will likely see preventable accidents, which may set the industry back even further.

Join the Conversation

5 Comments

  1. “Under the current certification standards, autonomous systems and their programming would be deemed flight-critical, requiring the highest safety certification level — though FAA and EASA requirement levels for eVTOL aircraft have not yet been finalized.” Is that the correct way to interpret AC-23.1309-1E? Check page 23.

  2. Awesome article which lays bare the steep road to fully autonomous aircraft. As a drone pilot/inspector for the City of Stuart, Fl and a licensed helicopter pilot I appreciate this type of informative article.

  3. I believe the kinetic energy of the vehicle and the number of persons aboard should be used as the basis for the levels of certitude of hardware, software and most importantly, those designing, operating and maintaining them.
    A 75 lbs. UAV cruising at 250 knots in urban areas navigating in 4D managed airspace close to homes for package delivery is a different risk than a 200 lbs. machine monitoring pipelines.
    A UAV system carrying 4 souls for hire should be evaluated differently than one carrying 10 or more.

    Excellent article. We need a new approach. 10% of all deployed ADS-B equipped aircraft fail ATC requirements today. Look how well that’s working using existing systems & equipment certification paths to manage aircraft spacing rules with the newest technology.
    Thanks for your time to develop these theses. I enjoy studying them.

  4. Great article. Let’s hope sanity and the right level of rigor prevail, as when we first put EFIS systems into GA!

