Why testing must address the trust-based issues surrounding artificial intelligence

Words by Jonathan Dyble

Aviation celebrates its 118th birthday this year. Over the years there have been many milestone advances, and engineers continue to use the latest technology to enhance performance and transform capabilities across both the defence and commercial sectors.

Artificial intelligence (AI) is arguably one of the most exciting areas of innovation and, as in many other sectors, it is garnering a great deal of attention in aviation.

Powered by significant advances in computer processing power, AI is prompting aviation experts to probe opportunities that once seemed impossible, although AI-driven transformation of aviation remains in its infancy.

Given the huge risks and costs involved, full confidence and trust are required before autonomous systems can be deployed at scale. As a result, AI remains something of a novelty in the aviation industry at present, but attention is growing, progress continues to be made and the tide is beginning to turn.

One individual championing AI developments in aviation is Luuk Van Dijk, CEO and founder of Daedalean, a Zurich-based startup specializing in the autonomous operation of aircraft.

While Daedalean is focused on developing software for pilotless and affordable aircraft, Van Dijk is a staunch advocate of erring on the side of caution when it comes to deploying AI in an aviation environment. "We have to be careful of what we mean by artificial intelligence," says Van Dijk. "Any sufficiently advanced technology is indistinguishable from magic, and AI has always been referred to as the kind of thing we can almost but not quite do with computers. By that definition, AI has unlimited possible uses, but unfortunately none are ready today."

"When we look at things that have only fairly recently become possible, understanding an image, for example, it is obvious how massively useful they are to people. But these are applications of modern machine learning, and it is these that currently dominate the meaning of the term AI."

While such technologies remain in their relative infancy, the potential is clear to see.

Van Dijk says, "When we consider a pilot, especially in VFR, they use their eyes to see where they are, where they can fly and where they can land. Systems that assist with these functions, such as GPS and radio navigation, TCAS and ADS-B, PAPI [precision approach path indicator] and ILS, are limited. Strictly speaking they are all optional, and none can replace the use of your eyes."

"With AI, imagine that you can now use computer vision and machine learning to build systems that can help the pilot to see. That creates significant opportunities and possibilities: it can reduce the workload in regular flight and in contingencies, and therefore has the potential to make flying much safer and easier."

A significant reason why such technologies have not yet made their way into the cockpit is a lack of trust, something that must be earned through rigorous, extensive testing. Yet mechanical systems and software are tested in significantly different ways, because of an added layer of complexity in the latter.

"For any structural or mechanical part of an aircraft there are detailed protocols on how to conduct tests that are statistically sound and give you enough confidence to certify the system," says Van Dijk. "Software is different. It is very hard to test because the failures typically depend on rare events in a discrete input space."
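To see why rare events in a discrete input space defeat conventional testing, consider a minimal Python sketch (an illustration, not Daedalean's code) in which exactly one input out of 2^32 triggers a failure; `FAILING_INPUT` and the test budget are invented for the example.

```python
import random

# Illustrative defect: exactly one failing input among 2**32 possibilities.
FAILING_INPUT = 0x9E3779B9  # hypothetical location of the rare defect

def system_under_test(x: int) -> bool:
    """Returns True on success; fails only on the single rare input."""
    return x != FAILING_INPUT

def random_testing_finds_defect(num_cases: int) -> bool:
    """Black-box random testing: did any test case hit the defect?"""
    return any(not system_under_test(random.randrange(2**32))
               for _ in range(num_cases))

# Even a million random test cases hit the defect with probability
# 1 - (1 - 2**-32)**1_000_000, roughly 0.02%.
print(random_testing_finds_defect(1_000_000))
```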

This was a problem that Daedalean encountered in its first project with the European Union Aviation Safety Agency (EASA), working to explore the use of neural networks in developing systems to measurably outperform humans on visual tasks such as navigation, landing guidance and traffic detection.

While the software design assurance approach that stems from the Software Considerations in Airborne Systems and Equipment Certification (DO-178C) works for more traditional software, its guidance was deemed to be only partially applicable to machine-learned systems.

"Instead of having human programmers translate high-level functional and safety requirements into low-level design requirements and computer code, in machine learning a computer explores the design space of possible solutions given a very precisely defined target function that encodes the requirements," says Van Dijk.

"If you can formulate your problem in this form, then it can be a very powerful technique, but you have to somehow come up with the evidence that the resulting system is fit for purpose and safe for use in the real world."

"To achieve this, you have to show that the emergent behavior of a system meets the requirements. That's not trivial and actually requires more care than building the system in the first place."
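A rough sketch of that workflow (under assumed numbers, not Daedalean's method): the training loss plays the role of the precisely defined target function, the optimizer explores the design space, and the emergent behaviour is then verified against the requirement on held-out data. The 0.5m tolerance and the sensor model below are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical requirement (invented for this sketch): estimate a distance
# from two sensor readings with a worst-case error of at most 0.5m.
X = rng.uniform(0, 10, size=(1000, 2))                          # synthetic readings
y = 1.8 * X[:, 0] + 0.4 * X[:, 1] + rng.normal(0, 0.05, 1000)   # synthetic truth
X_train, X_test = X[:800], X[800:]
y_train, y_test = y[:800], y[800:]

# The "precisely defined target function" is the training loss; the computer
# explores the design space (here, a weight vector) by gradient descent.
w = np.zeros(2)
for _ in range(5000):
    grad = 2 * X_train.T @ (X_train @ w - y_train) / len(y_train)
    w -= 0.01 * grad

# The hard part Van Dijk describes: showing the *emergent* behaviour meets
# the requirement, checked here on held-out data the optimizer never saw.
worst_case_error = np.max(np.abs(X_test @ w - y_test))
print(f"worst held-out error: {worst_case_error:.3f} m")
assert worst_case_error <= 0.5, "requirement not met"
```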

From these discoveries, Daedalean recently developed and released a joint report with EASA, with the aim of maturing the concept of learning assurance and pinpointing trustworthy building blocks upon which AI applications could be tested thoroughly enough to be safely and confidently incorporated into an aircraft. "The underlying statistical nature of machine learning systems actually makes them very conducive to evidence and arguments based on sufficient testing," Van Dijk confirms, summarizing the findings showcased in the report.

"The requirements on the system then become traceable to requirements on the test data: you have to show that your test data is sufficiently representative of the data you will encounter during an actual flight. For that you must show that you have sampled your data with independence, a term familiar to those versed in the art of design assurance, but something that has a much stricter mathematical meaning in this context."
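A minimal sketch of why that stricter, mathematical independence matters (this framing is ours, not the report's): only with independently drawn, representative samples can zero observed failures be turned into a quantitative confidence bound on the failure rate.

```python
import math

def tests_needed(p_target: float, delta: float) -> int:
    """Independent, representative, zero-failure test samples needed to
    claim failure rate < p_target at confidence 1 - delta, derived from
    (1 - p_target)**n <= delta."""
    return math.ceil(math.log(delta) / math.log(1.0 - p_target))

# e.g. supporting a failure rate below 1 in 10,000 with 95% confidence:
print(tests_needed(1e-4, 0.05))  # -> 29956 independent samples

# If samples are correlated (e.g. consecutive frames from one flight),
# the effective sample size is smaller and this bound no longer holds.
```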

Another person helping to make the strides needed to make the use of AI in the cockpit a reality is Dan Javorsek, Commander of Detachment 6, Air Force Operational Test and Evaluation Center (AFOTEC) at Nellis Air Force Base in Nevada. Javorsek is also director of the F-35 US Operational Test Team and previously worked as a program manager for the Defense Advanced Research Projects Agency (DARPA) within its Strategic Technology Office.

Much like Van Dijk, Javorsek points to trust as the key element in ensuring that potentially transformational AI and automated systems become accepted and incorporated into future aircraft. Furthermore, he believes that this will be hard to achieve using current test methods. "Traditional trust-based research relies heavily on surveys taken after test events. These proved to be largely inadequate for a variety of reasons, but most notably their lack of diagnostics during different phases of a dynamic engagement," says Javorsek.

As part of his research, Javorsek attempted to address this challenge directly by building a trust measurement mechanism reliant upon a pilot's physiology. Pilots' attention was divided between two primary tasks performed concurrently, forcing them to decide which task to accomplish themselves and which to offload to an accompanying autonomous system.

"Through these tests we were able to measure a host of physiological indicators shown by the pilots, from their heart rate and galvanic skin response to their gaze and pupil dwell times on different aspects of the cockpit environment," Javorsek says.

"As a result, we end up with a metric for which contextual situations and which autonomous system behaviors give rise to manoeuvres that the pilots appropriately trust."
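The article does not describe how these signals are combined into a metric. As a purely hypothetical sketch, one simple approach would be to z-score each channel against a pilot's resting baseline and fold them into a single indicator per engagement phase; all names and numbers below are invented.

```python
import numpy as np

def trust_indicator(heart_rate, gsr, gaze_dwell, baseline_mean, baseline_std):
    """Hypothetical composite: z-score each channel against the pilot's
    resting baseline; lower arousal than baseline reads as higher trust."""
    signals = np.array([heart_rate, gsr, gaze_dwell])
    z = (signals - baseline_mean) / baseline_std
    return -float(z.mean())  # calmer than baseline => positive score

# Invented baseline values: bpm, microsiemens, seconds of gaze dwell
baseline_mean = np.array([72.0, 4.0, 0.8])
baseline_std = np.array([8.0, 1.0, 0.3])
print(trust_indicator(85.0, 5.5, 0.5, baseline_mean, baseline_std))
```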

However, a key challenge that Javorsek encountered during this research related to the difficulty machines would have in assessing hard-to-anticipate events in what he describes as very messy military situations.

Real-world scenarios will often throw up unusual tactics and situations, such as stale tracks and the presence of significant denial and deception on both sides of an engagement. In addition, electronic jammers and repeaters are often used to attempt to mimic and confuse an adversary.

"This can lead to an environment prone to accidental fratricide that can be challenging for even the most seasoned and experienced pilots," Javorsek says. "As a result, aircrews need to be very aware of the limitations of any autonomous system they are working with and employing on the battlefield."

It is perhaps for these reasons that Nick Gkikas, systems engineer for human factors engineering and flight deck at Airbus Defence and Space, argues that the most effective use of AI and machine learning at present is outside the cockpit. "In aviation, AI and machine learning are most effective when used offline and on the ground, managing and exploiting big data from aircraft health and human-in/on-the-loop mission performance during training and operations," he says.

"In the cockpit, most people imagine the implementation of machine learning as an R2-D2-type robot assistant. While such a capability may be possible today, it is currently still limited by the amount of processing power available on board and the development of effective human-machine interfaces with machine agents in the system."

Gkikas agrees with Javorsek and Van Dijk that AI has not yet been developed sufficiently to be part of the cockpit in an effective and safe manner. Until such technologies are more advanced, effectively tested and backed by even greater computing power, it seems AI may be better placed in other aviation applications, such as weapons systems.

Javorsek also believes it will be several years before AI and machine learning software will be successful in dynamically controlling the manoeuvres of fleet aircraft traditionally assigned to contemporary manned fighters. However, there is consensus amongst experts that there is undoubted potential for such technologies to be developed further and eventually incorporated within the cockpit of future aircraft.

"For AI in the cockpit, and in aircraft in general, I am confident we will see unmanned drones, eVTOL aircraft and similarly transformative technologies being rolled out beyond test environments in the not-so-distant future," concludes Van Dijk.

