
Can We Trust AI in Safety Critical Systems?



AI is, famously, a black box. While neural networks are designed for specific applications, the training process produces millions or billions of parameters without giving us much insight into what any individual parameter means.

For safety critical applications, are we comfortable with that level of not knowing how they work?

“I think we are,” Neil Stroud, VP of marketing and business development at CoreAVI, told EE Times. “Safety is a probability game. [Systems are] never going to be 100% safe, there’s always going to be some corner case, and the same is true with AI.”

While some familiar concepts from functional safety can be applied to AI, some can’t.

“With AI, even if probability says you’re going to get the same answer, you may get there a different way, so you can’t lock-step it,” Stroud said, referring to the classic technique where the same program is run in parallel on identical cores to cross check results.
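To make the contrast concrete, here is a minimal sketch of the classic lock-step cross check Stroud refers to, with a hypothetical deterministic `compute` function standing in for the replicated safety function; it illustrates the general idea only, not CoreAVI's implementation.

```python
# Minimal sketch of classic lock-step cross-checking: the same deterministic
# routine is executed twice (standing in for two identical cores) and the
# results are compared exactly. compute() is a hypothetical stand-in, not any
# vendor's actual workload.

def compute(sensor_value: int) -> int:
    """Deterministic stand-in for the replicated safety function."""
    return (sensor_value * 3 + 7) % 251

def lockstep_check(sensor_value: int) -> int:
    result_a = compute(sensor_value)  # "core A"
    result_b = compute(sensor_value)  # "core B"
    if result_a != result_b:
        raise RuntimeError("Lock-step mismatch: flag a fault and enter a safe state")
    return result_a

print(lockstep_check(42))  # identical inputs must yield bit-identical outputs
```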

There are ways, however, to make AI inference deterministic enough for demanding avionics systems like the kind CoreAVI works with. While determinism can be improved with the right training, CoreAVI also analyzes trained AI models to strip out and recompile any non-deterministic parts.

“Part of the onus is on the model developer to come up with a robust model that does the job it’s supposed to,” Stroud said, adding that if developers write proprietary algorithms, that often adds to the complexity.

Another technique is to test run a particular AI inference many times to find the worst-case execution time, then allow that much time when the inference is run in deployment. This helps AI become a repeatable, predictable component of a safety critical system. If an inference runs longer than the worst-case time, this would be handled by system-level mitigations, such as watchdogs, that catch long-running parts of the program and take the necessary action—just like for any non-AI powered parts of the program, Stroud said.
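A minimal sketch of that approach is below, assuming a placeholder `run_inference` function and an arbitrary 20% safety margin (both are illustrative assumptions, not CoreAVI's numbers): the worst case over many trial runs becomes the time budget, and a watchdog-style check flags any deployment run that overruns it.

```python
import time

def run_inference(frame):
    """Hypothetical stand-in for the deployed AI inference."""
    time.sleep(0.002)  # simulate roughly 2 ms of work
    return "no_obstacle"

def measure_wcet(trials: int = 1000, margin: float = 1.2) -> float:
    """Run the inference many times and return the padded worst-case time (seconds)."""
    worst = 0.0
    for _ in range(trials):
        start = time.perf_counter()
        run_inference(frame=None)
        worst = max(worst, time.perf_counter() - start)
    return worst * margin  # time budget allotted to the inference in deployment

def guarded_inference(frame, budget_s: float):
    """Watchdog-style check: flag any run that exceeds the allotted budget."""
    start = time.perf_counter()
    result = run_inference(frame)
    if time.perf_counter() - start > budget_s:
        raise TimeoutError("Inference overran its budget: trigger system-level mitigation")
    return result

budget = measure_wcet()
print(guarded_inference(None, budget))
```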

CoreAVI works with GPU suppliers to build safety-critical drivers and libraries for GPUs, originally for graphics in aircraft cockpit displays, but increasingly for GPU acceleration of AI in avionics, automotive and industrial applications. The company is one of several driving an effort towards an industry standard API for safety-critical GPU applications as part of the Vulkan standard, which is designed to allow safety certification and code portability to different hardware.

Stroud cited CoreAVI customer Airbus Defence and Space’s use of AI in fully autonomous air-to-air refueling systems as an example of how AI can—and does—work in even the most safety critical applications today.

CoreAVI’s middleware (GPU drivers and safety-critical libraries) is designed for demanding systems such as avionics and automotive driver assistance systems. (Source: CoreAVI)

Organic mismatch

Certifying safety critical advanced driver assistance systems (ADAS) when we don’t know exactly how the AI arrives at an answer is certainly a challenge, said Miro Adzan, general manager for ADAS and automotive systems at Texas Instruments (TI).

Miro Adzan (Source: TI)

“Functional safety with ISO 26262 is about understanding what is happening, it’s about determining with a certain probability that a certain outcome will happen,” Adzan said. “Now if we talk about artificial intelligence, just by the nature of artificial intelligence and how it works, this is exactly what is not happening… I think there’s an organic mismatch between these two. And that’s the challenge.”

ISO 26262 certification relies on determining probabilities of failure. In some cases the standard suggests how this should be done, and while following that guidance is the easier way to prove compliance, there is another way, Adzan added.

“There’s a specific subsection in the ISO standard that accepts proof by example,” he said. “There’s a section that says you can prove not by design but by testing – or by real life usage – so if you can show in real life that there is a certain level of non-failure, you can say that this works… The only problem is that this is not transferable, so for the next system you would have to prove it again.”

In practice, because certified subsystems cannot be carried over to new designs this way, the amount of test data required may mean this route to certification is not economical, he added.
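To illustrate why, here is a back-of-the-envelope sketch using the statistician's "rule of three" (an assumption brought in for scale, not a calculation from the article or from ISO 26262): demonstrating a failure rate below a target λ at roughly 95% confidence, with zero observed failures, takes on the order of 3/λ hours of operation.

```python
# Back-of-the-envelope scale estimate (rule of three, ~95% confidence, zero
# observed failures): test time needed to claim a failure rate below a target.
# This is an illustrative assumption, not an ISO 26262 procedure.

def required_test_hours(target_failures_per_hour: float) -> float:
    return 3.0 / target_failures_per_hour

for rate in (1e-5, 1e-7, 1e-9):
    hours = required_test_hours(rate)
    print(f"target {rate:.0e} failures/hour -> ~{hours:,.0f} test hours "
          f"(~{hours / 8760:,.0f} years of continuous operation)")
```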

Safety concepts

Ryan Zhao (Source: TI)

General safety concepts like redundancy are not mutually exclusive with AI, according to Ryan Zhao, general manager for motor drives and robotics at TI.

“We can do some redundancy in the design,” he said. “We can use multiple chips, multiple cores, for redundancy, not only at the silicon level, also at the software level.”

TI’s TDA4, part of the Jacinto processor series, can be set up with a safety island: isolated cores on chip can monitor or cross check each other without having to execute full lock-step operation. The safety island uses a separate clock and memory.

The TDA4 has dual Arm Cortex-R5F cores, plus 8 TOPS of AI acceleration via a C7x DSP and an in-house-developed matrix multiplication accelerator. Partitioning can be done not only between cores, but also at the virtual level on the same core.
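As a software-level illustration of that monitoring idea (an assumed sketch, not TI's safety-island firmware), a lightweight task can cross check the output of the heavier AI path against a simple, independently coded plausibility rule instead of replicating the whole path in lock-step:

```python
# Illustrative sketch of a monitor cross check (not TI's safety-island code):
# a simple, independently written plausibility rule checks the heavier path's
# output instead of running the full workload in lock-step.

def heavy_path_speed_estimate(wheel_ticks: int, dt_s: float) -> float:
    """Hypothetical stand-in for the compute-heavy (possibly AI-assisted) path."""
    return wheel_ticks * 0.05 / dt_s  # metres per second

def monitor_plausible(speed_mps: float, max_plausible_mps: float = 60.0) -> bool:
    """Cheap check an isolated safety core could run with its own clock and memory."""
    return 0.0 <= speed_mps <= max_plausible_mps

speed = heavy_path_speed_estimate(wheel_ticks=80, dt_s=0.1)
if monitor_plausible(speed):
    print(f"Speed accepted: {speed:.1f} m/s")
else:
    print("Cross check failed: signal a fault and fall back to a safe state")
```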

Matthias Thoma (Source: TI)

“You might also have the redundancy more at the sensor level, rather than at the AI model level,” said Matthias Thoma, robotics systems manager at TI. His example was a warehouse robot with a camera and radar: camera data is used to grab a package, while camera and radar together detect a person walking into the robot’s safety zone. Today’s industrial robots, however, usually rely on non-AI technologies such as light grids to detect whether a person enters the safety zone, he said.
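A minimal sketch of the sensor-level redundancy Thoma describes, with hypothetical detector stubs in place of real camera and radar pipelines: the safety zone counts as occupied if either sensor reports a person, so a single failed sensor or model cannot silently mask the hazard.

```python
# Sketch of sensor-level redundancy (hypothetical detector stubs, not a real
# robot stack): the safety zone counts as occupied if EITHER sensor fires.

def camera_detects_person(frame) -> bool:
    """Stand-in for a camera-based (possibly AI) person detector."""
    return frame == "person_in_frame"

def radar_detects_person(point_cloud) -> bool:
    """Stand-in for a radar-based occupancy check."""
    return point_cloud == "reflection_in_zone"

def safety_zone_occupied(frame, point_cloud) -> bool:
    return camera_detects_person(frame) or radar_detects_person(point_cloud)

if safety_zone_occupied(frame="person_in_frame", point_cloud="empty"):
    print("Stop the robot: safety zone occupied")
```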

Explainable AI

Mohammed Umar Dogar, VP of the IoT and infrastructure business unit at Renesas, told EE Times the overall impact of AI on safety critical systems is positive, particularly on the factory floor.

“Real time analytics is where I see a lot of growth and that’s why we’re investing into this area very heavily,” he said. “But one of the big problems with AI in general is the explainability… the model is a black box. A lot of the companies can develop the model itself, but if I’m an OEM, I need to know what it’s doing.”

Renesas gained significant AI capabilities with the acquisition of Reality AI last year. Reality AI’s tool can help provide explainability for AI models. The tool performs automated feature extraction from sensor data, then shows the designer what those features are and how they correlate to the prediction.

Nalin Balan, head of sales at Reality AI, gave the example of an unbalanced load in a dryer drum and the conditions this created in the motor. Reality’s tool shows which feature—in this case, a frequency feature—correlates to the prediction of an unbalanced load. The designer can then use their physics knowledge to understand why an unbalanced load might correspond to that frequency.
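As a rough sketch of the frequency-feature-to-prediction correlation Balan describes, the example below extracts FFT band energies from a synthetic motor vibration signal and reports how strongly each band correlates with an "unbalanced load" label. The synthetic signal, the band choices and the use of numpy are all assumptions for illustration; this is not Reality AI's pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, n = 1000, 1024  # sample rate (Hz) and samples per window

def vibration_window(unbalanced: bool) -> np.ndarray:
    """Synthetic motor vibration: an unbalanced drum adds a 25 Hz component."""
    t = np.arange(n) / fs
    signal = 0.2 * rng.standard_normal(n)
    if unbalanced:
        signal = signal + np.sin(2 * np.pi * 25 * t)
    return signal

def band_energies(x: np.ndarray, bands=((0, 50), (50, 150), (150, 400))) -> list:
    """Feature vector: spectral energy in a few frequency bands."""
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    return [spectrum[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in bands]

labels = rng.integers(0, 2, size=200)  # 1 = unbalanced load in this window
features = np.array([band_energies(vibration_window(bool(y))) for y in labels])

# Which band's energy correlates most strongly with the unbalanced-load label?
corrs = [np.corrcoef(features[:, i], labels)[0, 1] for i in range(features.shape[1])]
print("band correlations with label:", np.round(corrs, 2))
```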

Renesas/Reality AI’s tool offers automatic feature extraction and can provide some explainability. (Source: EE Times)

In an automotive example, this tech might be used to monitor motors in braking or steering systems to look for fault conditions, Balan said. The tool also applies to audio processing applications like See With Sound—a proprietary AI that can detect pedestrians or cyclists near a car from the sounds they make. In this case, a variety of features can be used—from bicycle tire sounds to footsteps—but Reality’s tool can tell you exactly what the AI is listening for.

“We can tell the R&D team what features we’re picking up in the targets that allow us to detect them,” Balan said. “For a vehicle, it might be a combination of features—perhaps engine noise and tire noise. But we can reliably show you what features we have extracted in the data that correlate to that prediction.”

While this level of explainability may not help certification in a safety critical application, it may have an indirect effect—giving the designer confidence that they have some insight into how the AI arrives at its answer, thereby helping to open the black box, even just a tiny bit.




