The automotive industry is racing to deliver highly anticipated autonomous vehicles, and companies are exploring many different paths to SAE Level 5 (L5) autonomy. However, L3 autonomous driving has posed significant challenges on the way to L5 due to the complexity of combining automation technology with human involvement. Developing an L3 vehicle requires sophisticated hardware, software, algorithms and an immense amount of data. All of this needs to work together seamlessly to ensure that the electronic control units (ECUs) can make decisions and execute them in the instant it takes for a vehicle to slam on its brakes or for a child to wander into the street.
The automotive industry is actively working on fail-operational safety architectures for systems, including the powertrain, across the Advanced Driver Assistance System (ADAS) and Autonomous Driving (AD) processing chain – from sensors to perception and decision algorithms. The approach is reminiscent of what occurred in the airline industry when the first studies on full-authority fly-by-wire flight control systems were conducted. That work led to architectures incorporating hardware and process redundancy, real-time fault detection, fault masking and advanced reconfiguration to sustain normal operation after a fault. The underlying assumptions for fail-operational architectures include partial fundamental redundancy through graceful degradation, detached sensors and actuators, communications via separate paths, synchronised compute platforms and dissimilar powertrain platforms.
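To make these principles concrete, the sketch below shows how redundancy, fault detection and reconfiguration might interact in a duplex compute channel with self-diagnostics. It is a minimal illustration under assumed names and behaviour (Channel, compute, self_test), not a reference to any production architecture.

```python
# Minimal sketch of a duplex fail-operational step: two redundant channels,
# fault detection by cross-comparison, per-channel self-tests to isolate
# the faulty lane, and reconfiguration onto the healthy one.
# All names and thresholds are illustrative assumptions.

from dataclasses import dataclass
from typing import Optional


@dataclass
class Channel:
    name: str
    healthy: bool = True

    def compute(self, sensor_input: float) -> float:
        # Stand-in for a full sensor -> perception -> decision pipeline.
        return 0.5 * sensor_input

    def self_test(self) -> bool:
        # Stand-in for built-in diagnostics (memory checks, watchdogs, ...).
        return self.healthy


def fail_operational_step(primary: Channel, standby: Channel,
                          sensor_input: float,
                          tolerance: float = 1e-3) -> Optional[float]:
    """Return an actuator command, masking a single channel fault.

    Returns None when no trustworthy output exists, signalling the caller
    to initiate a safe-state transition (e.g. a controlled stop).
    """
    a, b = primary.compute(sensor_input), standby.compute(sensor_input)
    if abs(a - b) <= tolerance:
        return a  # channels agree: normal operation
    # Disagreement: use self-tests to isolate the faulty channel.
    if primary.self_test():
        standby.healthy = False
        return a
    if standby.self_test():
        primary.healthy = False
        return b  # reconfigured: the standby channel sustains operation
    return None  # both suspect: degrade gracefully to a safe state
```

In a real system the comparison and switchover would run on independent hardware within a bounded time; the sketch conveys only the control flow.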
The automotive industry has long recognised that the traditional electrical/electronic (E/E) architecture is too domain-centric and legacy-heavy. This has likely delayed original timelines for achieving SAE L4 autonomy, and it is compounded by the electrification of the powertrain, which introduces additional safety concerns. The significantly higher compute capacity implied by Automotive Safety Integrity Level D (ASIL-D) systems for autonomous driving, the highest level of risk classification, has led to the development of more efficient multicore ECUs. These should help reduce the ECU count per domain and thus overall complexity.
Early designs suggested a simplified vehicle model based on Adaptive AUTOSAR, which leverages commercial off-the-shelf in-vehicle infotainment, Classic AUTOSAR and 5G/IP communication, with a high degree of coordination between ECUs, one-out-of-two (1oo2) fail-operational systems and metrics such as a maximum fault-handling interval used to determine when a safe-state transition is required. More recent efforts explore two-out-of-three (2oo3) fail-operational systems of multicore ECUs. This design is widely used in avionics and consists of three fully independent redundant lanes from sensor to actuator.
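A two-out-of-three arrangement reduces to a simple voting rule: accept an actuator command only when at least two independent lanes agree, and fall back to a safe state otherwise. Below is a minimal sketch of such a voter; the tolerance, the fault-handling interval and the safe_state() callback are assumptions for illustration, not values from any specific design or standard.

```python
# Minimal sketch of two-out-of-three (2oo3) voting across three
# independent lanes, bounded by a maximum fault-handling interval.
# Names, tolerance and timing values are illustrative assumptions.

import time
from typing import Callable, Optional

MAX_FAULT_HANDLING_INTERVAL_S = 0.050  # assumed bound, not a standard value


def vote_2oo3(a: float, b: float, c: float,
              tol: float = 1e-3) -> Optional[float]:
    """Return a command when at least two lanes agree, else None."""
    for x, y in ((a, b), (a, c), (b, c)):
        if abs(x - y) <= tol:
            return (x + y) / 2.0  # an agreeing pair out-votes the third lane
    return None  # no majority: the fault cannot be masked


def actuate(lanes: tuple, safe_state: Callable[[], None]) -> Optional[float]:
    """Vote within the fault-handling interval or transition to safe state."""
    start = time.monotonic()
    command = vote_2oo3(*lanes)
    elapsed = time.monotonic() - start
    if command is None or elapsed > MAX_FAULT_HANDLING_INTERVAL_S:
        safe_state()  # e.g. hand back to the driver or perform a minimal-risk stop
        return None
    return command
```

The appeal of 2oo3 over 1oo2 is that a single faulty lane is out-voted rather than merely detected, so the system can continue operating instead of immediately degrading.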
Even with several fail-safes in place, L3 driving poses specific safety risks that are inevitable when a system relies on both automation and human supervision. The “irony of automation” refers to the fact that an automated system requiring supervision can retain the attention of a human operator, while a system with greater autonomy will eventually lose that attention, making re-entry into the supervision loop much harder. The recent Boeing 737 MAX MCAS situation has added to the ongoing discussion about the balance between the fully automated functions of a fail-operational system and reliance on the driver or pilot to deal with emergencies, something pilots are trained for specifically but which is not part of current driver’s license curricula. While flying a plane may seem more complex than driving a car, it is easier to achieve a self-flying airplane than a self-driving car. A car faces far more unpredictable events, including pedestrians, traffic, animal crossings and countless other interactions that drivers need to be aware of. An L3 system cannot be expected to make all of those decisions and self-correct accordingly without human supervision.
The development of machine learning (ML) algorithms able to meet collision-avoidance safety goals across the ADAS/AD processing chain will require continuous improvement. Preferably, a vehicle is equipped with the requisite connectivity so that the ML algorithms can be upgraded after the vehicle is sold. Rich, diverse and curated training data and efficient algorithm-tuning workflows will be necessary to ensure the system performs correctly, and they provide an opportunity to keep improving the algorithms over time so the vehicle makes better decisions.
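One way to picture such a workflow is a promotion gate: a retrained candidate model is cleared for over-the-air deployment only if it meets a safety-relevant metric on curated validation data and beats the model currently in the field. The sketch below is hypothetical; promote_if_better, evaluate and the recall threshold are assumed names and values, not any vendor’s API.

```python
# Minimal sketch of a deployment gate in an ML algorithm-tuning workflow.
# promote_if_better, evaluate and min_recall are illustrative assumptions.

from typing import Any, Callable, Sequence


def promote_if_better(candidate: Any, current: Any,
                      validation_set: Sequence[Any],
                      evaluate: Callable[[Any, Sequence[Any]], float],
                      min_recall: float = 0.999) -> Any:
    """Promote the candidate only if it clears a safety-relevant bar.

    `evaluate` is assumed to return a metric such as obstacle-detection
    recall on a curated, held-out validation set.
    """
    cand_score = evaluate(candidate, validation_set)
    curr_score = evaluate(current, validation_set)
    if cand_score >= min_recall and cand_score > curr_score:
        return candidate  # eligible for a staged over-the-air rollout
    return current        # keep the proven model in the field
```

Gating on an absolute safety threshold as well as a relative improvement prevents a regression from reaching the fleet even when the candidate looks better on average.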
With clever data engineering and data science to manage and optimise the ML pipelines that train autonomous driving algorithms, automotive companies can deliver L3 driving while laying the groundwork for higher levels of autonomy. It will require constant innovation, well-matched combinations of sensors, and transparency with consumers about the vehicle’s capability to process data and the driver’s need to remain vigilant and prepared to jump back into the supervision loop.
The opinions expressed here are those of the author and do not necessarily reflect the positions of Automotive World Ltd.
Jean-Paul de Vooght is Senior Director, Client Solutions at Ness Digital Engineering
The Automotive World Comment column is open to automotive industry decision makers and influencers. If you would like to contribute a Comment article, please contact editorial@automotiveworld.com