Any visitor to CES in Las Vegas or the North American International Auto Show (NAIAS) in Detroit in January 2018 would have noticed that the terms artificial intelligence (AI) and deep learning have been convincingly embraced within the lexicon of the automotive industry.
It is perhaps useful to take a Janus-like view of this trend, looking back at its historical evolution while anticipating its future role as the automotive industry prepares for an epochal transformation.
In many ways, the role for a ‘brain’ within the automobile gained momentum in the 1970s, as the automotive industry found itself facing challenges on three fronts: safety, fuel efficiency and emissions. In each case, it appeared that traditional mechanical systems needed to be aided with contextual information and with it, the ability to modulate or adapt their response. For example, the steel safety structure was, by itself, inadequate to achieve the desired levels of occupant safety. This safety cell needed to be augmented by airbags whose manner of deployment needed to be tailored, based on occupants and the dynamics of the crash. Similarly, the pursuit of fuel efficiency gains and emission reduction, to meet newly mandated norms, led to the conclusion that carburation and fuel management required more precise dynamic control, based on engine operating conditions, load, and transient torque demand.
These functions were enabled by a combination of electronic processors and software. Keeping pace with advances in electronics, these systems became more accurate, employing more memory and faster processors. If one were to use the number of lines of software code as a surrogate measure for system capability, purely for the sake of sizing the problem, between 1970 and 1990 the number of lines of software code in a typical luxury car grew ten-fold, from about 100,000 lines to about a million. As a point of reference, the International Space Station operates with a similar quantum of software code.
In many ways, this era triggered the trend for the 1990s, as the role for ‘intelligence’ on board vehicles was engaged to serve many functions. Increased use of electronics in the automobile was typically aimed at four main aspects of driving: automating tasks that were considered chores, such as shifting gears or parallel parking in crowded city streets; improving the driving experience and reducing stress, such as using adaptive cruise control to maintain safe following distances; reducing maintenance cost with cars that monitored themselves and could delay the next routine service; and expanding the capabilities of the driver with features like the electronic stability program (ESP). The parallel adoption of electronics in many interior systems also meant that on-board intelligence was employed for functions such as climate control, entertainment, navigation and communication. A modern-day luxury sedan, like the current-generation Mercedes-Benz S-Class, incorporates almost all of the above features and then some. It requires over 100 processors or controllers and 200 million lines of software code to keep the car operating as intended.
The current frontiers of technology focus on three capabilities – autonomous driving, connectivity to other vehicles and infrastructure, and electrified drivetrains. Collectively, these systems have placed huge demands on AI and deep learning. Autonomous driving, often characterised by the hierarchical levels of autonomy defined by SAE, demands that the sequence of sensing, perception, cognition, judgement and action be managed in all sorts of driving and traffic conditions.
Machines have already surpassed human beings in their sensory capabilities, as we have seen with LiDAR, cameras and accelerometers. Similarly, their ability to manage subtle, precise and timely action surpasses human limits – most modern fighter aircraft are dynamically unstable by design, and cannot be ‘hand-flown’ even by expert pilots, except in limited portions of their flight envelope. However, the sequence of perception, cognition and judgement presents huge challenges for machine intelligence. On-board systems in the car must plan the path to traverse in the given conditions, follow the rules of the road, and respond to movements by pedestrians and other vehicles in close proximity. This domain remains the biggest challenge today, as technology advances to offer Level 5 fully autonomous driving capabilities. To ease this task and allow machines to manage these complex aspects of driving, they are aided by vast quanta of data. It is useful to consider how much easier it is to drive in well-known surroundings than through a new locality where the streets and traffic patterns are unfamiliar. In the latter situation, one is forced to rely on sensory inputs and rapidly adjust one’s actions while driving – a more challenging task. Detailed and frequently updated maps and traffic information, often aggregated from numerous other road users, render much of the driving environment ‘known’ even before the car encounters that area. With this, the path planning task of the autonomous car is executed with fewer uncertainties.
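To make this concrete, the toy sketch below (illustrative only; the grid, cost weights and function names are assumptions, not any production planner) shows how a prior map that marks a blocked road segment lets a simple A* planner route around the obstruction before the vehicle ever senses it:

```python
import heapq

def astar(grid, start, goal):
    """Minimal A* path planner on a 2-D occupancy grid.

    grid[r][c] == 1 marks a cell known in advance to be blocked
    (e.g. a closed road taken from a frequently updated map); 0 is free.
    Returns the list of cells from start to goal, or None if no path exists.
    """
    rows, cols = len(grid), len(grid[0])

    def heuristic(a, b):  # Manhattan distance to the goal
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    open_set = [(heuristic(start, goal), 0, start, [start])]
    visited = set()
    while open_set:
        _, cost, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nxt = (nr, nc)
                heapq.heappush(open_set,
                               (cost + 1 + heuristic(nxt, goal),
                                cost + 1, nxt, path + [nxt]))
    return None

# The cell marked 1 is known to be blocked before the journey starts,
# so the planner routes around it rather than reacting to it on the road.
city_grid = [[0, 0, 0],
             [0, 1, 0],
             [0, 0, 0]]
print(astar(city_grid, (0, 0), (2, 2)))
```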
The roadmap to autonomous driving faces a critical fork in the road. Some of the protagonists are willing to accept current limitations in sensing and AI (as with Level 3 autonomy), and allow for some situations where the car’s ‘brain’ can be overwhelmed by the complexities of the driving condition, such as poor visibility or ambiguous traffic patterns. In such circumstances, the machine is expected to hand over control to the human driver. Others, such as Google and Cruise, are wary of the danger inherent in such a scenario. The human driver, who has been lulled into a state of relaxation while the car is driven in autonomous mode, may be ill-prepared to hurriedly assume control of the car. While the former group addresses a less complex problem and assumes some of the risk, the latter group is compelled to tackle the far more complex challenge of preparing the car to handle all sorts of conditions and operate fully independent of the driver. This latter approach will likely find initial use in contained and well-regulated traffic environments. Singapore’s effort to pilot such mobility within an industrial park is one example.
Even as these challenges are being mastered, the next horizon of challenge for AI and deep learning is emerging. Across the globe, the effects of urbanisation are causing population densification in cities. Serving the mobility needs of dense populations, while avoiding high economic and social costs related to congestion in city streets, is a topic that sits high on the list of most city administrators. Many cities have embarked on policies aimed at reducing vehicular traffic and encouraging ride-sharing or use of mass transit.
Operators such as Via and Chariot now offer ride-share services in many global cities with conventional 8-12 seat vans and human drivers. Planning the journey for each vehicle as it seeks to serve multiple users – optimising routes, pick-up and drop-off points, while adjusting for dynamic traffic conditions and user convenience – also demands the use of AI and deep learning tools. These operators are finding that, for each rider, they may need to compute over 10,000 scenarios before they can match the prospective rider to a specific vehicle and a specific route. As one seeks to personalise such travel requests based on user profiles and preferences, the task becomes even more complicated.
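As a simplified illustration of that matching step – with hypothetical vehicles, wait times and detour figures, not any operator’s actual algorithm – one might score every candidate (vehicle, insertion point) scenario and keep the cheapest:

```python
# Hypothetical candidate data: each vehicle has an estimated pick-up time (min)
# and the extra detour (min) that each insertion point in its route would impose
# on riders already on board. Figures are invented for illustration.
vehicles = {
    "van_A": {"pickup_eta": 4, "detours": [1.5, 3.0, 6.0]},
    "van_B": {"pickup_eta": 9, "detours": [0.5, 2.0]},
}

def score(pickup_eta, detour, wait_weight=1.0, detour_weight=2.0):
    # Lower is better: penalise the new rider's wait and, more heavily,
    # the inconvenience imposed on riders already in the vehicle.
    return wait_weight * pickup_eta + detour_weight * detour

# Enumerate every (vehicle, insertion point) scenario and pick the best one.
candidates = [
    (score(v["pickup_eta"], d), name, slot)
    for name, v in vehicles.items()
    for slot, d in enumerate(v["detours"])
]
best_score, best_vehicle, best_slot = min(candidates)
print(best_vehicle, best_slot, best_score)
```

A production system would enumerate thousands of such scenarios per request, fold in traffic forecasts and rider preferences, and re-optimise continuously as conditions change.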
Many experts predict that the truly impactful transformation with autonomous driving will only be realised when these autonomous cars become part of shared-use fleets in urban environments. It is telling that both Volkswagen’s and Toyota’s recent auto show exhibits featured autonomous cars intended for shared rides in city fleets. To make these solutions work, the industry must not only solve the autonomous drive mode for the vehicle, but also manage how each journey is shared and used by multiple commuters. AI and deep learning will find a critical role in managing both vehicle and shared fleet operations.
A further horizon is also now coming into view. In many global cities, administrators are investing in sensors and data to render their cities ‘smart’ – able to mimic living organisms in managing health, safety, sanitation and mobility. These cities are expected to stream data related to traffic lights, parking availability, road congestion, and increasingly dynamic road-use pricing, thereby expanding the range of relevant information that will be processed within the context of each journey.
Going forward, as vehicular technologies evolve with wider adoption of vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication, one may expect city infrastructure to dynamically interoperate with vehicles. The role played by these technologies will be further amplified as they are also employed in logistics fleets, which make up a growing fraction of city traffic. We may not be far away from the day when users and their vehicles dynamically negotiate with cities and city infrastructure about routes, parking, and user fees to select the best options.
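A minimal sketch of such a negotiation – under the assumption that the city streams travel times, dynamic road-use fees and parking data for each candidate route, with option names and figures invented purely for illustration – might rank options by a generalised cost:

```python
# Illustrative option data a city might stream over V2I; not a real API.
options = [
    {"route": "ring_road", "travel_min": 28, "road_fee": 2.50,
     "parking_fee": 4.00, "parking_available": True},
    {"route": "city_centre", "travel_min": 19, "road_fee": 6.00,
     "parking_fee": 7.50, "parking_available": True},
    {"route": "transit_hub", "travel_min": 24, "road_fee": 1.00,
     "parking_fee": 2.00, "parking_available": False},
]

def generalised_cost(opt, value_of_time_per_min=0.30):
    # Convert travel time to money and add dynamic road-use and parking fees;
    # options with no parking available are ruled out.
    if not opt["parking_available"]:
        return float("inf")
    return (value_of_time_per_min * opt["travel_min"]
            + opt["road_fee"] + opt["parking_fee"])

best = min(options, key=generalised_cost)
print(best["route"], round(generalised_cost(best), 2))
```

The weighting of time against fees would in practice reflect each user’s stated preferences, and the inputs would be refreshed continuously as the city updates its pricing and availability data.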
As data and information play a rapidly growing role in mobility, AI and deep learning will become indispensable for the operation of our future mobility architecture.
The authors have recently produced a book, "Faster, Smarter, Greener: The Future of the Car and Urban Mobility", published by the MIT Press.
This article appeared in the Q1 2018 issue of Automotive Megatrends Magazine.