One-size-fits-all won’t work for AI regulations

Artificial intelligence has applications across numerous industries, but guidelines should be sector-specific. By Megan Lampinen

Artificial intelligence (AI) is emerging as a central pillar within the software-defined vehicle (SDV). Not only is it helping engineers design and validate smart and connected vehicles, but it also underpins a growing number of automated driving systems. AI’s role will only expand as these trends accelerate. But where are the regulations?

Broadly speaking, regulation has lagged behind technology deployments, and definitive AI rules remain limited today. Much of the framework now emerging, such as the EU AI Act, is not targeted at automotive specifically. That could be problematic: rules that fail to account for use-case specifics could stymie innovation and delay implementation.

“As policies are developed, it is important that there is a separation between the discussions for transportation use cases and other sectors,” cautions Earl Adams Jr, a partner at global law firm Hogan Lovells and former Deputy Administrator and Chief Counsel at the US Department of Transportation’s (DoT) Federal Motor Carrier Safety Administration (FMCSA). “Otherwise, automotive players might find themselves having to answer to issues that are just not relevant to how they’re using AI.”

Extra rules and requirements

At the FMCSA, Adams was part of the team that set out to create the first set of ‘guard rails’ for self-driving trucks on public highways, a typical example of AI at work. But that sort of AI application is distinct from what is deployed in other sectors. In automated driving, algorithms gather environmental data and react to what is happening on the road. “The AI system is agnostic to the ‘who’ of the situation and only interested in the ‘what’,” says Adams. “It is just simply reacting to behaviour, meaning how to respond when another car cuts in front.”

That’s very different from an AI system predicting mortgage rates based on an individual’s personal characteristics. Both harness generative AI (GenAI) to make predictions, but only the latter raises issues around discrimination and privacy. “The trouble is that the emerging guidance lumps all these things in together with a one-size-fits-all,” he tells Automotive World. “AI has become a buzzword, a marketing piece. In many industries it makes perfect sense for the government to be sceptical and want to ensure it is trustworthy, because in some respects we are in the Wild West days of the AI evolution.”

But automotive, he suggests, is more evolved in its applications. The transportation sector has long used large data models for predictive analytics, giving it a head start over many other sectors. In light of that maturity, it merits its own guidelines.

Looking ahead, Adams believes that AI guidelines will need to be sector-specific: “If the ultimate goal is that regulations create parameters, then they have to be specific to a given industry. In a world where there is such specialisation, deploying a one-size-fits-all runs the risk of limiting deployment. It means applying extra rules and requirements that are not applicable, which creates inefficiencies.”

Partner, educate, communicate

Adams isn’t the only voice raising these concerns. In May 2024, the DoT’s Advanced Research Projects Agency-Infrastructure (ARPA-I) put out a Request for Information (RFI) seeking input on the use of AI within transportation. The stated purpose is “to obtain input from a broad array of stakeholders on AI opportunities, challenges and related issues in transportation pursuant to Executive Order (E.O.) 14110 of October 30, 2023 entitled ‘Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence’.”

For Adams, this is “an acknowledgement that AI in transportation is different than AI in banking and retail.” Today he advises automotive clients, many of which are autonomous vehicle developers, on regulatory compliance and interpretation, helping them prepare for what upcoming regulations may require. His advice to them, and to everyone active in this space, boils down to three key steps: partner, educate, and communicate regularly.

“AI players need to be working with research organisations and centres of excellence,” he elaborates. “The developers have a lot of data, but the policymakers don’t. As much as possible, they need to share that data in order to educate the policymakers and the public.” Adams cautions that public acceptance could prove one of the biggest obstacles to both AI and the functionality it enables, such as autonomous driving.

As for communicating with policymakers, failure to do so could have significant consequences for developers. “If you take a wait-and-see approach, someone else will write that story for you, and you will not have the ability to control how regulators and policymakers are looking at these issues,” he warns.

There are promising signs emerging in terms of recognising industry-specific guidelines. The DoT organised its first AI Assurance workshop in November 2023 to explore “the challenges, opportunities, risks, and priorities for AI assurance” within transportation. Additional meetings are taking place over the course of 2024. Drawing on the roadmap used by the aviation industry, workshop targets include developing an initial AI and data assurance framework, defining common terminology, identifying opportunities for intermodal knowledge sharing, and developing AI assurance coordination plans. It’s still early days, but as Adams points out, “At least a conversation is happening, and the DoT is beginning to ask itself important questions.”
