
Act One: Opposition Takes Center Stage Against EU AI Legislation

Despite their public calls for regulation, major AI players are pushing back against the first real effort to impose it, working to temper the European Union’s draft Artificial Intelligence Act.

A June open letter, signed by over 160 executives from companies ranging from Renault to Meta, expressed concerns about the proposed act, arguing that it would “jeopardize Europe's competitiveness and technological sovereignty without effectively tackling the challenges we are and will be facing.”

Meanwhile, influential players in the AI industry are working behind the scenes to soften elements of the act. OpenAI CEO Sam Altman, for example, has publicly advocated for global AI regulation, yet he provided feedback on earlier drafts of the act, some of which was incorporated, including his argument that general-purpose AI systems, like GPT-3 and Dall-E 2, should not be designated “high risk.” That stance aligns OpenAI with tech giants like Microsoft and Google, all of which are advocating for a lighter regulatory burden on large AI providers.

OpenAI declined to comment.

The AI Act has been in development for years, and only recently did European legislators turn their attention to generative AI in large language models like GPT-4. The European Parliament has therefore largely limited its focus on generative AI to bias and copyright, issues it was already grappling with, rather than tackling the greater societal threats of AI agency and autonomy.

“The act is trying to be a very dynamic piece of legislation because we are quite aware of the fact that artificial intelligence is a set of technologies developing at an astonishing pace,” said Alexandra Geese, a German member of the EU Parliament. “So, what we’re trying to do is to introduce different categories and look at applications of artificial intelligence rather than the technology itself.”

The act introduces a “product safety framework” built around four risk categories, imposing strict requirements for market entry and certification of high-risk AI systems through a mandatory “Conformité Européenne” (CE) procedure. The CE certification indicates that the manufacturer takes responsibility for the compliance of a product marketed within the EU.

High-risk AI systems are those covered by any of the 19 EU regulations that harmonize standards for certain products across the market, or those deployed in any of the following high-risk verticals:

- biometric identification and categorization;
- critical infrastructure where AI could put people’s life and health at risk;
- education and vocational training, where a system could determine access to education or professional training;
- employment, worker management and access to self-employment;
- access to essential private and public services, including financial services such as credit-scoring systems;
- law enforcement;
- migration, asylum and border control, including verifying the authenticity of travel documents;
- the administration of justice and democratic processes.

However, with so-called foundation models like LLMs already trained on vast amounts of data, confusion remains over how to handle copyright and bias problems embedded in that training data. Just as a person cannot “unread” a book, foundation models and other generative AI systems cannot “forget” their training data.

“It’s very difficult to put the responsibility on companies using AI tools that are based on foundation models, and then tell them, well, you need to get rid of the bias,” said Geese. “I mean, how are they supposed to do that if it’s in the foundation?”

Geese said existing LLMs, trained at a cost of hundreds of millions of dollars, will likely be grandfathered in rather than their creators being asked to start from scratch. But providers of foundation models would be liable for harm caused by products built on their models if the provider knew, or should have known, that the model was biased or infringed copyright.

Foundation models are being used to develop facial recognition systems, for example, which could be used to discriminate against people based on their race or ethnicity. In such cases, the act seeks to ensure that the providers of the underlying models are held accountable for the harm those models cause.

The draft AI Act proposes, however, that the provider of a foundation model must take reasonable steps to mitigate the risks of bias and copyright infringement, meaning a provider that took all reasonable steps would not be liable even if harm still occurred.

"What exactly that means for copyright, I can't see at the moment," Geese said.

Timothy Martin, executive vice president of product and development at Yseop, a company that develops artificial intelligence software to automate complex business processes, said companies with proprietary foundation models may have to reveal more about their training data than they have to date. “The regulation is saying, ‘you need to be more transparent about what you’re doing,’” he said. “If you don’t tell us anything, we have no way to evaluate the impact on citizens.”

Meanwhile, penalties for breaches of the AI Act are still under debate, and some companies may simply accept fines as a cost of doing business in the EU.

The EU’s approach to AI regulation has caught the attention of other global players. China has swiftly turned proposals into rules, while countries like Brazil, Canada and the United States are following suit, suggesting the EU’s AI Act could indeed set the global standard. The legislation is particularly noteworthy for its tiered, risk-based structure, which imposes different levels of scrutiny on, or outright bans, certain applications.

"The proposed EU legislation provides an example worth emulating in the U.S. and the rest of the world," said Paul Barrett, deputy director of the NYU Stern Center for Business and Human Rights and author of the recently published Safeguarding AI: Addressing the Risks of Generative Artificial Intelligence. "Rather than an obstacle to innovation, well designed regulation can provide an incentive to industry to design products that are both safe and commercially valuable," he noted, citing precedents from the automobile and energy industries.

The final impact of the EU AI Act remains to be seen. Its success will depend largely on how effectively it balances fostering innovation with ensuring safety. As the world watches, the act’s evolution and execution may become the template for a new era of AI regulation.

"You can regulate AI without stifling innovation,” argued Geese. “I’m optimistic.”
