
With new legislation, Europe is leading the world in the push to regulate AI

Authorities worldwide are racing to rein in artificial intelligence such as ChatGPT. (Richard Drew / Associated Press)

Lawmakers in Europe signed off Wednesday on the world’s first set of comprehensive rules for artificial intelligence, clearing a key hurdle as authorities across the globe race to rein in AI.

The European Parliament vote is one of the last steps before the rules become law; the measure could serve as a model for other jurisdictions working on similar regulations.

For the record:

3:27 a.m. June 14, 2023: A previous version of this story misspelled Kris Shrishak’s last name as Shashak.

A years-long effort by Brussels to draw up guardrails for AI has taken on more urgency as rapid advances in chatbots such as ChatGPT show the benefits the emerging technology can bring — and the new perils it poses.


The legislation approved by European Parliament lawmakers includes controversial amendments on facial recognition.

Here’s a look at the EU’s Artificial Intelligence Act.

How do the rules work?

The measure, first proposed in 2021, will govern any product or service that uses an artificial intelligence system. The act will classify AI systems according to four levels of risk, from minimal to unacceptable.

Riskier applications, such as for hiring or tech targeted at children, will face tougher requirements, including being more transparent and using accurate data.


Violations will draw fines of up to $33 million or 6% of a company’s annual global revenue, which in the case of tech companies such as Google and Microsoft could amount to billions.

It will be up to the European Union’s 27 member states to enforce the rules.

What are the risks?

One of the EU’s main goals is to guard against any AI threats to health and safety and to protect fundamental rights and values.

That means some AI uses are an absolute no-no, such as “social scoring” systems that judge people based on their behavior.


Also forbidden is AI that exploits vulnerable people, including children, or uses subliminal manipulation that can result in harm — for example, an interactive talking toy that encourages dangerous behavior.

Predictive policing tools, which crunch data to forecast who will commit crimes, are also out.


Lawmakers beefed up the original proposal from the European Commission, the EU’s executive branch, by widening the ban on remote facial recognition and biometric identification in public. The technology scans passersby and uses AI to match their faces or other physical traits to a database.

But it faces a last-minute challenge after a center-right party added an amendment allowing law enforcement exceptions such as finding missing children, identifying suspects involved in serious crimes or preventing terrorist threats.

“We don’t want mass surveillance, we don’t want social scoring, we don’t want predictive policing in the European Union, full stop. That’s what China does, not us,” Dragos Tudorache, a Romanian member of the European Parliament who is co-leading its work on the AI Act, said Tuesday.

AI systems used in categories such as employment and education, which would affect the course of a person’s life, face tough requirements, such as being transparent with users and taking steps to assess and reduce risks of bias from algorithms.


Most AI systems, such as video games or spam filters, fall into the low- or no-risk category, the commission says.

What about ChatGPT?

The original measure barely mentioned chatbots, mainly by requiring them to be labeled so that users know they’re interacting with a machine. Negotiators later added provisions to cover general-purpose AI such as ChatGPT after it exploded in popularity, subjecting that technology to some of the same requirements as high-risk systems.


One key addition is a requirement to thoroughly document any copyrighted material used to teach AI systems how to generate text, images, video and music that resemble human work.

That would let content creators know if their blog posts, digital books, scientific articles or songs have been used to train algorithms that power systems such as ChatGPT. Then they could decide whether their work has been copied and seek redress.

Why are the EU rules so important?

The European Union isn’t a big player in cutting-edge AI development. That role is taken by the U.S. and China. But Brussels often plays a trend-setting role with regulations that tend to become de facto global standards and has become a pioneer in efforts to target the power of large tech companies.

The sheer size of the EU’s single market, with 450 million consumers, makes it more efficient for companies to comply than to develop different products for different regions, experts say.


But it’s not just a crackdown. By laying down common rules for AI, Brussels is also trying to develop the market by instilling confidence among users.

“The fact this is regulation that can be enforced and companies will be held liable is significant” because other places such as the U.S., Singapore and Britain have merely offered “guidance and recommendations,” said Kris Shrishak, a technologist and senior fellow at the Irish Council for Civil Liberties.


“Other countries might want to adapt and copy” the EU rules, he said.

Businesses and industry groups warn that Europe needs to strike the right balance.

“The EU is set to become a leader in regulating artificial intelligence, but whether it will lead on AI innovation still remains to be seen,” said Boniface de Champris, a policy manager for the Computer and Communications Industry Assn., a lobbying group for tech companies.

“Europe’s new AI rules need to effectively address clearly defined risks, while leaving enough flexibility for developers to deliver useful AI applications to the benefit of all Europeans,” he said.

Sam Altman, chief executive of ChatGPT maker OpenAI, has voiced support for some guardrails on AI and signed on with other tech executives to a warning about the risks it poses to humankind. But he also has said it’s “a mistake to go put heavy regulation on the field right now.”

Others are playing catch-up. Britain, which left the EU in 2020, is jockeying for a position in AI leadership. Prime Minister Rishi Sunak plans to host a world summit on AI safety this fall.



“I want to make the U.K. not just the intellectual home but the geographical home of global AI safety regulation,” Sunak said at a tech conference this week.

Britain’s summit will bring together people from “academia, business and governments from around the world” to work on “a multilateral framework,” he said.

What’s next?

It could be years before the rules fully take effect. The vote will be followed by three-way negotiations involving member countries, the European Parliament and the European Commission, and the legislation possibly faces more changes as those bodies try to agree on the wording.

Final approval is expected by the end of this year, followed by a grace period for companies and organizations to adapt, often around two years.

Brando Benifei, an Italian member of the European Parliament who is co-leading its work on the AI Act, said supporters would push for quicker adoption of the rules for fast-evolving technologies like generative AI.

To fill the gap before the legislation takes effect, Europe and the U.S. are drawing up a voluntary code of conduct that officials promised at the end of May would be drafted within weeks and could be expanded to other “like-minded countries.”
