EU agrees to landmark AI rules as governments aim to regulate products like ChatGPT

Technology

A photo taken on November 23, 2023 shows the logo of the ChatGPT application developed by US artificial intelligence research organization OpenAI on a smartphone screen (left) and the letters AI on a laptop screen in Frankfurt am Main, western Germany.
Kirill Kudryavtsev | AFP | Getty Images

The European Union on Friday agreed to landmark rules for artificial intelligence, in what’s likely to become the first major regulation governing the emerging technology in the Western world.

Major EU institutions spent the week hashing out proposals in an effort to reach an agreement. Sticking points included how to regulate generative AI models, used to create tools like ChatGPT, and use of biometric identification tools, such as facial recognition and fingerprint scanning.

Germany, France and Italy have opposed directly regulating generative AI models, known as “foundation models,” instead favoring self-regulation from the companies behind them through government-introduced codes of conduct.

Their concern is that excessive regulation could stifle Europe’s ability to compete with Chinese and American tech leaders. Germany and France are home to some of Europe’s most promising AI startups, including DeepL and Mistral AI.

The EU AI Act is the first of its kind specifically targeting AI and follows years of European efforts to regulate the technology. The law traces its origins to 2021, when the European Commission first proposed a common regulatory and legal framework for AI.

The law divides AI into categories of risk from “unacceptable” — meaning technologies that must be banned — to high, medium and low-risk forms of AI.

Generative AI became a mainstream topic late last year following the public release of OpenAI’s ChatGPT. Because that release came after the initial 2021 EU proposals, it pushed lawmakers to rethink their approach.

ChatGPT and other generative AI tools like Stable Diffusion, Google’s Bard and Anthropic’s Claude blindsided AI experts and regulators with their ability to generate sophisticated and humanlike output from simple queries using vast quantities of data. They’ve sparked criticism due to concerns over their potential to displace jobs, generate discriminatory language and infringe on privacy.

