What's included in the new EU law on AI

Breaking the EU's new rules on AI can be costly -- fines for any violations range between 7.5 and 35 million euros. Photo: PAU BARRENA / AFP/File
Source: AFP

The European Parliament on Wednesday gave the final nod to far-reaching rules on artificial intelligence that the EU hopes will both harness innovation and defend against harms.

The law, known as the "AI Act", was first proposed in April 2021 by the European Commission, the EU's executive arm.

But it was only after OpenAI's Microsoft-backed ChatGPT burst onto the scene in late 2022 that the real AI contest began -- and with it, the race to regulate.

China and the United States introduced AI regulations last year, but the European Union's law is the most comprehensive.

The EU will take a staggered approach to applying the law.

Outright bans on forms of AI considered highest-risk will kick in later this year, while rules on systems like ChatGPT will apply 12 months after the law enters into force, and the rest of the provisions in 2026.


AI models

As EU negotiators debated the text, tensions inside the talks and lobbying from outside peaked over how to regulate general-purpose AI models, like chatbots.

Developers of such models will have to give details about what content they used -- such as text or images -- to train their systems and comply with EU copyright law.

A stricter set of requirements applies to models that the EU says pose "systemic risks", such as OpenAI's GPT-4 and Google's Gemini.

Those risks could include causing serious accidents, being misused for far-reaching cyberattacks, or propagating harmful biases online.

Companies offering these technologies must assess and mitigate the threats, track and report serious incidents -- like deaths -- to the commission, take action to ensure cybersecurity and give details about their models' energy consumption.


The commission has already established the AI office that will enforce the rules on general-purpose AI.

Risk-based approach

The EU looks at AI systems from the perspective of risk to democracy, public health, rights and the rule of law.

High-risk AI systems, such as those used in medical devices, in education or in key infrastructure like water supplies, face more obligations to mitigate any danger.

For example, providers of high-risk systems must develop them using quality data, ensure human oversight and maintain robust documentation.

Even after placing their product on the market, providers must continue to monitor it closely.

EU citizens will have the right to complain about AI systems, while public bodies must register the high-risk AI systems they deploy in a public EU database.

Breaking the rules can be costly.

The EU can slap AI providers with fines ranging from 7.5 million to 35 million euros ($8.2 million to $38.2 million), or from 1.5 to seven percent of a company's global turnover, depending on the violation and the size of the company.


The rules also stipulate that citizens should be aware when they are dealing with AI.

For example, deepfake images produced using AI must be labelled as such, while chatbots must disclose in their interactions that they are AI-powered.

Bans

The EU bans some types of AI outright because the risks they pose are considered too great.

These include predictive policing, emotion recognition systems in workplaces or schools and social scoring systems that assess individuals based on their behaviour.

The law also bans police from using real-time facial recognition technology, with exceptions when law enforcement is searching for an individual convicted or suspected of a serious crime, such as rape or terrorism.

Police can ask to use the technology to find victims of kidnapping or trafficking -- subject to approval from a judge or another judicial authority, and for a use limited in time and location.

