MEPs ready to negotiate rules for safe and transparent AI

The rules aim to promote the uptake of human-centric and trustworthy AI and to protect health, safety, fundamental rights and democracy from its harmful effects.

The European Parliament has adopted its negotiating position on the Artificial Intelligence (AI) Act with 499 votes in favour, 28 against and 93 abstentions ahead of talks with EU member states on the final shape of the law. The rules would ensure that AI developed and used in Europe is fully in line with EU rights and values including human oversight, safety, privacy, transparency, non-discrimination and social and environmental wellbeing.

Prohibited AI practices

The rules follow a risk-based approach and establish obligations for providers and those deploying AI systems depending on the level of risk the AI can generate. AI systems with an unacceptable level of risk to people’s safety would therefore be prohibited, such as those used for social scoring (classifying people based on their social behaviour or personal characteristics). MEPs expanded the list to include bans on intrusive and discriminatory uses of AI, such as:

  • “Real-time” remote biometric identification systems in publicly accessible spaces;
  • “Post” remote biometric identification systems, with the sole exception of use by law enforcement for the prosecution of serious crimes, and only after judicial authorisation;
  • biometric categorisation systems using sensitive characteristics (e.g. gender, race, ethnicity, citizenship status, religion, political orientation);
  • predictive policing systems (based on profiling, location or past criminal behaviour);
  • emotion recognition systems in law enforcement, border management, the workplace, and educational institutions; and
  • untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases (violating human rights and the right to privacy).

High-risk AI

MEPs ensured the classification of high-risk applications will now include AI systems that pose significant risks of harm to people’s health, safety, fundamental rights or the environment. AI systems used to influence voters and the outcome of elections, as well as recommender systems used by social media platforms with more than 45 million users, were also added to the high-risk list.

Obligations for general purpose AI

Providers of foundation models - a new and fast-evolving development in the field of AI - would have to assess and mitigate possible risks (to health, safety, fundamental rights, the environment, democracy and the rule of law) and register their models in the EU database before their release on the EU market. Generative AI systems based on such models, like ChatGPT, would have to comply with transparency requirements (disclosing that content was AI-generated, which would also help distinguish so-called deep-fake images from real ones) and ensure safeguards against generating illegal content. Detailed summaries of the copyrighted data used for their training would also have to be made publicly available.

Supporting innovation and protecting citizens' rights

To boost AI innovation and support SMEs, MEPs added exemptions for research activities and AI components provided under open-source licenses. The new law promotes so-called regulatory sandboxes, or real-life environments, established by public authorities to test AI before it is deployed.

Finally, MEPs want to boost citizens’ right to file complaints about AI systems and receive explanations of decisions based on high-risk AI systems that significantly impact their fundamental rights. MEPs also reformed the role of the EU AI Office, which would be tasked with monitoring how the AI rulebook is implemented.

After the vote, co-rapporteur Brando Benifei (S&D, Italy) said: “All eyes are on us today. While Big Tech companies are sounding the alarm over their own creations, Europe has gone ahead and proposed a concrete response to the risks AI is starting to pose. We want AI’s positive potential for creativity and productivity to be harnessed but we will also fight to protect our position and counter dangers to our democracies and freedoms during the negotiations with Council”.

Co-rapporteur Dragos Tudorache (Renew, Romania) said: “The AI Act will set the tone worldwide in the development and governance of artificial intelligence, ensuring that this technology, set to radically transform our societies through the massive benefits it can offer, evolves and is used in accordance with the European values of democracy, fundamental rights, and the rule of law”.

Next steps

Negotiations with the Council on the final shape of the law will now begin.
