
Artificial Intelligence: restrictive legislation in EU?

EU tightening on Artificial Intelligence, from biometric recognition to chatbots

In a previous article we presented the differences in Artificial Intelligence legislation between the US, China and Europe, and saw how Europe lags behind the US and China digitally.

On April 21, 2021, the European Commission unveiled a proposed regulation to harmonize European rules on Artificial Intelligence.

As reported by Il Sole 24 Ore, "the goal, a priority on the von der Leyen Commission's agenda, is to counteract uses of the technology that may be detrimental to 'the fundamental rights and security' of EU citizens. Among the applications that should be banned are those capable of 'manipulating people through subliminal techniques beyond their consciousness' or that exploit the vulnerabilities of particularly fragile groups, such as children or people with disabilities."

Brussels' goal is to protect citizens by applying ethical standards while, at the same time, narrowing the gap between Europe and the US and China in the application of new AI technologies.

The Commission's risk levels for AI

In its proposal, which must now be approved by the European Parliament and Council, the Commission identifies three different levels of risk for AI technologies. 

  • High-risk technologies: uses of AI in critical infrastructure, education, employment, essential public and private services, border and immigration control, the administration of justice, and law enforcement. In general, these are all technologies judged to pose a threat to people's safety or rights, such as manipulation tools (e.g., toys with voice assistants that can incite minors to certain behaviors) or citizen "scoring" (AI systems that allow governments to identify and classify citizens based on certain characteristics, as in China). Their use is permitted only after rigorous assessment and with guaranteed safeguards for citizens. Facial and biometric recognition systems, for example, would in principle be banned, but could be used in exceptional and emergency cases (such as searching for kidnapping victims, countering terrorist activity, or investigating criminal offenses).
  • Limited-risk technologies: for example, chatbots or voice assistants used in customer care services.
  • Minimal-risk technologies: for example, anti-spam filters or video games developed with AI systems.

Enforcement of the rules will be entrusted to national authorities. Violations involving high-risk technologies could result in administrative sanctions of up to 30 million euros or, in the case of companies, fines of up to 6% of turnover. Military use of AI is excluded from the scope of the regulation.

The ambitious European plan aims to embrace the full benefits of AI, promoting innovation while building trust, but also to export its ethical standards internationally. 
