Artificial Intelligence in the European Union. Considerations regarding Regulation (EU) 2024/1689

Artificial intelligence has been a hot topic lately and this is mainly due to the effects it has, or could have, on the lives of citizens in the Member States.

At the same time, this is a controversial topic, and opinions on it are divided. On the one hand, there is talk about the positive effects artificial intelligence can have on people’s lives. On the other hand, there is also considerable concern about its negative effects and the fact that it can be misused, thereby affecting people’s fundamental rights and interests.

Moreover, justice is a significant area in which artificial intelligence (AI) can produce notable and complex effects, and there are various discussions about the role AI could play in the administration of justice. Its implementation in the legal sector can bring considerable benefits, such as streamlining judicial processes, automating administrative tasks, and facilitating access to relevant information through data analysis, as long as its use does not run counter to the fundamental principles on which the administration of justice is based.

Taking all these issues into account, there have been intense discussions at the European level, which culminated in the adoption of Regulation (EU) 2024/1689 of the European Parliament and of the Council of June 13, 2024, which entered into force on August 1, 2024 (the Artificial Intelligence Regulation/AI Act).

The aim of the Artificial Intelligence Regulation is to establish a uniform framework for the use and development of AI in the European Union, ensuring the protection of fundamental rights, risk classification and transparency, stimulating responsible innovation and facilitating international cooperation. This European Regulation also aims to protect the human factor by ensuring a higher degree of protection of fundamental rights enshrined in the Charter of Fundamental Rights of the European Union.

The newly introduced rules on artificial intelligence aim to assure European citizens that AI is safe and trustworthy and that it is used in accordance with fundamental rights obligations.

At the European level, the benefits that can be derived from the use of artificial intelligence and the economic and social benefits that its use can bring are also being considered. It is considered that artificial intelligence can offer key competitive advantages to businesses and can help deliver socially and environmentally favorable outcomes in areas such as health care, agriculture, food safety, education and training, justice, etc.

The Regulation applies to the following categories of persons:

(a) Providers placing on the market or putting into service AI systems, or placing on the market general-purpose AI models, in the Union, whether those providers are established or located in the Union or in a third country;

(b) Deployers of AI systems established or located in the Union;

(c) Providers and deployers of AI systems established or located in a third country, where the output produced by the AI system is used in the Union;

(d) Importers and distributors of AI systems;

(e) Product manufacturers placing on the market or putting into service an AI system together with their product and under their own name or trademark;

(f) Authorized representatives of providers which are not established in the Union;

(g) Affected persons located in the Union.

It is important to note, in this context, that this Regulation does not apply to areas which do not fall within the scope of Union law and does not affect the competences of Member States in matters of national security.

In our opinion, the provisions found in Article 5 of the AI Regulation on prohibited practices at European level in the field of artificial intelligence are of paramount importance in this context.

Thus, Article 5 of the Regulation lays down certain limitations on the use of artificial intelligence, prohibiting practices which may seriously infringe fundamental human rights. These include the use of AI systems that manipulate human behavior through subliminal or deceptive techniques, the exploitation of vulnerable persons (on the basis of age, disability or socio-economic status), and social scoring on the basis of behavior or personal characteristics. Real-time biometric identification systems are also prohibited in publicly accessible spaces, with limited exceptions for situations such as counter-terrorism or the search for missing persons. Any such use must be authorized and strictly monitored by the competent authorities, ensuring the protection of fundamental rights and compliance with national and EU law.

Examples of AI practices prohibited by this EU Regulation include:

Vulnerability exploitation: Using AI to manipulate vulnerable people, such as children, the elderly or persons with disabilities, by taking advantage of their vulnerabilities in order to distort their behavior;

Social Scoring: AI systems that assess or classify people based on their social behavior, resulting in discriminatory or unfavorable treatment in contexts that are nonspecific or disproportionate to the severity of the behavior;

Crime risk prediction: AI systems that assess a person’s risk of committing crimes based solely on personality traits or profiles, without objective and verifiable evidence of previous criminal activity;

Real-time biometric recognition: The use of biometric identification systems (e.g. facial recognition) in public places, in real time, for surveillance and monitoring of persons, except in limited and well-regulated situations.

Exceptions for the use of AI in international law enforcement and judicial cooperation

This Regulation shall not apply to public authorities of a third country and to international organizations falling within the scope of this Regulation as referred to in paragraph 1, where those authorities or organizations use artificial intelligence systems in the framework of international cooperation or arrangements for law enforcement and judicial cooperation with the Union or one or more Member States, provided that the third country or international organization concerned offers adequate safeguards with respect to the protection of the fundamental rights and freedoms of individuals.

In concrete terms, the Regulation does not apply to such authorities or international organizations when they use AI systems in the framework of international cooperation or agreements for law enforcement and judicial cooperation with the European Union or one or more Member States. However, for this exception to apply, the third country or international organization concerned must provide adequate safeguards with regard to the protection of individuals’ fundamental rights and freedoms.

In essence, this provision allows for some flexibility in international cooperation, but on the condition that fundamental rights are protected in line with EU standards.

The Artificial Intelligence Act (AI Act) classifies AI systems into four risk categories, each with specific obligations:

Minimal risk: Includes systems such as email spam filters and recommendation engines. This category is not subject to the detailed rules of the Regulation;

Limited risk: AI systems such as chatbots must disclose to users that they are interacting with a machine. AI-generated content, including deepfakes, should be clearly labeled, and users should be informed when systems are used for biometric categorization or emotion recognition;

High risk: Systems considered to be high risk, such as those used in recruitment processes or credit scoring, must meet strict requirements. These include ensuring high data quality, record keeping, human oversight and the implementation of cybersecurity measures. Regulatory sandboxes will facilitate the development of systems that meet these requirements. In the case of high-risk AI systems, at EU level, there is a need for strict and effective human oversight while they are in use. We therefore note that oversight measures should be proportionate to the degree of risk generated by AI systems.

In order for the competent authorities to exercise effective control, Article 21 of the Regulation requires appropriate cooperation between providers of AI systems and the competent authorities. Therefore, at the request of a competent authority, providers of high-risk AI systems are obliged to supply all the information and documentation necessary to demonstrate the compliance of the AI system with the requirements of the Regulation. Also, upon a reasoned request of the competent authority, providers are obliged to grant access to the automatically generated logs of the high-risk AI system.

On the issue of high-risk AI systems, it is necessary to assess their potential impact on fundamental rights.

Thus, Article 27 of the Regulation lays down certain specific obligations regarding the assessment to be performed. According to this article, before deploying a high-risk AI system, deployers must carry out an assessment comprising:

(a) A description of the period of time within which, and the frequency with which, each high-risk AI system is intended to be used;

(b) The categories of natural persons and groups likely to be affected by its use in the specific context;

(c) The specific risks of harm likely to have an impact on the categories of natural persons or groups of persons identified under point (b);

(d) A description of the implementation of human oversight measures in accordance with the instructions for use;

(e) The measures to be taken should those risks materialize, including internal governance mechanisms and complaint-handling mechanisms.

The Regulation also imposes obligations on Member States to carry out the necessary checks: each Member State must designate or establish at least one notifying authority, responsible for setting up and carrying out the necessary procedures for the assessment, designation, notification and monitoring of conformity assessment bodies.

Unacceptable risk: AI applications that pose clear threats to fundamental rights, such as manipulative systems or those enabling social scoring, are prohibited. Certain biometric applications, including emotion recognition in the workplace and some forms of biometric identification by law enforcement, are also prohibited.

Entry into force and application of the Artificial Intelligence Regulation

According to Article 113, the Regulation entered into force on August 1, 2024, after being published in the Official Journal of the European Union on July 12, 2024.

Therefore, the new law becomes applicable on August 2, 2026, twenty-four months after the date of entry into force.

However, alongside the general application date, there are three special transitional periods for certain categories of provisions of the AI Act:

The Regulation will apply in full from August 2, 2026.

Chapters I (Introductory Provisions) and II (Prohibitions) will become applicable as of February 2, 2025.

Chapter III Section 4 (Notified Bodies), Chapter V (General-Purpose AI Models), Chapter VII (Governance), Chapter XII (Penalties) and Article 78 (Confidentiality) will become applicable, with the exception of Article 101, from August 2, 2025.

Article 6(1) and the corresponding obligations of the Regulation will become applicable as of August 2, 2027.

Therefore, in our view, the use of artificial intelligence can bring benefits to people’s lives, as long as it is used in a balanced way, within reasonable limits and in compliance with applicable national and Union law.

This article was prepared, for the Blog of Costaș, Negru & Asociații, by Paul Buzea and Oana Budi, lawyers with the Arad Bar Association.

Costaș, Negru & Asociații is a civil law firm with offices in Cluj-Napoca, Bucharest and Arad, which provides assistance, legal representation and consultancy in several areas of practice through a team of 20 lawyers and consultants. Details of the legal services and the composition of the team can be found on the website https://www.costas-negru.ro. All rights for the materials published on the company’s website and through social networks belong to Costaș, Negru & Asociații, their reproduction being allowed only for information purposes and with the correct and complete citation of the source.
