
Data Privacy Consultant

First Rules of the EU AI Act Come into Effect: What Does it Mean?

  • Writer: Davies Parker
  • Mar 12
  • 4 min read

The evolving digital landscape of the 21st century has posed a challenge for governments and organizations as they attempt to keep pace with the technological advancements since the internet’s inception. From social media regulation to data protection and consumer rights in e-commerce, legal frameworks have struggled to stay relevant. In the present decade, artificial intelligence (AI) has become an integral part of daily life, influencing industries from healthcare to finance. However, the absence of comprehensive regulation has left the technology largely unchecked. Crafting meaningful legislation for AI is particularly challenging: it requires not only a deep understanding of the technology but also extensive input from diverse stakeholders, as the scope of AI keeps evolving at a staggering pace.

The European Union (EU) has taken a pioneering step in this direction by introducing the AI Act, the first legislation of its kind by any governmental or intergovernmental organization. Passed in 2024, the Act sets a precedent for governing AI systems, emphasizing safety, transparency, and fundamental rights. As of February 2, 2025, two of its provisions have come into effect, marking the beginning of a new regulatory era for AI technologies in Europe. The Act aims to balance innovation with ethical deployment, setting the stage for a safer and more transparent digital future, while the initial provisions that have taken effect seek to increase AI literacy and to ban, with immediate effect, AI practices posing unacceptable risks.

Key Rules That Have Come into Effect

The EU has set a timeline for the Act to be enforced in stages over the next two years. This phased approach is intended to ensure that both organizations and governmental agencies can first grasp the concepts of AI before they attempt to govern AI systems or ensure compliance with the Act. Accordingly, Article 4 of the Act, which establishes an AI literacy requirement and mandates that organizations deploying AI systems ensure their personnel possess adequate knowledge and understanding of AI technologies, is one of the first provisions to come into effect. The rule aims to promote informed and responsible deployment of AI systems by equipping staff with the skills needed to recognize the opportunities and risks associated with AI. It applies to all providers and deployers of AI systems, irrespective of the risk level of the AI system. Organizations must tailor AI literacy programs to the technical knowledge, experience, and roles of their staff. Recital 20 of the AI Act emphasizes the broader scope of AI literacy, suggesting that it should extend beyond staff to all relevant actors in the AI value chain, including affected individuals.

Article 5, which prohibits certain AI practices deemed fundamentally unethical or dangerous, has also come into effect alongside Article 4. This includes a ban on systems that deploy subliminal techniques to manipulate users or exploit vulnerable groups. Notably, this prohibition is grounded in safeguarding fundamental rights as enshrined in the EU Charter of Fundamental Rights, ensuring AI applications do not compromise human dignity, freedom, or democracy. The AI Act classifies AI systems into four risk categories:

Unacceptable risk: AI systems posing unacceptable risks to fundamental rights and Union values are prohibited under Article 5 of the AI Act.

High risk: AI systems posing high risks to health, safety, and fundamental rights are subject to a set of requirements and obligations. These systems are classified as ‘high-risk’ in accordance with Article 6 of the AI Act, in conjunction with Annexes I and III.

Transparency risk: AI systems posing limited transparency risks are subject to transparency obligations under Article 50 of the AI Act.

Minimal to no risk: AI systems posing minimal to no risk are not regulated, but providers and deployers may voluntarily adhere to codes of conduct.

AI systems categorized as posing an “unacceptable risk” are those that present a clear threat to fundamental rights and Union values. Prohibiting them ensures that fundamental rights, democratic values, and public safety are safeguarded. The practices banned by Article 5 are:

Subliminal Manipulation: AI systems designed to influence individuals’ behaviour without their conscious awareness are prohibited. This includes manipulative advertising techniques that exploit cognitive biases.

Exploitation of Vulnerabilities: AI systems that exploit the vulnerabilities of specific groups, such as children, the elderly, or socio-economically disadvantaged individuals, are banned. This measure aims to safeguard sensitive populations from targeted manipulation.

Social Scoring Systems: AI systems that evaluate or classify individuals based on their social behaviour, personal characteristics, or socio-economic status are prohibited. This ban is inspired by concerns over discriminatory practices associated with social credit systems.

Predictive Policing and Profiling: AI systems that assess personality traits or predict criminal behaviour based on profiling techniques are banned due to potential biases and ethical concerns.

Untargeted Scraping for Facial Recognition Databases: The untargeted collection of facial images from public sources to build identification databases is prohibited.

Biometric Data Collection: Real-time biometric identification in public spaces for law enforcement purposes is banned without stringent oversight.

Biometric Categorization Based on Sensitive Characteristics: AI systems that categorize individuals based on gender, ethnicity, race, religion, or sexual orientation are prohibited.

Emotion Recognition in Workplaces and Educational Institutions: Any AI-based assessment of a person’s emotional state at a workplace or educational institution is prohibited. Exceptions exist for safety features, such as systems that monitor a driver’s attention level in cars.

Furthermore, Article 3 provides comprehensive definitions of all relevant terms, including AI systems, providers, and deployers, setting the scope of the regulation. It defines an AI system as a machine-based system designed to operate with varying levels of autonomy, potentially exhibiting adaptive behaviour after deployment. The European Commission’s Guidelines on the Definition of an AI System, released on February 2, 2025, elaborate on this, highlighting seven key elements, including machine-based operation, autonomy, and adaptive capabilities. This definition is crucial for identifying systems that fall under the Act’s jurisdiction.

