Articles 1-5 of the AI Act have been in force since 2 February 2025; failure to comply may result in heavy fines.
11 February 2025 / Articles
🚨 IMPORTANT INFORMATION – Articles 1-5 of the AI Act have been in force since 2 February 2025; failure to comply may result in heavy fines. Many companies in the EU have not yet taken the required action – here are the most important details.
Since 2 February 2025, the first provisions of the Artificial Intelligence Act (AI Act) have applied, aimed at increasing safety and regulating the AI market in the European Union.
The most important changes include:
- Prohibited practices – a ban on placing on the market, putting into service or using AI systems that meet the criteria of prohibited practices. Examples include manipulative systems that exploit human vulnerabilities, social scoring systems and systems that analyse emotions in the workplace or in education. Violations of these provisions are subject to fines of up to EUR 35 million or 7% of a company’s total worldwide annual turnover, whichever is higher.
- AI literacy obligation – employers must provide their employees with adequate training and knowledge about AI so that they can use AI systems at work safely. A lack of AI training can lead to non-compliance with the regulations and increases the risk of AI systems being used incorrectly. In connection with AI literacy, it is also worth implementing an AI use policy in the company. How to do it?
See articles:
- What should an AI policy include?
- Artificial intelligence – what is it (from a legal point of view) and how does the world deal with it?
The policy for using AI in a company can include, for example, clear procedures, rules for using AI, conditions for the approval of systems, procedures in case of incidents and the appointment of a person responsible for the effective implementation and use of AI in the organisation (AI Ambassador).
The importance of AI education (Article 4 AI Act)
Awareness and knowledge of AI are not only a legal requirement but also a strategic necessity for organisations. Article 4 of the AI Act obliges companies to implement training programmes tailored to the knowledge, role and experience of their employees.
‘Providers and deployers of AI systems shall take measures to ensure, to the greatest extent possible, an appropriate level of competence with regard to AI among their personnel and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training, and the context in which the AI systems are to be used, as well as taking into account the persons or groups of persons on whom the AI systems are to be used.’
Failure to act in this regard has serious consequences, including:
- The risk of violating personal data protection and privacy regulations.
- An increased likelihood of violating the law and incurring financial penalties.
In addition to regulatory compliance, AI education helps to build a culture of responsible use of technology and minimises potential operational risks.
Where can I find guidance?
The ISO/IEC 42001 standard on artificial intelligence management systems can help. As part of the measures relating to the relevant competences of persons dealing with AI in an organisation, the standard indicates, for example, the following issues:
- mentoring
- training
- transferring employees to appropriate tasks within the organisation based on an analysis of their competences.
At the same time, important roles or areas of responsibility should be assigned to, for example:
- oversight of the AI system
- safety
- security
- privacy
Prohibited AI practices (Article 5 AI Act)
The AI Act prohibits the use of certain AI systems that could pose serious risks to society. Providers and companies deploying AI must ensure that they are not directly or indirectly involved in the development or implementation of such systems. The AI Act lists specific prohibited practices that are considered particularly dangerous. These include:
- Subliminal or manipulative techniques – AI systems that subconsciously change the user’s behaviour so that they make a decision they would not otherwise have made.
- Exploitation of human weaknesses – systems that take advantage of a person’s disability, social or economic situation.
- Social scoring – systems that evaluate or classify people based on their social behaviour or personal characteristics, leading to detrimental or unfavourable treatment.
- Assessment of the risk of committing a crime – systems that profile individuals and evaluate their individual characteristics without legitimate grounds.
- Creation of facial image databases – untargeted scraping of facial images from the internet or CCTV footage for the purpose of creating facial recognition databases.
- Analysis of emotions in the workplace or education – AI systems that analyse the emotions of employees or students.
- Biometric categorisation of sensitive data – using biometric data to gain information about race, political views, etc.
- Real-time remote biometric identification – using facial recognition systems in publicly accessible spaces for law enforcement purposes.
Where can I find guidance?
- Draft Guidelines on prohibited artificial intelligence (AI) practices – on 4 February 2025, the Commission published guidelines on prohibited AI practices to ensure consistent, effective and uniform application of the AI Act across the EU.
Important dates:
- From 2 February 2025 – Chapter II (prohibited practices)
- From 2 August 2025 – Chapter V (general-purpose AI models), Chapter XII (penalties) except Article 101
- From 2 August 2026 – Article 6(2) and Annex III (high-risk systems), Chapter IV (transparency obligations)
- From 2 August 2027 – Article 6(1) (high-risk systems) and corresponding obligations
Key takeaways for companies:
- Compliance with Articles 1-5 of the AI Act is mandatory and cannot be ignored.
- Training in AI is crucial to avoid mistakes related to employee ignorance and potential company liability.
- Conducting audits of technology providers is necessary to ensure that AI systems comply with regulations.
- Implement an AI use policy – introduce clear documentation to organise the risks. The policy can include, for example, clear procedures, rules for using AI, conditions for approving systems, how to deal with incidents, and the appointment of a person responsible for supervision (AI Ambassador).
- Developing AI tools in accordance with the law – companies developing AI tools must consider legal and ethical aspects at every stage of development. This includes analysing the compliance of the system’s objectives with the law, the legality of training databases, cybersecurity and system testing. It is important that the process of creating AI systems complies with the principles of privacy by design and privacy by default under the GDPR. See: How to create AI tools legally?
Be sure to check out these sources:
- https://eur-lex.europa.eu/legal-content/PL/TXT/?uri=CELEX:32024R1689
- https://digital-strategy.ec.europa.eu/pl/library/commission-publishes-guidelines-prohibited-artificial-intelligence-ai-practices-defined-ai-act
- https://digital-strategy.ec.europa.eu/en/events/third-ai-pact-webinar-ai-literacy
- https://www.gov.pl/attachment/9bb34f05-037d-4e71-bb7a-6d5ace419eeb