Chapter I lays down general provisions: the subject matter and scope of the AI Act, definitions, and provisions on AI literacy. These provisions are essential because they determine the material, territorial, and personal scope of the AI Act. For example, Article 3 defines an “AI system” as “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”. The European Commission has announced that it will issue guidelines on the definition of AI systems, to help industry determine whether a given software system qualifies as AI.
Chapter II contains provisions on prohibited AI practices. These define AI systems that pose an unacceptable risk and are therefore completely prohibited within the EU, including AI technologies that could violate fundamental rights and freedoms. Specifically, the AI Act bans manipulative AI systems that deceive or exploit users, as well as AI that takes advantage of the vulnerabilities of certain individuals, such as children or people with disabilities. Social scoring, like the practices seen in China, is also explicitly prohibited, as are risk assessments based solely on profiling to predict criminal behavior. Another prohibited practice is the untargeted scraping of online images to create facial recognition databases, as in the case of Clearview AI. The prohibition of these systems aims to prevent AI misuse and uphold ethical standards within the EU.
Obligation to take AI literacy measures
AI literacy is defined as “skills, knowledge and understanding that allow providers, deployers and affected persons, taking into account their respective rights and obligations in the context of this Regulation, to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause”. In short, AI literacy is about understanding AI systems, their risks, and their opportunities. AI literacy measures should enable providers, deployers, and affected persons to make informed decisions about AI, while respecting their rights and obligations.
The EU legislator emphasizes the importance of AI literacy by introducing, early in the AI Act (Article 4), an obligation for providers and deployers of AI systems to ensure a sufficient level of AI literacy. The aim is to ensure that their staff are fully aware of the AI systems they are dealing with. As this obligation is part of the general provisions, it applies universally to all AI systems, regardless of their risk level. The AI Act specifies that AI literacy efforts should be customized to the specific needs of different individuals or groups.
AI literacy measures primarily involve training and/or hiring qualified personnel. Organizations would do well to identify which employees work with AI systems, assess the varying complexity of those systems, and tailor training accordingly. It is recommended to document all training courses for evidential purposes.
The European Commission, together with the European Artificial Intelligence Board, will promote AI literacy tools and has announced that it will release a dynamic database of AI literacy practices. The AI Act also provides that voluntary codes of conduct may include elements relating to AI literacy.
Compliance and penalties
The AI Act provides that non-compliance with the prohibited AI practices referred to in Chapter II shall be subject to administrative fines of up to EUR 35,000,000 or, if the offender is an undertaking, up to 7% of its total worldwide annual turnover for the preceding financial year, whichever is higher. These penalties will apply from 2 August 2025.
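The “whichever is higher” rule can be illustrated with a short calculation. The sketch below is purely illustrative, assuming hypothetical turnover figures; it shows only the statutory ceiling, not how an authority would actually set a fine in a given case.

```python
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound of the administrative fine for prohibited AI practices:
    EUR 35 million or 7% of total worldwide annual turnover for the
    preceding financial year, whichever is higher."""
    FIXED_CEILING = 35_000_000.0
    turnover_ceiling = 0.07 * worldwide_annual_turnover_eur
    return max(FIXED_CEILING, turnover_ceiling)

# Hypothetical undertaking with EUR 200 million turnover:
# 7% = EUR 14 million, so the fixed EUR 35 million ceiling applies.
print(max_fine_eur(200_000_000))    # 35000000.0

# Hypothetical undertaking with EUR 1 billion turnover:
# 7% = EUR 70 million, which exceeds the fixed ceiling.
print(max_fine_eur(1_000_000_000))  # 70000000.0
```

The turnover-based ceiling therefore only becomes relevant for undertakings whose worldwide annual turnover exceeds EUR 500 million (since 7% of EUR 500 million equals the fixed EUR 35 million cap).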
Regarding non-compliance with the AI literacy obligation, the AI Act does not provide for a specific sanction. However, it is up to the Member States to complete the AI Act’s rules on penalties and other enforcement measures, which may also include warnings and non-monetary measures. In Belgium, no such regulations have been adopted yet, as authorities are still discussing the supervisory structure to be established. The Belgian legislator is expected to adopt rules on the supervisory structure and on penalties in the coming months.