19 May 2025, 08:58
- The European Commission published a report analysing stakeholder feedback on AI definitions and prohibited practices.
- The report, prepared by CEPS for the EU AI Office, focuses on the AI Act's regulatory obligations effective since February 2025.
- Industry stakeholders provided the majority of feedback, with calls for clearer definitions of terms like "adaptiveness" and "autonomy".
- Guidelines on prohibited practices address harmful manipulation, social scoring, and real-time remote biometric identification.
- The Commission aims to assist providers in determining whether a software system qualifies as an AI system through these guidelines.
- Both sets of guidelines will evolve based on practical experience and emerging use cases.
The Commission published a report, prepared by the Centre for European Policy Studies (CEPS) for the EU AI Office, analysing stakeholder feedback from two recent public consultations.
These consultations focused on two AI Act regulatory obligations, the definition of AI systems and prohibited AI practices, which have applied since 2 February 2025.
The Commission drew on both these public consultations and other sources of feedback to draft its non-binding guidelines on prohibited practices and on the definition of AI systems, aimed at assisting providers and other stakeholders in the effective application of the AI Act.
The report presents a comprehensive analysis of responses to each of the 88 questions of the stakeholder consultation, organised into nine key sections. It shows that industry stakeholders accounted for the majority of responses to the public consultation (47.2% of nearly 400 replies), while citizen participation remained limited (5.74%). Respondents called for clearer definitions of technical terms such as "adaptiveness" and "autonomy", cautioning against the risk of inadvertently regulating conventional software.
The guidelines on prohibited practices specifically address areas such as harmful manipulation, social scoring, and real-time remote biometric identification, and include legal clarifications and practical examples to support stakeholder understanding and compliance with the AI Act.
Among other findings, the report highlights that prohibited practices, such as emotion recognition, social scoring, and real-time biometric identification, raised significant concern. Stakeholders called for concrete examples of what is prohibited and what is not.
By issuing guidelines on the AI system definition, the Commission aims to assist providers and other relevant persons in determining whether a software system constitutes an AI system to facilitate the effective application of the rules.
Both sets of guidelines are expected to evolve over time in response to practical experience, emerging use cases, and new questions.
Source: CECE