Table of Contents:
1. AI Act and GDPR – How AI Regulations Will Impact Data Protection
2. GDPR and AI Act – Key Differences and Similarities
3. Risk Assessment and User Consent
4. Right to Explanation and Transparency in AI Systems
5. Impact of the AI Act on Data Controllers’ Obligations
6. Penalties and Fines
7. How the AI Act and GDPR Will Shape the Future of Data Protection
On July 12, 2024, the Artificial Intelligence Regulation (AI Act) was published in the Official Journal of the European Union. This crucial document defines the regulatory framework for the development, implementation, and use of artificial intelligence within the European Union. In Poland, the Ministry of Digital Affairs is already working on a bill to enable the application of the AI Act at the national level.
AI Act and GDPR – How AI Regulations Will Impact Data Protection
Amid the dynamic growth of artificial intelligence (AI) technologies, the European Union is developing specific regulations to unify AI usage standards across different sectors. Published on July 12, 2024, in the Official Journal of the European Union, the AI Act is the first legal document of its kind in the EU. It regulates AI technology use and complements existing regulations, including the GDPR (General Data Protection Regulation). As the AI Act will impact many areas of personal data processing, it is worth examining how it will relate to the requirements and principles set forth by GDPR.
GDPR and AI Act – Key Differences and Similarities
Both regulations—GDPR and the AI Act—aim to protect citizens’ rights and prevent abuses, but their scope and measures differ. GDPR focuses on privacy protection and ensuring appropriate safeguards for personal data, covering broadly defined data processing. The AI Act, on the other hand, addresses risks associated with using AI systems, classifying them by risk level, with an emphasis on transparency, safety, and accountability. It is worth noting that the AI Act is directly linked to GDPR, as many AI systems process personal data, meaning that operators of such systems will need to comply with both regulations. For example, if an AI system analyzes sensitive data, such as biometric data, the operator will need to follow both data protection rules and the AI Act’s risk assessment and compliance requirements.
Risk Assessment and User Consent
One of the key GDPR requirements is obtaining consent for data processing and assessing the risks associated with this process. The AI Act introduces a similar concept but extends it to include detailed risk assessments specifically for AI systems, requiring the classification of systems by risk level—from minimal to unacceptable.
AI systems classified as “high risk” will be subject to strict regulations. The AI Act requires that operators of such systems ensure full compliance with transparency and reliability principles, which include requirements such as algorithm documentation, data management systems, and regular audits. These obligations address gaps in GDPR, which does not provide specific requirements for the operation of AI-based systems.
Right to Explanation and Transparency in AI Systems
GDPR introduces the principle of transparency, giving users the right to information on how their data is processed. The AI Act develops this principle by introducing more detailed regulations on the “right to explanation” for individuals who may be affected by decisions made by AI systems. In practice, this means that individuals will have the right to understand how an AI decision was made and its main criteria.
This is crucial in the context of AI systems used in areas such as recruitment, credit approval, or risk assessment. Both regulations therefore work together, creating a more comprehensive system for protecting citizens’ rights. However, the AI Act supplements GDPR with specific guidelines that were lacking in the context of AI-driven decision-making.
Impact of the AI Act on Data Controllers’ Obligations
For entities that act as data controllers (e.g., companies, public institutions), the AI Act’s entry into force will bring additional requirements. Controllers using AI systems will need not only to comply with GDPR but also to meet the new AI Act criteria. This will include risk monitoring, documenting AI-driven decision-making processes, and conducting security and compliance tests for systems.
The AI Act also emphasizes monitoring algorithms throughout their lifecycle—from development and testing through deployment and ongoing oversight. Controllers will be required to implement appropriate oversight and control measures over AI systems, which in practice may necessitate the creation of new compliance procedures or dedicated compliance management teams.
Penalties and Fines
Like GDPR, the AI Act introduces a system of financial penalties for violations. Under the final text of the AI Act, the most serious infringements—such as the use of prohibited AI practices—can be penalized with fines of up to 7% of a company’s total worldwide annual turnover or 35 million euros, whichever is higher, which exceeds the maximum fines stipulated in GDPR (4% of turnover or 20 million euros). This system aims to deter the use of non-transparent and dangerous AI systems and encourages companies to implement appropriate security measures.
How the AI Act and GDPR Will Shape the Future of Data Protection
The AI Act and GDPR are designed to work together, creating a more complex, multifaceted system for protecting data and citizens’ rights. While GDPR establishes general principles for data protection, the AI Act focuses on the specific nature of AI technology and the risks posed by automated data processing.
Through the synergy of the AI Act and GDPR, the European Union aims to create a secure environment in which technological innovations can develop transparently and in alignment with human rights. As artificial intelligence continues to advance, EU regulations may set a regulatory standard for other regions worldwide, establishing a new level of responsibility and protection in the digital era.