The EU is the first to adopt a comprehensive law on artificial intelligence, defending the fundamental rights of individuals without sidelining the drive for innovation.
The European Union has taken a significant step in the field of artificial intelligence (AI) with the introduction of the AI Act. Adopted to address the challenges and maximize the benefits of AI, the regulation marks a turning point as the first comprehensive legal framework for this technology. Its adoption, however, was not without discussion and controversy, particularly regarding the use of AI for surveillance and security.
The AI Act is a legislative framework aimed at creating a clear and consistent regulatory environment for the use of AI within the Union. It is based on fundamental values such as transparency, accountability, non-discrimination, and security. The regulation establishes specific rules for high-risk AI systems, such as those used in the fields of health, education, transportation, and justice.
During the negotiations for the approval of the AI Act, a significant deadlock arose over the use of AI for surveillance and security. Simply put, the European Parliament feared that widespread deployment of AI-based surveillance systems could undermine individual privacy and fundamental rights, and proposed a total ban. In contrast, the Council of the EU, representing the member states, took a more permissive approach (not to mention the strong push from Italy, France, and Germany to exclude foundation models, the general-purpose models at the core of final products, from regulation).
After a few weeks of negotiation, the member states yielded, and the AI Act introduced restrictions on the use of facial recognition systems and other surveillance technologies in public places, emphasizing the need to respect the principles of proportionality and necessity.
The issue of foundation models was also resolved by identifying "high-impact" AI, defined by training compute above 10^25 floating-point operations (FLOPs). Such models must comply with the rules ex ante, ensuring cybersecurity, transparency in training processes, and the sharing of technical documentation before commercialization. For models below that threshold, the rules apply at the time the product is commercialized.
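As an illustrative sketch only, the threshold logic described above can be expressed in a few lines of code. The function name and structure here are hypothetical, invented for clarity; the only element taken from the regulation is the 10^25 FLOPs criterion.

```python
# Hypothetical sketch of the AI Act's high-impact threshold.
# Only the 10^25 FLOPs figure comes from the regulation itself.

HIGH_IMPACT_FLOPS = 1e25  # training-compute threshold for "high-impact" models


def obligations_apply_ex_ante(training_flops: float) -> bool:
    """Return True if a model's training compute meets the high-impact
    threshold, meaning obligations (cybersecurity, training transparency,
    technical documentation) apply before commercialization rather than
    at product launch."""
    return training_flops >= HIGH_IMPACT_FLOPS


# A model trained with ~2e25 FLOPs would fall under the ex ante regime;
# one trained with 3e24 FLOPs would be regulated at commercialization.
print(obligations_apply_ex_ante(2e25))  # True
print(obligations_apply_ex_ante(3e24))  # False
```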
This approach aims to ensure that surveillance technologies are used only when absolutely necessary and in accordance with fundamental rights.
Companies operating in the EU must now face the challenge of complying with the AI Act. Specific regulations for high-risk systems require companies to implement security measures, document automated decision-making processes, and designate compliance officers. Furthermore, the transparency requirement obliges companies to provide clear information about AI usage, enabling citizens to understand how decisions that concern them are made.
Companies developing surveillance systems or high-risk technologies must pay special attention to compliance with the new regulations. The implementation of ethical and responsible measures becomes essential, not only to comply with the law but also to gain the trust of customers and citizens.
Companies will have 24 months to adapt, and within 6 months they must cease uses prohibited by the AI Act. To promote innovation, small and medium-sized enterprises will have access to "regulatory sandboxes": controlled test environments in which certain rules are relaxed under supervision.
The European Union has also established a dedicated office to promote and monitor the enforcement of the law.
The adoption of the AI Act by the European Union represents a step forward in AI regulation, balancing technological innovation with the protection of fundamental rights. The deadlock on surveillance and security highlighted the need for a balanced approach, encouraging innovation without compromising the privacy and security of citizens. Now, it is the responsibility of companies to adapt to this new regulatory reality, implementing ethical and responsible practices to ensure a sustainable and transparent future for AI in the European Union.