We have frequently discussed the AI Act on our blog, the EU regulation that is the world's first comprehensive law on Artificial Intelligence. The AI Act has already entered into force, but the application of certain provisions has been staggered to give stakeholders time to adapt to the new rules.
Starting in February 2025, bans will come into effect for the most controversial uses of AI, such as biometric surveillance and facial recognition for mass control, emotion recognition in the workplace and in schools, and social scoring and predictive policing systems.
From 2026, stricter rules will govern the experimentation and commercialization of AI services within the European Union, especially for medium- and high-risk AI models.
To facilitate a common path, the Commission has organized the AI Pact, a voluntary agreement between the EU and stakeholders aimed at preparing for the application of the AI Act and, where possible, committing to apply the required standards in advance.
The commitments outlined in the AI Pact focus on three key actions:
Adopting a governance strategy to promote the adoption of AI within the organization and working towards compliance with the AI Act.
Identifying and mapping potentially high-risk systems according to the EU framework.
Promoting AI awareness and literacy among staff, ensuring ethical and responsible development.
In addition to these core commitments, there are other, more specific ones, such as ensuring human oversight, mitigating risks, and labeling certain AI-generated content (to fight deepfakes). The full text of the commitments is available as a PDF.
The EU's call has been answered by many: over a hundred stakeholders have signed the AI Pact, including Adobe, Amazon, Autodesk, Cisco, Google, HP, IBM, Lenovo, Microsoft, Palantir, OpenAI, Qualcomm, Scania, Samsung, TIM, and Vodafone.
However, some notable absences have been observed: Apple, Meta, and ByteDance (TikTok) are not on the list.
Meta has previously expressed its disagreement with the EU's approach. Shortly beforehand, in an open letter signed by Mark Zuckerberg, Daniel Ek of Spotify, John Elkann of Exor, and other entrepreneurs, the signatories warned the EU that regulatory obstacles would hinder the race for AI research, causing Europe to lose ground to the US, China, and India.
According to Zuckerberg, the EU's fragmented regulatory landscape would prevent citizens from accessing open-source AI models (such as Meta's Llama) and multimodal generative AI models. Interestingly, the letter also criticized the intervention of the data protection authorities, who had blocked the use of Facebook and Instagram users' public data, collected without consent, to train AI models.
It is possible, and even likely, that these companies will join the AI Pact in the coming months. Regardless, from 2026, they will have to fully comply with the rights of European citizens as guaranteed by the AI Act.