By Huma Ishfaq ⏐ 5 months ago ⏐ 2 min read
Meta Rejects EU's AI Code, Citing Legal Uncertainty and Overreach

Meta has officially opted out of signing the European Union’s new Code of Practice for general-purpose AI (GPAI) models. This move comes just weeks before the EU’s binding regulations under the AI Act are scheduled to take effect on August 2.

In a public post, Meta’s Chief Global Affairs Officer Joel Kaplan made the company’s position clear:

“Europe is heading down the wrong path on AI,” he wrote on LinkedIn, adding:

“We have carefully reviewed the European Commission’s Code of Practice for general-purpose AI (GPAI) models and Meta won’t be signing it. This Code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act.”

What the EU Code Demands

The voluntary Code of Practice, published earlier this month, is designed to serve as a bridge to full compliance with the EU’s AI Act. It encourages AI companies to:

  • Maintain up-to-date documentation on their AI tools
  • Avoid using pirated data for model training
  • Respect copyright holders’ requests not to include their content in datasets

Meta’s Concerns: Legal Hazards and Innovation Barriers

Kaplan argued that the current framework poses a threat to innovation:

“The law will throttle the development and deployment of frontier AI models in Europe and will stunt European companies looking to build businesses on top of them.”

He labeled the EU’s approach as legislative “overreach.”

What the AI Act Covers

The AI Act is the world’s first major legal framework focused on AI. It takes a risk-based approach, with three key categories:

  • Unacceptable risk: Prohibits uses such as social scoring or psychological manipulation
  • High-risk: Applies to sectors like education, recruitment, biometrics, and facial recognition
  • Limited/Minimal risk: Requires transparency and documentation, but fewer restrictions

AI developers are expected to register their systems and meet strict governance and risk-mitigation requirements under the Act.

To guide providers through this transition, the European Commission released implementation guidelines on Friday. These apply to companies building general-purpose AI models with systemic risk, including major players like OpenAI, Anthropic, Google, and Meta.

Firms with models already on the market before August 2, 2025, must ensure full compliance by August 2, 2027.

Despite lobbying efforts from tech giants, including Alphabet, Meta, Microsoft, and Mistral AI, urging the EU to postpone enforcement, the Commission remains firm and will not alter its rollout timeline.