So What Happened on July 18, 2025 With the EU AI Act?
July 18, 2025 marked a critical juncture in the implementation of the European Union’s Artificial Intelligence Act, with significant developments that will shape the future of AI regulation in Europe.
The date marked both a consultation deadline and a moment of major industry pushback, particularly from Meta (formerly Facebook), one of the world’s largest technology companies.
Key Developments on July 18, 2025
Consultation Response Deadline
July 18, 2025, was the deadline for stakeholders to respond to a crucial EU consultation on how to translate the AI Act’s broad principles into actionable guidance.
This consultation addressed several critical questions about how companies should comply with the Act’s requirements, representing a pivotal moment in translating regulatory theory into practical business operations.
Meta’s Refusal to Sign Code of Practice
The most significant development occurred when Meta refused to sign the European Union’s code of practice for its AI Act, weeks before the bloc’s rules for providers of general-purpose AI models take effect.
This decision sent shockwaves through the tech industry and highlighted the growing tensions between major AI companies and European regulators.
Stay Ahead of AI Regulation
Join AI Act Alert, eyreACT’s newsletter delivering concise updates, compliance tips, and insights to keep your AI systems market-ready and trusted.
What Is the EU AI Act Code of Practice?
The EU’s code of practice represents a voluntary framework published in July 2025, designed to help companies implement processes and systems necessary for AI Act compliance.
The code of practice requires companies to provide and regularly update documentation about their AI tools and services. It also bans developers from training AI on pirated content and requires AI companies to comply with content owners’ requests not to use their works in training datasets.
Core Requirements of the Code
The code of practice establishes several fundamental obligations for AI developers:
Documentation requirements: Companies must maintain comprehensive, regularly updated documentation about their AI systems, tools, and services. This includes technical specifications, training data sources, and operational parameters.
Copyright protection: The framework explicitly prohibits training AI systems on pirated or unauthorised copyrighted content, addressing one of the most contentious issues in AI development.
Content owner rights: Companies must establish mechanisms to honour requests from content creators and rights holders who do not want their works included in AI training datasets.
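To make the content-owner obligation concrete, here is a minimal, hypothetical sketch of honouring opt-out requests when assembling a training corpus. It is not any company’s actual implementation; the record layout, identifiers, and matching rule are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class OptOutRequest:
    """A rights holder's request to exclude a work from training data."""
    rights_holder: str
    work_identifier: str  # e.g. a URL or content hash (our assumption)
    received_at: str      # ISO 8601 timestamp, retained for audit trails

def filter_training_corpus(documents, opt_outs):
    """Split (doc_id, text) pairs into kept and excluded documents.

    Returning the excluded set as well keeps the exclusion itself
    documentable, in the spirit of the code's documentation duties.
    """
    excluded_ids = {req.work_identifier for req in opt_outs}
    kept, excluded = [], []
    for doc_id, text in documents:
        (excluded if doc_id in excluded_ids else kept).append((doc_id, text))
    return kept, excluded
```

In practice, matching a work to a request is the hard part (URLs, hashes, and fingerprinting all have gaps), which is one reason the code’s opt-out duty is operationally demanding.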
Why Did Meta Refuse to Sign?
Meta’s Chief Global Affairs Officer Joel Kaplan outlined the company’s position in a LinkedIn post, stating that “Europe is heading down the wrong path on AI” and that “This Code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act”.
Meta’s Specific Concerns
Legal uncertainty: Meta argues that the code creates ambiguous legal requirements that make compliance difficult to achieve and measure.
Regulatory overreach: The company contends that certain provisions exceed the original scope and intent of the AI Act itself.
Innovation impact: Kaplan claimed that the law will “throttle the development and deployment of frontier AI models in Europe, and stunt European companies looking to build businesses on top of them”.
How eyreACT Responds to the New Code of Practice
We’re happy to confirm that many existing features of the eyreACT AI Act Compliance platform already align with the Code of Practice, making it easier for our clients to comply with the AI Act.
Documentation & Audit Management
- Automated compliance documentation generation
- Real-time system change tracking
- Centralised compliance dashboard
Content Rights Protection
- Training data source verification
- Rights holder request processing
- Copyright compliance monitoring
Risk & Transparency Tools
- Automated risk classification system
- AI interaction disclosure features
- Regulatory reporting automation
Operational Monitoring
- Continuous compliance status tracking
- Multi-jurisdiction requirement mapping
- Stakeholder communication portal
These core features help companies navigate the new EU AI Act requirements reliably and reduce manual (and expensive!) compliance overhead.
Are you ready for the EU AI Act’s deadline?
eyreACT’s AI Act compliance platform will help organisations like yours seamlessly navigate these complex requirements. Be among the first to access our comprehensive solution for AI system classification, risk assessment, and ongoing compliance management.
How Does This Affect the AI Industry?
The July 18 developments reflect broader industry resistance to EU AI regulations. Tech companies from around the world, including those at the forefront of the AI race such as Alphabet, Meta, Microsoft, and Mistral AI, have been fighting the rules, even urging the European Commission to delay their rollout.
Industry-Wide Implications
Compliance Challenges: Companies remain unsure how to comply with the rules, and a guidance document meant to help AI developers comply with the Act missed its expected publication timeline, creating additional uncertainty for businesses preparing for compliance.
Market Access Concerns: Companies that refuse to comply with the code of practice may face restrictions on their ability to operate AI systems within the European market.
Competitive Dynamics: The split between compliant and non-compliant companies could create significant competitive advantages or disadvantages depending on market access and regulatory enforcement.
What Are the Upcoming AI Act Deadlines?
The EU has maintained its implementation timeline despite industry pressure, with the Commission holding firm and saying it will not change the schedule.
Critical Dates for 2025-2027
August 2, 2025: New rules for providers of “general-purpose AI models with systemic risk” take effect, affecting companies like OpenAI, Anthropic, Google, and Meta.
August 2, 2027: Companies with such models already on the market before August 2, 2025 must bring them into compliance with the legislation by August 2, 2027.
Affected Companies and Systems
The regulations specifically target providers of general-purpose AI models with systemic risk, which includes:
- Large language models with significant computational resources
- Foundation models used across multiple applications
- AI systems with broad societal impact potential
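The Act ties this systemic-risk presumption to a concrete number: under Article 51, a general-purpose model is presumed to carry systemic risk when the cumulative compute used to train it exceeds 10^25 floating-point operations. A minimal sketch of that check (the function name is ours):

```python
# Article 51 of the AI Act presumes systemic risk above 10^25 FLOPs
# of cumulative training compute.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def presumed_systemic_risk(training_flops: float) -> bool:
    """True if a general-purpose model is presumed to pose systemic risk."""
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

print(presumed_systemic_risk(5e25))  # True: frontier-scale training run
print(presumed_systemic_risk(1e24))  # False: below the presumption line
```

Note that the compute test is a presumption rather than the whole story; the Commission can also designate models as posing systemic risk on other grounds.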
Understanding the AI Act’s Risk-Based Framework
A risk-based regulation for applications of artificial intelligence, the AI Act bans some “unacceptable risk” use cases outright, such as cognitive behavioural manipulation or social scoring.
Risk Categories Explained
Unacceptable Risk: Applications and systems that create an unacceptable risk, such as government-run social scoring of the type used in China, are banned.
High-Risk Applications: High-risk applications, such as a CV-scanning tool that ranks job applicants, are subject to specific legal requirements. The rules define “high-risk” uses to include biometrics and facial recognition, along with applications in domains such as education and employment.
Transparency Obligations: The AI Act introduces specific disclosure obligations to ensure that humans are informed when necessary to preserve trust. For instance, when using AI systems such as chatbots, humans should be made aware that they are interacting with a machine so they can make an informed decision.
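Purely as an illustration, the examples in this section can be arranged into the Act’s four risk tiers. The tier assignments below simply mirror the examples above, and the disclosure text is our own wording, not language from the Act:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "subject to strict legal requirements"
    LIMITED = "subject to transparency obligations"
    MINIMAL = "largely unregulated"

# Illustrative mapping only; real classification needs legal analysis.
EXAMPLE_CLASSIFICATION = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "cognitive behavioural manipulation": RiskTier.UNACCEPTABLE,
    "cv screening for hiring": RiskTier.HIGH,
    "biometric identification": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def disclosure_notice(use_case: str) -> str | None:
    """Return the user-facing notice a limited-risk system should show."""
    if EXAMPLE_CLASSIFICATION.get(use_case) is RiskTier.LIMITED:
        return "You are interacting with an AI system, not a human."
    return None
```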
What Does This Mean for AI Innovation in Europe?
The July 18 developments highlight a fundamental tension between regulatory oversight and technological innovation. While the EU aims to create a safe and trustworthy AI ecosystem, major technology companies argue that excessive regulation could stifle innovation and European competitiveness in the global AI race.
Balancing Innovation and Regulation
European AI Ecosystem: The regulations could either foster trust and adoption of AI systems through clear safety standards or limit the development and deployment of cutting-edge AI technologies in Europe.
Global Competitive Position: The outcome of this regulatory approach may determine whether Europe becomes a leader in responsible AI development or falls behind in the global AI competition.
Business Impact: Companies operating in Europe must navigate increasingly complex compliance requirements while maintaining competitive positioning in a rapidly evolving market.
Book a demo with eyreACT to simplify your AI Act compliance
The EU AI Act is more complex than the GDPR, but we help you nail it. From automated AI system classification to ongoing risk monitoring, we’re building a platform of developer-friendly, business-friendly tools you need to deploy AI confidently within the European regulatory framework.
Final Thoughts
As the August 2025 deadline approaches, the industry faces a period of significant adjustment. Global companies must decide whether to comply with EU requirements, potentially limiting their European operations, or adapt their global AI development practices to meet European standards.
The July 18, 2025 developments mark a defining moment in the global conversation about AI governance, innovation, and the balance between technological advancement and societal protection.
The resolution of these tensions will likely shape the future of AI development not just in Europe, but worldwide, as other jurisdictions observe the outcomes of this regulatory experiment.
For the latest updates on AI Act implementation and compliance requirements, companies should monitor official EU publications and consult with legal experts specialising in AI regulation.


