AI Regulation One Year Later: What Changed in 2025
Note: This article represents a practitioner’s interpretation of the relevant rules and regulations in place at the time of writing. I am not a lawyer, and readers should consult with their own legal counsel and compliance teams before taking any action based on this information.
About a year ago, I published a series on AI regulation around the world, covering how seven major jurisdictions were approaching AI governance. At the time, a lot of the frameworks I wrote about were still proposals, guidelines, or works in progress. A year later, the picture looks quite different. Laws that were drafts are now enforceable. Countries that relied on voluntary guidelines have enacted their first binding legislation. And the US has managed to create a regulatory environment that’s somehow both more permissive and more contentious than before.
I’ve gone back and updated each post in the series with the latest developments. This post summarizes what changed across the board.
The Big Picture
If I had to sum up what happened in one sentence: comprehensive AI legislation went from the exception to the norm. The EU is actively enforcing the AI Act. South Korea’s AI Basic Act took effect in January 2026. Japan passed its first AI law. Multiple US states brought enforceable AI legislation online. China embedded AI in its national cybersecurity law. Even India, the most hands-off jurisdiction in our series, introduced mandatory AI content labeling rules.
The era of “wait and see” is mostly over. What remains is a lot of divergence in how strictly different countries are willing to regulate.
United States: The Federal-State Collision
The biggest story in the US is the growing tension between federal and state approaches. The Trump administration revoked the Biden-era Executive Order 14110 and issued new orders prioritizing deregulation. In December 2025, Executive Order 14365 created an AI Litigation Task Force specifically designed to challenge state AI laws on grounds of federal preemption.
Meanwhile, the states kept moving. California enacted the Transparency in Frontier Artificial Intelligence Act (TFAIA) in September 2025, requiring frontier AI developers to publish governance frameworks and report critical incidents. Colorado's AI Act became enforceable on February 1, 2026, imposing a "duty of reasonable care" on high-risk AI systems. Illinois amended its Human Rights Act to regulate AI in employment, effective January 1, 2026.
This collision course between federal deregulation and state enforcement will define the US regulatory story through 2026.
European Union: From Paper to Practice
The EU AI Act moved from theoretical to operational. Prohibitions on unacceptable-risk AI systems became enforceable on February 2, 2025. General-Purpose AI (GPAI) model obligations, including energy consumption documentation, kicked in on August 2, 2025. Full enforcement of high-risk system requirements arrives August 2, 2026.
The Commission also proposed the Digital Omnibus in November 2025, a simplification package that would streamline conformity assessments, provide SME relief, and link high-risk system timelines to the availability of harmonized standards. It’s not enacted yet, but it signals that Brussels is aware the Act’s requirements may need practical adjustment.
China: Rapid Formalization
China’s regulatory pace in 2025 was striking. New AI labeling rules took effect in September, three national generative AI security standards were released in April (effective November), and the State Council issued an ambitious AI Plus Action Plan targeting 70% AI penetration in key sectors by 2027. The biggest move was the October 2025 amendment to the Cybersecurity Law, which brought AI provisions into national legislation for the first time (effective January 1, 2026).
Looking ahead, more than 30 new standards are expected in 2026, with agentic AI emerging as a particular focus area.
Read the updated China analysis
Japan: No Longer Just Guidelines
Japan's shift from pure soft law was one of the more interesting developments of 2025. In May, the Diet enacted the AI Promotion Act, the country's first law to expressly address AI. Most provisions took effect in June 2025, and the AI Strategic Headquarters (chaired by the Prime Minister) was formally established in September.
The law is deliberately lightweight. It creates institutional structures and establishes principles without imposing fines or heavy compliance burdens. Japan’s explicit goal is to be the “most AI-friendly country,” and the legislation reflects that ambition. But the institutional infrastructure is now in place for more binding regulation if and when the government decides to go that direction.
Read the updated Japan analysis
United Kingdom: Still Waiting
The UK is the exception to the trend of countries moving toward binding AI legislation. Despite expectations, no standalone AI bill materialized in 2025. A Private Member's Bill proposing a new "AI Authority" was re-introduced in the House of Lords in March 2025, and the government opened a consultation on an AI Growth Lab in October, but neither has resulted in enforceable legislation.
Reports suggest nothing concrete will happen until a decision is made about including an AI bill in the spring 2026 King’s Speech. Any eventual legislation is expected to be more limited in scope than originally anticipated.
South Korea: Framework in Force
South Korea’s AI Basic Act became effective on January 22, 2026, making it the second jurisdiction after the EU to bring a comprehensive AI regulatory framework into force. The Ministry of Science and ICT issued a draft Enforcement Decree in September 2025 to spell out implementation details.
The late 2024 political turmoil (the martial law crisis and subsequent upheaval) created some uncertainty about the timeline, but the government pressed forward as planned. South Korea now has an enforceable risk-based classification system, high-risk AI registration requirements, and generative AI transparency obligations.
Read the updated South Korea analysis
India: Signs of Movement
India remains the most hands-off jurisdiction in our series, with no specific AI legislation on the books. But 2025 brought more movement than the previous several years combined. MeitY released official AI Governance Guidelines in November 2025, a Private Member’s Bill proposed a statutory AI Ethics Committee in December, and the amended IT Rules 2026 introduced mandatory labeling of AI-generated content with accelerated takedown requirements.
None of this adds up to comprehensive regulation yet, but the direction of travel is clear. India’s purely non-regulatory era appears to be winding down.
Read the updated India analysis
Cross-Cutting Themes
A few patterns stand out looking across all seven jurisdictions:
Comprehensive legislation is now the default. The EU, South Korea, Japan, China (via Cybersecurity Law amendments), and multiple US states all have enforceable AI-specific legal frameworks. Only the UK and India remain without binding comprehensive regulation, and both are moving in that direction.
AI content labeling is going global. China, India, and California all now require some form of labeling or transparency around AI-generated content. This is becoming a baseline expectation rather than a novel requirement.
The innovation-vs-safety spectrum persists. The EU and US states like California and Colorado represent the “duty of care” end with real penalties. Japan and the UK explicitly prioritize innovation. The US federal government under the Trump administration is actively trying to preempt stricter regulation. These philosophical differences aren’t going away.
Agentic AI is the next frontier. China has already released draft rules addressing agentic AI systems. Other jurisdictions are watching but haven’t acted yet. If 2025 was about getting basic AI frameworks in place, 2026 will likely see the first serious attempts to regulate autonomous AI agents.
What This Means for Builders
If you’re developing or deploying AI systems, the practical takeaway hasn’t changed from what I wrote a year ago: think about regulation early. What has changed is that the regulations are no longer hypothetical. If you’re operating in the EU, compliance is mandatory today. If you’re in the US, you need to track both federal signals and the specific states where you operate. If you’re targeting markets in Asia, each of the four countries we cover has its own distinct framework with real obligations.
The good news is that despite the divergence in approaches, there’s substantial overlap in what regulators actually care about: transparency, risk management, bias mitigation, human oversight, and documentation. Building solid practices around those fundamentals will go a long way toward compliance in most jurisdictions.
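That overlap lends itself to a simple internal tracking tool. Below is a minimal Python sketch of a jurisdiction-aware obligations checklist; the jurisdiction keys and obligation names are my own rough simplification for illustration, not a legal mapping, and real compliance work needs counsel and the actual statutory text.

```python
# Baseline fundamentals most regulators converge on.
FUNDAMENTALS = {
    "transparency",
    "risk_management",
    "bias_mitigation",
    "human_oversight",
    "documentation",
}

# Illustrative, heavily simplified market-specific extras (not legal advice).
JURISDICTION_EXTRAS = {
    "EU": {"conformity_assessment", "gpai_energy_documentation"},
    "US-CA": {"frontier_governance_framework", "incident_reporting"},
    "CN": {"content_labeling", "security_standards"},
    "IN": {"content_labeling"},
}


def obligations(markets: list[str]) -> set[str]:
    """Union of the shared fundamentals plus extras for each target market."""
    required = set(FUNDAMENTALS)
    for market in markets:
        required |= JURISDICTION_EXTRAS.get(market, set())
    return required


def gap_report(markets: list[str], done: set[str]) -> set[str]:
    """Obligations for the given markets not yet covered by current practices."""
    return obligations(markets) - done
```

The point of the structure is the one made above: covering the fundamentals once closes most of the gap, and per-market work reduces to a short list of extras, e.g. `gap_report(["EU", "IN"], FUNDAMENTALS)` leaves only the EU-specific items and content labeling.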