AI Governance

New York Moves to Rein In Frontier AI With Transparency & Reporting Rules

On December 22, New York Governor Kathy Hochul signed the Responsible AI Safety and Education Act, or RAISE Act, setting what state leaders describe as a nation-leading standard for transparency and accountability among developers of so-called frontier AI models. The legislation requires large AI developers to publicly document their safety practices and to notify the state within 72 hours when serious harm linked to their systems is identified.

Korea’s Privacy Regulator Pivots Toward Prevention as AI Reshapes the Data Landscape

The Personal Information Protection Commission (PIPC) recently unveiled its policy directions for 2026, laying out a sweeping plan to move Korea’s privacy regime away from after-the-fact penalties and toward a more preventive, risk-based approach designed for an AI-embedded society. The roadmap was presented on December 2 at the Sejong Convention Center during a joint reporting session with the Ministry of Science and ICT, the Korea Aerospace Administration, and the Korea Media and Communications Commission.

Australian Regulator Warns Rapid AI Expansion Is Outpacing Competition & Consumer Safeguards

Artificial intelligence is moving from novelty to infrastructure at a pace that is increasingly difficult for regulators to ignore. According to a new industry snapshot from the Australian Competition and Consumer Commission, AI is rapidly becoming embedded across products, platforms, and services in ways that could reshape competition, consumer trust, and market dynamics. While the ACCC acknowledges the growing benefits AI is delivering to businesses and consumers, it warns that the pace and scale of adoption are outstripping existing oversight frameworks, strengthening the case for continuous regulatory monitoring.

Trump Executive Order Seeks to Rein in State AI Laws, Drawing Pushback From States & Lawmakers

President Donald Trump recently signed a sweeping executive order aimed at curbing state-level regulation of artificial intelligence, framing the move as necessary to preserve U.S. competitiveness and prevent what the administration describes as a fragmented and burdensome regulatory landscape.

When AI Moves Faster Than Governance

The first wave of obligations under Europe’s AI Act quietly came into force on August 2, 2025. It was the moment organizations were meant to turn policy debates into practice, especially for general-purpose AI models already woven into customer service, analytics, and day-to-day operations. But just as this new era of AI oversight began, another development signaled how uneven the landscape still is.

Brussels Opens Antitrust Investigation Into Meta’s WhatsApp AI Restrictions

The European Commission has launched a formal antitrust investigation into whether Meta is unfairly limiting third-party artificial intelligence providers’ access to WhatsApp, potentially shutting out competitors to its own “Meta AI” service.

Italy Challenges Meta’s AI Strategy on WhatsApp as Antitrust Risks Intensify

Italy’s competition regulator is tightening the screws on Meta, warning that the company’s latest changes to WhatsApp could choke off competition in the fast-moving AI chatbot market just as it begins to take shape.