AI Governance

EDPB Opinion Puts GDPR Principles at the Heart of Responsible AI Development

The European Data Protection Board (EDPB) has weighed in on one of the most pressing issues of our time: how to ensure that AI technology respects privacy while driving innovation. In a newly adopted opinion, the EDPB tackled the thorny questions of when AI models can be considered anonymous, how “legitimate interest” fits into the equation, and what happens if an AI model is built on shaky—if not outright unlawful—data practices.

AI Takes Off in Sweden’s Financial Sector, but Risk Management Lags

AI is reshaping Sweden’s financial industry at a breakneck pace. From streamlining operations to analyzing mountains of data, generative AI tools like ChatGPT, Copilot, and Gemini are proving indispensable for firms looking to gain an edge. But while the technology is rapidly becoming a fixture, the frameworks needed to manage the risks it brings are lagging dangerously behind.

FTC Cracks Down on Evolv Technologies Over Misleading AI Security Claims

The Federal Trade Commission (FTC) is sending a strong message to Evolv Technologies and the broader artificial intelligence (AI) industry: if you’re going to promise product features powered by AI, you’d better deliver. The agency has accused the Massachusetts-based company of making false claims about its AI-driven security scanners, which are widely used in schools, hospitals, and stadiums.

Regulators Must Find the Balance Between AI Innovation & Financial Stability, Says Fed Governor Bowman

At the 27th Annual Symposium on Building the Financial System of the 21st Century, Federal Reserve Governor Michelle W. Bowman addressed the growing role of artificial intelligence (AI) in the financial sector. Speaking to an audience of industry professionals and regulators, Bowman emphasized the importance of striking a delicate balance between fostering innovation and ensuring the stability of the financial system.

Confronting AI’s Complexities & Risks: The GRC Perspective

Artificial Intelligence (AI) is no longer a distant technological marvel; it's a driving force in reshaping how industries operate, innovate, and grow. From transforming healthcare with predictive analytics to revolutionizing the financial sector with automated trading systems, AI is everywhere. But as organizations embrace these advancements, they must also confront a growing set of challenges—legal, ethical, and operational—that can have serious consequences if not properly managed. This is where governance, risk, and compliance (GRC) come into play.

New AI Privacy Guidance from OAIC Simplifies Compliance for Businesses

The Office of the Australian Information Commissioner (OAIC) has released two new guides to help businesses navigate privacy obligations when using artificial intelligence (AI) products. These guides provide clarity on how the Australian Privacy Act 1988 applies to AI, aiming to improve compliance and safeguard privacy as AI technologies become more prevalent in business practices.

Frances Haugen Advocates for AI Whistleblowing: Highlights from Her Recent Wall Street Journal Interview

Frances Haugen, the former Facebook product manager who gained attention for exposing internal documents that formed the basis of The Wall Street Journal’s Facebook Files series, is now turning her focus to the burgeoning field of artificial intelligence (AI). In a recent interview with The Wall Street Journal, Haugen underscored the increasing importance of whistleblowing in industries that rely heavily on AI—industries that, she argues, are often shrouded in secrecy and controlled by a small number of powerful players.