AI Governance

CFTC Issues Warning on the Growing Threat of AI-Driven Fraud

As technology evolves, so do the tactics of fraudsters. The Commodity Futures Trading Commission’s (CFTC) Office of Customer Education and Outreach (OCEO) has issued a timely warning about a growing threat: criminals are using generative artificial intelligence (AI) to commit fraud that is increasingly difficult to detect. In its latest advisory, “Criminals Increasing Use of Generative AI to Commit Fraud,” the OCEO highlights how this emerging technology is making it easier than ever for bad actors to create fake identities, convincing social media profiles, and even fraudulent financial platforms, posing significant risks to consumers and businesses alike.

AMF Explores the Future of AI in Regulatory Data Processing

The French Financial Markets Authority (AMF) is leading the charge in using technology to ease the burden of financial market supervision. Over the past couple of years, the AMF has been delving into the world of artificial intelligence (AI), testing how well these tools can automate the processing of regulatory data. This isn’t just about keeping pace with innovation—it’s about creating a smarter, more efficient system for overseeing a rapidly expanding regulatory landscape.

Japan's FSA Publishes AI Discussion Paper to Promote Responsible Use of AI in the Financial Sector

The Financial Services Agency (FSA) has recently published a thought-provoking discussion paper, urging the financial industry to explore the potential of artificial intelligence (AI) while being mindful of its risks. Titled “Preliminary Discussion Points for Promoting the Sound Utilization of AI in the Financial Sector,” the paper highlights how generative AI is revolutionizing industries across the board, including finance, and the FSA’s role in ensuring these advancements are responsibly harnessed.

Second Round of the Danish AI Regulatory Sandbox Now Open for Applications

The Danish Data Protection Agency and the Danish Digital Agency have announced the opening of the second round of applications for their AI regulatory sandbox. This initiative, which was launched in 2024, provides companies and authorities with access to expert guidance on navigating the complexities of the rules governing AI, particularly the General Data Protection Regulation (GDPR) and the AI Regulation.

FTC Takes a Stand Against DoNotPay’s “AI Lawyer” Claims

In a world where AI promises have become as frequent as pop-up ads, the Federal Trade Commission’s (FTC) decision to take on DoNotPay is a notable one. The company, which once boasted of offering “the world’s first robot lawyer,” has now been forced to face the music for its misleading marketing. The FTC has finalized an order against DoNotPay, following an investigation that questioned the legitimacy of its AI-powered legal services.

Five Data Protection Authorities Commit to Privacy-Protecting AI Governance

At the AI Action Summit in Paris this week, five global data protection authorities made an important pledge. On 6 February, officials from Australia, Korea, Ireland, France, and the UK signed a joint declaration, each committing to fostering an artificial intelligence ecosystem that doesn’t just innovate but also respects privacy and safeguards fundamental rights.

The CNIL’s New AI Recommendations: Fostering Innovation While Protecting Privacy in the Age of AI

In a world where artificial intelligence is pushing boundaries and reshaping industries, the question of how to protect individuals' privacy has never been more pressing. Fortunately, the GDPR (General Data Protection Regulation) isn't just a barrier to innovation; it can be the very tool that enables responsible AI development. The French Data Protection Authority, or CNIL, has just issued new recommendations that aim for the best of both worlds: advancing AI while ensuring personal data is treated with the respect it deserves.