The CNIL’s New AI Recommendations: Fostering Innovation While Protecting Privacy in the Age of AI

Key Takeaways

  • CNIL’s New Guidelines for AI and GDPR: The French Data Protection Authority (CNIL) has issued new recommendations to ensure that AI development aligns with the GDPR, supporting innovation while safeguarding personal data privacy.
  • Flexibility in AI Development: The CNIL recognizes that AI systems, especially general-purpose models, may not have predefined uses at the start. Operators can describe the system’s purpose and functionalities rather than specifying all potential applications, allowing for flexibility in AI design.
  • Transparency and Data Minimization: The guidelines emphasize transparency in informing individuals when their data is used for AI training, and stress the importance of data minimization—using only the necessary data without over-processing personal information.
  • Data Retention and Reuse: While the GDPR limits data retention, the CNIL acknowledges that long-term data storage may be necessary for scientific or financial research, provided the data is secured. Reusing data for AI models is permissible as long as it aligns with its original collection purpose.
Deep Dive

In a world where artificial intelligence is pushing boundaries and reshaping industries, the question of how to protect individuals' privacy has never been more pressing. Fortunately, the GDPR (General Data Protection Regulation) isn't just a barrier to innovation—it can be the very tool that enables responsible AI development. The French Data Protection Authority, the CNIL, has just issued new recommendations that aim to capture the best of both worlds: advancing AI while ensuring personal data is treated with the respect it deserves.

This year, France is hosting the AI Action Summit, where the country is showcasing AI’s vast potential. From February 6 to 11, 2025, a variety of events are focusing on how AI can boost Europe’s competitiveness. Yet as the future unfolds, it’s critical to stay grounded in the regulations that ensure both security and trust. Enter the GDPR.

Since 1978, France has been ahead of the curve in protecting personal data. And with the GDPR's far-reaching impact, it’s now the international benchmark for data privacy. But as the technology around us evolves, so too must the regulation. That’s where the CNIL comes in, ensuring that the GDPR isn’t just a list of rules, but a living framework that supports responsible AI while safeguarding people’s rights.

So, How Does the GDPR Actually Apply to AI?
While some AI models, such as those trained on anonymized data, don't trigger GDPR requirements, others—like large language models—certainly do. These models can process personal data in ways that require careful attention.

The CNIL’s new guidance clarifies that when AI models use personal data—whether during training or through prompts—the GDPR's data protection principles still apply. But here's the kicker: the way those principles apply has to be adapted to the AI landscape.

For instance, when developing a general-purpose AI system, the CNIL recognizes that operators may not always know exactly how the system will be used at the outset. This is fine! Instead of defining every possible application, the operator can describe the system’s overall purpose and its key functionalities. Flexibility is the name of the game:

  1. Transparency Matters: When personal data is used to train an AI model that might "remember" this information, those individuals deserve to know. But here's the nuance: how you inform them can vary. If an AI model is built using third-party data, and contacting each individual isn’t feasible, organizations can rely on a more general disclosure—perhaps posted on a website. In some cases, listing the categories of sources might suffice.
  2. Data Minimization with a Twist: Yes, the GDPR’s data minimization principle still stands. But when it comes to AI, this doesn't mean you can’t use large datasets. The trick is selecting and cleaning the data to ensure it’s useful for training without over-processing personal information. It’s a fine line, but one the CNIL is helping to walk.
  3. When to Retain Data (and for How Long): Usually, retaining personal data is a limited practice under the GDPR. However, the CNIL recognizes that some datasets are too valuable to toss aside. For instance, large-scale scientific and financial research often requires long-term data storage. In such cases, provided the data is protected and stored securely, retention can be extended.
  4. Reuse of Data: Can you reuse data in AI models, even if it was collected for a different purpose? Yes, but with conditions. If the data was legally collected and you’re reusing it in a way that aligns with its original purpose, it’s usually fair game. The CNIL encourages this kind of responsible reuse but stresses that the original purpose must always be respected.

The Rights to Be Informed, to Access, and to Delete
The GDPR gives individuals the right to access, correct, and delete their personal data—but in the world of AI, making that happen is easier said than done. AI models can make it tricky to identify personal data within a system, and modifying a model might be practically impossible in some cases.

That said, the CNIL doesn’t expect developers to jump through impossible hoops. While it may not always be feasible to honor a request to exercise these rights immediately, flexibility is key. If fulfilling a request is too difficult or costly, the CNIL may allow for more time or propose alternative solutions. What’s important is that privacy is prioritized from the start of any AI development, and that developers actively work to minimize risks.

The CNIL also highlights the importance of anonymizing models where possible, even if this means adjusting the model's functionalities. Designing AI systems with privacy in mind is not just the responsible thing to do—it’s the smart thing to do.

Real World Insights at the Heart of the Guidelines
What’s refreshing about these recommendations is that they aren’t just the work of policymakers sitting behind desks—they’re shaped by real-world input. The CNIL consulted a broad range of stakeholders, including businesses, academics, legal advisors, and trade unions. The result? Recommendations that take into account the practical challenges developers face in the real world, not just in theory.

And the CNIL’s work isn’t stopping here. As the field of AI continues to evolve, the authority will continue to update its recommendations, ensuring that privacy protection keeps pace with innovation. The CNIL is also keeping an eye on broader European efforts, like the development of a good practices code for general-purpose AI, which could further refine how the GDPR applies to these technologies.

The CNIL’s new recommendations are a step forward in making sure that AI can continue to flourish—without compromising on privacy. As businesses and developers strive to harness the power of AI, these guidelines will provide the roadmap to do so responsibly. The CNIL has struck a balance that not only supports innovation but protects the rights of individuals, ensuring that AI’s future in Europe remains both bright and trustworthy.

As AI’s capabilities continue to grow, the conversation about privacy and data protection will only become more critical. The CNIL’s work reminds us that progress doesn’t have to come at the expense of rights—it can be the very force that drives them forward.

The GRC Report is your premier destination for the latest in governance, risk, and compliance news. As your reliable source for comprehensive coverage, we ensure you stay informed and ready to navigate the dynamic landscape of GRC. Beyond being a news source, the GRC Report represents a thriving community of professionals who, like you, are dedicated to GRC excellence. Explore our insightful articles and breaking news, and actively participate in the conversation to enhance your GRC journey.  