New AI Privacy Guidance from OAIC Simplifies Compliance for Businesses

The Office of the Australian Information Commissioner (OAIC) has released two new guides to help businesses navigate privacy obligations when using artificial intelligence (AI) products. The guides clarify how the Privacy Act 1988 applies to AI, with the aim of improving compliance and safeguarding privacy as AI technologies become more prevalent in business practices.

The first guide is designed to assist businesses in complying with privacy laws when using commercially available AI products. It also offers practical advice on selecting AI tools that align with privacy best practices. The second guide targets developers, specifically those utilizing personal information to train generative AI models. It highlights how existing privacy laws govern AI development and emphasizes the need for responsible data usage in creating AI systems.

Privacy Commissioner Carly Kind acknowledged the growing interest in AI governance, emphasizing its importance in today’s business landscape: “How businesses should be approaching AI and what good AI governance looks like is one of the top issues of interest and challenge for industry right now,” she said. The new guides aim to demystify privacy obligations and foster a clearer understanding of how businesses can use AI while maintaining compliance with privacy regulations.

"AI products should not be used simply because they are available," Kind stated. "Robust privacy governance and safeguards are essential for businesses to gain advantage from AI and build trust and confidence in the community."

Tackling Privacy Risks in AI

The guidance aligns with the OAIC’s broader focus on promoting privacy in emerging technologies and addressing concerns over AI’s increasing use. The Commissioner underscored that the regulator will continue to monitor AI development and take action when necessary, particularly where businesses fail to implement adequate privacy protections.

According to Kind, Australians are increasingly wary of how their personal data is being used, especially in the context of generative AI models. “The community and the OAIC expect organisations seeking to use AI to take a cautious approach, assess risks, and make sure privacy is a key consideration,” she said.

While the guides focus on current privacy laws and practices, Commissioner Kind noted the importance of future-proofing privacy protections in the rapidly evolving technological landscape. “With developments in technology continuing to evolve and challenge our right to control our personal information, the time for privacy reform is now,” she said, suggesting that stronger privacy protections—such as a positive obligation on businesses to ensure personal data handling is fair and reasonable—would further enhance AI governance.

The OAIC's guidance on generative AI development outlines clear boundaries for developers using personal information. It emphasizes the need for developers to be cautious, particularly when collecting, storing, or disclosing personal information. The guidance also warns of the risks associated with data re-identification, a growing concern as advancements in AI make de-identified information increasingly vulnerable to re-identification.

While not all generative AI models involve personal data, developers are urged to consider whether their AI systems might inadvertently collect such information. Early risk assessments and privacy-by-design principles are recommended to minimize privacy breaches and build trust with users.

In addition to privacy compliance, the OAIC’s new guidance stresses that following best practices in data protection can help developers avoid contributing to public anxiety around AI use. Businesses are encouraged to adopt transparency and offer choices to users to foster trust and confidence in their AI systems.

Global Alignment & Local Compliance

To ease the burden on businesses that operate across multiple jurisdictions, the OAIC has taken into account global privacy standards when drafting its guidance. The Australian regulator examined similar frameworks, including those from the UK Information Commissioner’s Office and Canadian privacy regulators, to ensure its guidance aligns with international best practices. However, the guidance remains firmly grounded in the Australian Privacy Act, highlighting the country’s unique legal framework.

Notably, Australia’s Privacy Act lacks provisions for legitimate interests or business improvement exceptions, which are common in other jurisdictions. Additionally, the collection of sensitive information is tightly controlled, generally requiring explicit consent.

The OAIC’s guidance reflects these differences and encourages businesses to carefully consider the legal and ethical implications of using personal data to train AI models.

Evolving Technology, Evolving Guidance

As AI technology advances, the OAIC acknowledges that privacy risks and mitigation strategies will need to adapt. The guidance includes a combination of high-level principles and practical examples to help developers remain compliant as the industry evolves. Issues like model unlearning and the growing risk of data re-identification are particularly relevant in today’s AI landscape, and businesses are urged to stay vigilant in addressing these concerns.

By providing clear expectations, the OAIC aims to make privacy compliance more straightforward for businesses, fostering innovation while ensuring community trust. Commissioner Kind concluded by emphasizing that privacy compliance is not just a legal obligation but a business advantage.

“Privacy compliance is good for consumers, who feel more confident participating in the digital economy, but also for organisations, which can innovate knowing guardrails are in place that will help to earn the trust of the community,” she said.

The OAIC’s new guides mark an important step in shaping AI governance in Australia, providing businesses with the tools they need to navigate the complexities of privacy and AI. As AI continues to evolve, the OAIC’s commitment to safeguarding personal data remains critical to ensuring a safe and trusted digital future.
