Public consultation on Artificial Intelligence privacy guidance

Overview

The Office of the Victorian Information Commissioner (OVIC) is updating its online resource, Artificial Intelligence – Understanding privacy obligations (OVIC’s AI privacy guidance).

The purpose of OVIC’s AI privacy guidance is to assist Victorian public sector organisations (VPS organisations) to consider their privacy obligations under the Privacy and Data Protection Act 2014 (Vic), when using or considering using personal information in Artificial Intelligence (AI) systems and applications.

Feedback summary

Between May and June 2024, OVIC consulted with agencies and the public about updating its privacy guidance, 'Artificial Intelligence – Understanding privacy obligations'.

We asked you to tell us what the guidance should include, to ensure it is relevant, practical, clear, accessible, and useful.

Thank you to those who responded to the consultation.

The sections below summarise what you told us and what OVIC is doing with your feedback, grouped under common themes.

Theme 1: Relationship with other laws and policies

What you told us: Refer to and align the guidance with Australia's national AI framework and ethics principles, and with whole-of-Victorian-Government policies, frameworks and guidance (when it is published).
What OVIC is doing: Accept, to the extent that OVIC's jurisdiction overlaps with broader AI ethics principles, frameworks and guidance.
  • OVIC is not able to provide guidance on all aspects of safe and responsible AI.
  • OVIC's jurisdiction is limited to considerations of privacy, information security and public access to information.
  • OVIC's guidance will be updated to link to existing principles and frameworks.

What you told us: Draw from other state and territory-based AI frameworks, legislation and guidance.
What OVIC is doing: Accept, where relevant to OVIC's jurisdiction.

What you told us: Refer to sector- or subject-specific AI frameworks.
What OVIC is doing: Accept, where relevant to the organisations OVIC regulates.

What you told us: Refer to other legislative, regulatory and administrative requirements relevant to the use of AI and generative AI by VPS organisations, staff and contracted service providers.
What OVIC is doing: Accept.

What you told us: Include more information on AI governance, accountability, and human rights.
What OVIC is doing: Accept.

What you told us: Include more information on records management, and link to PROV guidance on Artificial Intelligence and record keeping.
What OVIC is doing: Accept.

What you told us: Consider the potential impact of Commonwealth privacy reform.
What OVIC is doing: Accept.

What you told us: Develop a whole-of-Victorian-Government strategy on AI systems, including guidance and mandatory risk assessments.
What OVIC is doing: This is not OVIC's responsibility. The Department of Government Services is responsible for whole-of-government strategy in this area and has consulted OVIC during its development.

What you told us: Identify government decisions that can and cannot be legally made or assisted by AI technologies, and provide guidance on review rights pathways for persons impacted by decisions made or assisted by AI.
What OVIC is doing: This is outside OVIC's jurisdiction, except where decisions result in breaches of the Privacy and Data Protection Act 2014 (Vic) or the Freedom of Information Act 1982 (Vic).

Theme 2: Privacy principles, risks, harms and mitigations

What you told us: Explain the privacy risks and harms of automated decision-making systems throughout the AI lifecycle.
What OVIC is doing: Accept.

What you told us: Explain and differentiate between the privacy risks and harms of various kinds of generative AI systems (for example, tenanted/enterprise/freely available), throughout the AI lifecycle.
What OVIC is doing: Accept.

What you told us: Include guidance on how to mitigate privacy risks.
What OVIC is doing: Accept, to illustrate methods and types of controls and mitigations. OVIC is not able to endorse specific controls or mitigations, given that they are risk-based and context-specific.

What you told us: Provide guidance on the risks of inputting certain types of information into various kinds of generative AI systems (for example, tenanted/enterprise/freely available).
What OVIC is doing: Accept.

What you told us: Include guidance on when the use of automated decision making is inappropriate, and encourage a moratorium on these instances.
What OVIC is doing: Accept.

What you told us: Consider issues relating to de-identification, re-identification, data minimisation and the use of privacy-enhancing technologies.
What OVIC is doing: Accept.

What you told us: Include more information on the risks of outsourcing.
What OVIC is doing: Accept.

What you told us: Clarify guidance on collection under IPP 1.
What OVIC is doing: Accept.

What you told us: Clarify guidance on secondary use under IPP 2.
What OVIC is doing: Accept.

What you told us: Clarify guidance on 'generally available publication'.
What OVIC is doing: Accept.

What you told us: Clarify guidance around consent.
What OVIC is doing: Accept.

What you told us: Add a section on handling children's information.
What OVIC is doing: Accept.

Theme 3: Definitions

What you told us: Ensure the guidance reflects and refers to best practice standards and definitions.
What OVIC is doing: Accept.

What you told us: Expand on definitions and explanations of AI and different types of AI systems. Include examples and common use cases.
What OVIC is doing: Accept.

What you told us: Explain the difference between:
  • AI broadly, and generative AI; and
  • standalone generative AI tools (tenanted and non-tenanted), and generative AI that is integrated into existing enterprise architectures and systems.
What OVIC is doing: Accept.

Theme 4: Examples and case studies

What you told us: Include 'do' and 'don't' examples and case studies to illustrate privacy principles, risks, harms, mitigations and controls throughout the AI lifecycle.
What OVIC is doing: Accept.

What you told us: Include guidance and examples of how to communicate effectively with the public about an organisation's use of AI.
What OVIC is doing: Accept.

Theme 5: Structure, accessibility and plain language

What you told us: Create a short, plain language summary to accompany the full guidance.
What OVIC is doing: Accept.

What you told us: Develop a checklist of AI-specific privacy considerations.
What OVIC is doing: Consider. If OVIC developed a checklist, it would be designed to complement an organisation's broader AI governance and compliance checklist (which would cover AI safety considerations that extend beyond OVIC's jurisdiction).

What you told us: Structure the guidance around the AI lifecycle.
What OVIC is doing: Consider.

What you told us: Improve the use of plain language.
What OVIC is doing: Accept.

What you told us: Include footnotes at the bottom of every page, and at the end of the document.
What OVIC is doing: Consider.

What you told us: Improve the accessibility of the resource.
What OVIC is doing: Accept.

What you told us: Develop educational videos, infographics and learning programs relating to AI, and provide training on generative AI tools to promote awareness and responsible use.
What OVIC is doing: Consider feasibility as part of a future work program, depending on OVIC's priorities.

Next steps

OVIC is now updating the guidance, taking into consideration the feedback received.

A second round of public consultation is planned for later this year (2024) or early 2025, to seek your feedback on the updated content and layout of the guidance.

You can stay up to date with OVIC events and activities by signing up for the OVIC mailing list.

Why is OVIC updating its AI privacy guidance?

Since OVIC first published its AI privacy guidance in 2021, the breadth and availability of AI technologies have grown, and their use in the Victorian public sector has increased. The now widespread public availability of generative AI applications built on large language models (LLMs) is just one example of the changing and varied AI landscape.

OVIC is updating its AI privacy guidance to ensure it remains relevant, useful, clear, accessible, and practical for VPS organisations when they are designing, procuring, deploying and using AI systems.

Download

AI-Privacy-Obligations-Public-consultation-Feedback-received-and-next-steps-August-2024.docx (170.14 KB)

AI-Privacy-Obligations-Public-consultation-Feedback-received-and-next-steps-August-2024.pdf (125.86 KB)
