Artificial intelligence in the public sector – inference and collection
This blog post was written by Asher Gibson, Policy Officer, OVIC and Tessa Micucci, Lawyer, OVIC
Artificial Intelligence (AI) is a way for computers to perform tasks that would ordinarily require human intelligence, such as identifying objects, translating speech and making decisions. In some areas, such as synthesising large amounts of data, AI can easily accomplish tasks that humans cannot. AI systems can be particularly good at using data to infer information or make predictions, but when such inferences produce data about identifiable individuals, privacy laws such as the Privacy and Data Protection Act 2014 (PDP Act) can apply.
Organisations that infer personal information are collecting the inferred information (this process is sometimes referred to as ‘collection by creation’). Information Privacy Principle (IPP) 1.1 prohibits organisations from collecting unnecessary personal information. This means that organisations cannot infer personal information unless that information is actually necessary for their functions or activities.
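To make this concrete, below is a deliberately simplified Python sketch of 'collection by creation'. The student record, field names and decision rule are entirely hypothetical, and a real AI system would use a trained statistical model rather than a hand-written rule; the point is only that the inferred value is new personal information about an identifiable individual, collected at the moment it is created.

```python
# Hypothetical illustration only: a hand-written rule standing in for a
# trained AI model. Inferring 'at_risk' creates NEW personal information
# about the student that was never collected from them directly.

student_record = {
    "name": "Jane Citizen",       # hypothetical individual
    "quiz_scores": [42, 38, 55],  # information collected directly
    "logins_last_week": 1,        # information collected directly
}

def infer_at_risk(record: dict) -> bool:
    """Toy stand-in for an AI model's prediction step."""
    avg_score = sum(record["quiz_scores"]) / len(record["quiz_scores"])
    return avg_score < 50 and record["logins_last_week"] < 3

# Producing the inferred value is itself a collection of personal
# information: 'collection by creation'.
student_record["at_risk"] = infer_at_risk(student_record)
print(student_record["at_risk"])  # True
```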
In some cases, it may be unclear in advance what information can be inferred, or how accurate that information will be. This can make it difficult for organisations to be certain that inferred information is necessary until after it has been examined. If an organisation expects that inferring personal information will be necessary, but the information turns out to be unnecessary, the organisation should destroy that information in line with IPP 4.2.
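Continuing the hypothetical sketch above, an organisation might implement a post-inference review step that destroys inferred information found to be unnecessary. The field names and the necessity test below are illustrative assumptions, not requirements drawn from the PDP Act.

```python
# Hypothetical sketch of a post-inference necessity review. The inferred
# fields and the necessity test are illustrative assumptions only.

INFERRED_FIELDS = {"at_risk", "predicted_grade"}

def is_necessary(field: str, purpose: str) -> bool:
    """Placeholder for an organisation's own assessment of whether an
    inferred field is necessary for its functions or activities."""
    necessary_for = {"at_risk": {"student_support"}}
    return purpose in necessary_for.get(field, set())

def review_inferences(record: dict, purpose: str) -> dict:
    """Keep inferred fields only while they remain necessary; destroy
    the rest, consistent with the approach IPP 4.2 suggests."""
    return {
        field: value
        for field, value in record.items()
        if field not in INFERRED_FIELDS or is_necessary(field, purpose)
    }

record = {"name": "Jane Citizen", "at_risk": True, "predicted_grade": "C"}
print(review_inferences(record, "student_support"))
# {'name': 'Jane Citizen', 'at_risk': True}  -- predicted_grade destroyed
```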
IPP 1 anticipates that personal information can be collected either from the person the information is about, or from someone else. Inferring information about a person is not collecting information directly from that person. In our view, collecting personal information from an AI system is therefore no different, for the purposes of IPP 1, from collecting it from a third party who is a natural person: an organisation that infers personal information is collecting that information from 'someone else' (which could be, for example, the AI system used to produce it).
Compared with collecting personal information directly from the individual, inference is an inherently less fair method of collection. When information is collected directly from an individual, that individual is necessarily aware that the collection is taking place. For example, if a student tells a teacher that they are struggling with a particular topic in class, the student knows what they are sharing with the teacher. On the other hand, if an AI education system infers that a student is struggling with a topic, the student will not be intrinsically aware of this. Direct collection can also give individuals autonomy over the circumstances in which they divulge information about themselves, whereas inference does not require individuals to divulge information at all and may therefore afford them less autonomy.
This is reflected in IPP 1.2, which prevents organisations from collecting personal information by unfair means. If fairer collection methods than inference are available, collecting through inference may therefore be inappropriate. IPP 1.4 makes a similar point: where reasonable and practicable, organisations must collect personal information directly from the individual to whom it relates. As inference is indirect collection, organisations are generally precluded from inferring personal information if they could instead collect it directly.
IPP 1.5 requires organisations to take reasonable steps to ensure individuals are aware of a range of matters regarding the information inferred about them, unless doing so would pose a serious threat to life or health. Given that individuals may not be inherently aware that AI is making inferences about them, the IPP 1.5 notification requirements are especially important for ensuring transparency.
AI is receiving increasing attention from government, private organisations, regulators and society as a whole. In 2018 OVIC produced an issues paper on AI and privacy; in 2019 OVIC published the book Closer to the Machine: Technical, social, and legal aspects of AI; and this year OVIC is developing guidance on how the Victorian public sector should approach AI from a privacy perspective.