Privacy Refresher – Collecting sensitive information
Following a joint investigation with her UK counterpart, the Privacy Commissioner has found that Clearview AI interfered with Australians’ privacy and ordered that it stop scraping facial images and delete all of its existing harvested images. This latest determination leaves no doubt as to the OAIC’s focus on the collection of sensitive information from online sources, and the illegality of scraping it. The determination, and thus this article, is also a must-read for all companies using (or considering using) machine learning technologies.
In July 2020 the Office of the Australian Information Commissioner (OAIC) and the UK Information Commissioner’s Office (ICO) announced that they would work together to investigate Clearview AI, Inc. (Clearview AI).
Clearview AI provides a search tool that permits its users to upload an image of a person. The app then uses facial recognition technology to search its database of billions of scraped images and return other images of that person, including a link to where those images are located (sometimes with accompanying information that identifies the individual). The tool is marketed as being useful for law enforcement agencies seeking to generate ‘investigative leads’.
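For readers less familiar with this class of tool, systems of the kind described above typically reduce each face to a numeric ‘embedding’ vector and return the stored images whose vectors sit closest to the query’s. The following is a minimal, purely illustrative sketch of that pattern; the embed_face function and the in-memory index are hypothetical placeholders, not a description of Clearview AI’s actual implementation:

```python
import numpy as np

def embed_face(image_bytes: bytes) -> np.ndarray:
    """Hypothetical stand-in: a production system would run a trained
    face-recognition model here to produce a fixed-length vector."""
    raise NotImplementedError

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def search(query_image: bytes,
           index: list[tuple[str, np.ndarray]],
           top_k: int = 5) -> list[str]:
    """Return the source URLs of the stored faces whose embeddings are
    most similar to the face in the query image."""
    q = embed_face(query_image)
    scored = sorted(
        ((cosine_similarity(q, vec), url) for url, vec in index),
        reverse=True,
    )
    return [url for _, url in scored[:top_k]]
```

The key point for what follows is that every entry in that index is a biometric template derived from a scraped photograph, which is why the collection question dominates the Determination.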
The OAIC announced the conclusion of the investigation on Wednesday, 3 November 2021, publishing its determination (Determination) that Clearview AI breached the Privacy Act 1988 (Cth) (Privacy Act) by:
- collecting Australians’ sensitive information without consent (APP 3.3);
- collecting personal information by unfair means (APP 3.5);
- failing to take reasonable steps to notify individuals of the collection of their personal information (APP 5);
- failing to take reasonable steps to ensure that the personal information it disclosed was accurate (APP 10.2); and
- failing to take reasonable steps to implement practices, procedures and systems to ensure compliance with the APPs (APP 1.2).
The OAIC has ordered Clearview AI to stop scraping facial images of Australians from the web and to delete the images of Australians that it currently holds. The ICO is separately considering its next steps.
As covered in our recent privacy refresher, APP 3.3 generally prohibits the collection of sensitive information about an individual without their consent. The collection must also be reasonably necessary for one or more of the entity’s functions or activities.
Clearview AI collected personal information from publicly available sources. Contrary to a commonly held belief, information available online is not simply there for the taking: the Privacy Act’s collection requirements still apply. Collection of personal information from publicly available sources, including by automated means such as scraping, is still collection of personal information to which the APPs apply. Scraping individuals’ sensitive information, including biometric information, therefore requires their informed consent.
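In engineering terms, the consequence is that a compliant collection pipeline needs a consent gate before an image is ever stored, not after. The sketch below is illustrative only; has_informed_consent and the store object are hypothetical placeholders for an organisation’s own consent records and storage layer:

```python
def has_informed_consent(subject_id: str) -> bool:
    """Hypothetical lookup against an organisation's own consent records."""
    raise NotImplementedError

def collect_face_image(subject_id: str | None, image_bytes: bytes, store) -> bool:
    # Public availability is irrelevant: automated collection is still
    # 'collection' under the Privacy Act, and a facial image is
    # sensitive (biometric) information for the purposes of APP 3.3.
    if subject_id is None or not has_informed_consent(subject_id):
        # An indiscriminate scraper usually cannot even identify the
        # subject, let alone point to a record of their informed
        # consent, which is why scraping of faces at scale is
        # effectively incompatible with APP 3.3.
        return False
    store.save(subject_id, image_bytes)
    return True
```

The gate makes the legal point concrete: an indiscriminate scraper has no way to satisfy the first condition, so the compliant path is never reached.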
The Determination emphasises that the kind of personal information involved in this case is especially sensitive. Photos of individuals’ faces constitute biometric identity information. In a statement on Wednesday the Privacy Commissioner remarked that 'covert collection of this kind of sensitive information is unreasonably intrusive and unfair' and 'carries significant risk of harm to individuals, including vulnerable groups … whose images can be searched on Clearview AI’s database'. Biometric identity information (in this case, vectors descriptive of an individual’s own face) 'cannot be reissued or cancelled and may be replicated and used for identity theft'.
APP 3.5 prohibits the collection of personal information other than by fair means. Companies often gloss over this requirement without due consideration of how it applies in practice. Assessing whether a means of collection is fair requires a balancing of competing interests; as a general rule, however, covert or surreptitious collection of personal information (e.g. through web scraping) will almost never be fair.
The Privacy Commissioner notes that the collection of biometric information for the purpose of providing a commercial offering to law enforcement agencies 'carries significant risk of harm to individuals', including 'harms arising from misidentification of a person of interest by law enforcement' and the 'risk of identity fraud that may flow from a data breach'. Given the 'commercial purposes, and the covert and indiscriminate method of collection', Clearview AI’s collection of personal information was found to be unreasonably intrusive and therefore unfair.
APP 5 requires an APP entity collecting personal information about an individual to take reasonable steps to notify the individual of certain matters (e.g. as set out in a privacy policy or collection statement). This must occur at or prior to the time of collection wherever practicable.
It is on this issue that Clearview AI made its most ‘interesting’ arguments, including that: (a) it had a privacy policy accessible through its website; and (b) at certain times it offered Australian residents an online form to opt out of its search results. The Privacy Commissioner found that not only was Clearview AI’s privacy policy deficient, but Clearview AI also failed to take reasonable steps to notify all relevant individuals of it on collection of their personal information (a similar problem to that discussed in the 7-Eleven determination).
Because the personal information was collected covertly, individuals by definition had no means of becoming aware of Clearview AI’s privacy policy. Furthermore, Clearview AI’s database included images of children and other individuals with particular needs (i.e. some of the most vulnerable people online), so Clearview AI was required, but failed, to ‘take more rigorous steps’ to ensure such individuals were notified of, and understood, its privacy policy.
APP 10.2 requires an APP entity to take reasonable steps to ensure that the personal information it discloses is accurate, up-to-date, complete and relevant. What constitutes ‘reasonable steps’ will vary depending on the circumstances. In these circumstances, Clearview AI disclosed personal information for the purpose of displaying image matches to its law enforcement customers in response to search requests. Those customers could make ‘serious decisions’ based on the use of the tool. Inaccurate results would lead to misidentification, from which ‘significant harm’ could result.
The Privacy Commissioner gave ‘little weight’ to Clearview AI’s claims that it did not guarantee the accuracy of the tool in its terms and conditions with customers and marketing collateral. Such claims did not detract from the statements on Clearview AI’s website and those made to prospective users, which the Privacy Commissioner found to ‘clearly indicate’ the purpose of the tool.
The standard for ‘reasonable steps’ as to accuracy in these circumstances, as elucidated in the Determination, is extremely high. On the evidence before the Privacy Commissioner, the only step Clearview AI actually took to ensure the accuracy of the personal information it disclosed (i.e. its search results) was to submit the tool to a single accuracy test, conducted in October 2019. The methodology for that test was ‘adapted from a test designed for a different facial recognition technology’, which led to ‘material limitations in the testing methodology’.
The Determination provides invaluable insight for other organisations seeking to rely on the predictive capability of machine learning algorithms. Drawing reasonable inferences from the Determination, satisfying the APP 10.2 accuracy requirement may require privacy considerations to be baked into the training and testing of an AI model from the outset, in particular rigorous accuracy testing using a methodology designed for the specific technology and its intended use, rather than one adapted from a test designed for different technology (see the sketch below).
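By way of illustration only, a pre-deployment evaluation along the following lines, run on test data representative of the population and image conditions the tool will actually encounter, is the kind of step the Determination suggests was absent here. All names in this sketch are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    query_id: str            # ground-truth identity of the query face
    returned_ids: list[str]  # identities returned by the model under test

def top_k_match_rate(cases: list[TestCase], top_k: int = 5) -> float:
    """Fraction of queries whose true identity appears in the top_k
    results; a low rate signals misidentification risk under APP 10.2."""
    hits = sum(1 for c in cases if c.query_id in c.returned_ids[:top_k])
    return hits / len(cases)

# A methodology designed for THIS technology and ITS intended use would
# require the test cases to mirror the demographics and real-world image
# conditions (resolution, lighting, age of photo) of actual searches,
# and the evaluation to be repeated as the database and model change.
```

A single test adapted from an unrelated methodology, as in Clearview AI’s case, would not satisfy this standard.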
APP 1.2 requires an APP entity to take reasonable steps to implement practices, procedures and systems that will ensure its compliance with the APPs.
While many of the Privacy Commissioner’s comments in the Determination are fact-specific, she refers to the OAIC’s published guidance on how to determine whether to undertake a privacy impact assessment (PIA) and concludes that undertaking a PIA was a reasonable (i.e. essential) step for Clearview AI to take before deploying the tool. This conclusion was reached because the tool involved the large-scale, covert and indiscriminate collection of sensitive biometric information, including that of children and other vulnerable individuals, using new technology, for disclosure to law enforcement agencies whose ‘serious decisions’ could cause significant harm if based on inaccurate results.
Clearview AI did not conduct a PIA or any similar systematic assessment, and so was held to have breached APP 1.2 by omission.
The Determination stresses the importance (indeed, the near-necessity) of conducting a PIA on any proposed AI model and/or new business model before proceeding. While certain practices, such as those the subject of the Determination, will clearly constitute interferences with individuals’ privacy, many cases are ‘borderline’ and minor adjustments may be all that is needed to comply with the Privacy Act. It is much more cost-effective to get good advice upfront and make those adjustments at the design or development phase than to mop up a compliance mess after the fact (which, in practice, may mean terminating or significantly restructuring the activity).
The Determination also provides instructive guidance on a number of related compliance issues.
If these issues are of interest to your organisation, the Determination should be the impetus to revisit them.
Clyde & Co has the largest dedicated cyber incident response and privacy advisory practice in Australia and New Zealand, and has more 5-Star Cyber Lawyers than any other firm. Our experienced team has handled thousands of data breach and technology-related disputes in recent years, as well as privacy reviews, assessments and solutions advice, including a number of major AI/ML projects.
From pre-incident readiness reviews, solutions and advice, and new-technology privacy strategies, through breach response, to the defence of regulatory investigations and proceedings and recovery actions against wrongdoers, we assist clients globally across the full cyber lifecycle. Our team is also highly regarded for its expertise and experience in new technologies, financial services IT prudential requirements, and the management of all forms of disputes across multiple sectors, including advising on some of the most newsworthy class actions commenced in Australia.
Our 24-hour cyber incident response hotline or email allows you to access our team directly around the clock. For more information, contact us on:
Australia: +61 2 9210 4464
New Zealand: +64 800 527 508
cyberbreach@clydeco.com