Rolling in the deepfakes: generative AI, privacy and regulation

  • Market study | 2 October 2023
  • Asia-Pacific, North America, UK and Europe
  • Data protection and privacy

Artificial intelligence (AI) is increasingly permeating all aspects of modern society and commerce, from automated decision-making processes to facial recognition and generative AI. At the same time, rapid advances in AI technology, such as deep learning, have fuelled the proliferation of misinformation through deepfakes, a type of synthetic AI-generated media that has become increasingly difficult to detect, both by the human eye and by existing detection technologies. These developments have significantly increased privacy, cybersecurity and identity theft risks at an individual, enterprise and state level.

Below, we address some of the key challenges that this product of generative AI poses to privacy and consider some of the current proposed legislative responses in Australia and the European Union (EU), in particular the EU's proposed AI Act as it regards synthetic deepfake media. Finally, we suggest a possible way forward using a "non-synthetic" model of media authentication. In doing so, we urge that any regulation of these technologies, and of any consequent deepfakes, must be fit for purpose, balancing the need for regulation and the protection of individuals with fostering technological innovation and free speech.

What are deepfakes?

The word "deepfake", also referred to as deep synthesis technology, is a combination of "deep learning", a branch of machine learning which uses artificial neural networks to create synthetic media from existing image, audio or visual files, and "fake", indicating that the media produced is inauthentic (ie created by AI). In common usage, the term implies an intention to mislead or misinform the consumer of that AI-created media (or deepfake) into believing that it is authentic, and is juxtaposed with more elementary image manipulation techniques dubbed "cheapfakes".

Deepfakes are created by various methods that utilise machine learning, one of which uses generative adversarial networks to develop two complementary data sets. One set is built by analysing the target subject's appearance and voice in isolation to capture and encode their unique biometric characteristics. Often, such source material (eg photographs) is obtained by scraping the public web, as in the way Clearview AI collected images of individuals' faces for use in its facial recognition software marketed to law enforcement and private entities in Australia (and overseas). The other set comes from scanning other faces, voices and images (also often scraped from the public web) to feed the data set with facial features and other biometric information. Those two data sets are then used to train an AI neural network to combine the subject's unique characteristics with the acquired knowledge of general human expression, and so synthesise the target's facial features, voice, mannerisms and so on to generate deepfake material at will.
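
To make the adversarial training described above concrete, the following is a minimal, illustrative sketch in PyTorch. Everything here (network sizes, random stand-in data, the training loop) is a simplifying assumption for exposition; real deepfake pipelines are far larger and operate on curated face and voice datasets rather than random tensors.

```python
# Toy generative adversarial network (GAN): a generator learns to produce
# synthetic data while a discriminator learns to flag it as fake, each
# improving against the other. Sizes and data are placeholders.
import torch
import torch.nn as nn

LATENT_DIM, IMG_DIM = 64, 784  # eg a flattened 28x28 image

generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh())
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid())

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
real = torch.rand(32, IMG_DIM) * 2 - 1  # stand-in for scraped source material

for step in range(100):
    # Discriminator step: learn to distinguish real samples from generated ones.
    fake = generator(torch.randn(32, LATENT_DIM)).detach()
    d_loss = (loss_fn(discriminator(real), torch.ones(32, 1)) +
              loss_fn(discriminator(fake), torch.zeros(32, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: learn to produce samples the discriminator accepts as real.
    fake = generator(torch.randn(32, LATENT_DIM))
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```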

As deepfakes become more advanced, detecting them with the human eye becomes increasingly difficult, making them more likely to mislead or misinform. Detection technology is therefore being developed in parallel to identify synthetic AI media content such as deepfakes. An example is Intel's "FakeCatcher", which analyses eye movement and uses photoplethysmography to detect changes in blood flow, and has been reported to detect deepfakes with 96% accuracy. Additionally, projects such as "Detect Fakes" by the Massachusetts Institute of Technology (MIT)1 have been created to educate individuals in detecting deepfake material. However, as the technology to create convincing deepfakes becomes ever more advanced, this essentially creates a "deepfakes arms race"2 between those using deepfake technology to perpetrate identity theft, commit crime and spread disinformation on one hand and, on the other, the development of technology used to detect such material.
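
By way of illustration only, the sketch below shows the general photoplethysmography idea that detectors of this kind rely on: genuine video of a live face carries a faint periodic colour change driven by the subject's pulse, which purely synthetic faces often lack. This is our own simplified assumption of the approach, not Intel's actual FakeCatcher algorithm.

```python
# Toy remote-photoplethysmography (rPPG) signal check: measure how much
# of the face video's colour variation sits in the human pulse band.
import numpy as np

def pulse_signal_strength(frames: np.ndarray, fps: float) -> float:
    """frames: (num_frames, height, width, 3) RGB video of a face region.
    Returns the share of spectral energy in the human pulse band."""
    # Average green-channel intensity per frame (green best tracks blood volume).
    green = frames[:, :, :, 1].mean(axis=(1, 2))
    green = green - green.mean()                      # remove the DC component
    spectrum = np.abs(np.fft.rfft(green)) ** 2
    freqs = np.fft.rfftfreq(len(green), d=1.0 / fps)  # frequency axis in Hz
    pulse_band = (freqs >= 0.7) & (freqs <= 4.0)      # roughly 42-240 bpm
    return spectrum[pulse_band].sum() / (spectrum.sum() + 1e-12)

# Random noise (a stand-in for a synthetic face with no pulse) should score
# low; real footage of a live face would score noticeably higher.
fake_like = np.random.rand(300, 64, 64, 3)
print(pulse_signal_strength(fake_like, fps=30.0))
```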

Although outside the scope of this article, we also note that in the cybersecurity context, there are additional chilling examples of use of deepfakes to impersonate government officials and other public figures, such as (more recently) Ukrainian President Volodymyr Zelensky.3

Regulation of generative AI

Australia

At the time of writing, the use of deepfake technology and the creation or use of deepfake images, audio or video is not specifically regulated in Australia, for example by specialised AI legislation. However, the collection, use and disclosure (collectively, processing) of personal information is regulated by the Privacy Act 1988 (Cth), and photographs and audio-visual material that record a person's likeness and physiological attributes ("whether true or not", ie a deepfake) constitute biometric information, which is "sensitive information" under that Act. As a matter of course, sensitive information requires a higher level of protection, and the Privacy Act requires consent to collect, use and disclose it. In the case of deepfakes, "collect" includes the creation of the material using personal and/or sensitive information.

Currently, entities subject to the Privacy Act, including those offshore entities now subject to the Privacy Act based on their activities (Australian Privacy Principles (APP) entities), that collect biometric information directly from, or indirectly about, individuals located in Australia must implement practices to:

  • only collect such information where it is reasonably necessary for an existing business function
  • only collect, use and disclose such information with informed and voluntary consent, and for a consented purpose which cannot otherwise be achieved without the collection of biometric information and
  • have appropriate information security measures in place with respect to that biometric information.

As briefly noted above, an entity based overseas will be an APP entity and subject to the Privacy Act where there is an "Australian link". For example, an overseas entity "carries on business in Australia" where it processes the biometric information of Australian-located individuals (at the time the information is collected or created), even if it has not collected that information directly from Australian-located individuals or stored it in Australia at any time. In practice, this means that certain overseas entities processing the biometric information of Australian-located individuals (eg deepfakes) will also be required to comply with the Privacy Act requirements noted above. However, these privacy protections will likely not apply to deepfakes created by small businesses (where exempt from the Privacy Act) or by individuals outside of a business context.

The Attorney-General's Privacy Act Review Report published in February 2023 proposed some 116 changes to the Privacy Act. These included broadening the definition of personal and sensitive information from "information or an opinion about an identified or identifiable individual" to "information or an opinion that relates to an identified or identifiable individual", essentially matching the definition of "personal data" in the EU's General Data Protection Regulation (GDPR). This change is intended to clarify that information such as IP addresses, location data and other online identifiers falls within the description of "personal information". It would also confirm that deepfakes (and even cheapfakes) that clearly "relate to" an identified or identifiable individual located in Australia at the time they are created constitute sensitive information.

If this proposal is implemented (along with many of the other 116 proposals), as we suspect they will be, then individuals' rights with respect to the processing of their personal information (including biometric information or templates, deepfakes and other sensitive information) will be significantly strengthened.

The Online Safety Act 2021 (Cth) regulates "cyber abuse material", "non-consensual intimate images" and "material that depicts abhorrent violent conduct" online, including with respect to children. Crucially, "material" is defined as "material in any form", including text, data, speech or audio-visual images. In particular, material that an ordinary reasonable person would conclude was intended to have the effect of causing serious harm to a particular individual located in Australia, or that is menacing, harassing or offensive, constitutes cyber abuse material and is prohibited under the Online Safety Act.4 While there is no specific reference to material generated by an AI system in the Online Safety Act, the legislation is outcome-focused and worded broadly enough to include AI-generated deepfakes where the material "depicts a person". Deepfakes used to depict persons for the purpose of spreading disinformation, or intending to harass or be offensive (where a reasonable person would consider them to have that effect), therefore fall under the Online Safety Act, whether the act is by an individual or an organisation. In this way, the Online Safety Act goes part of the way in protecting against the creation and proliferation of deepfakes deemed to be cyber abuse material.

In June 2023, the Australian Government released a discussion paper on Safe and responsible AI in Australia5 that builds on the National Science and Technology Council's paper Rapid Response Information Report: Generative AI — Language models and multimodal foundation models6 and seeks industry feedback on how the Australian Government can mitigate the risks posed by all forms and uses of AI and support lawful and ethical AI practices. However, at the time of writing, no specific legislation governing the use of AI, generative AI, deepfakes or other AI technologies has been proposed by the Australian Government.

EU AI Act

On 21 April 2021, the European Commission proposed the Regulation of the European Parliament and of the Council laying down harmonised rules on AI (AI Act) to bolster the privacy protections in the GDPR with respect to AI. Rather than being sector-specific or AI technology-specific, the AI Act takes a holistic risk-based approach to the regulation of AI, characterising AI practices as falling within categories of unacceptable risk (prohibited outright), high risk, limited risk or minimal to no risk.

The AI Act defines an "AI system" to include software that uses machine learning approaches, including supervised, unsupervised and reinforcement learning, using methods such as deep learning, that can, "for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with".7 Crucially, it imposes a prohibition on the use of an AI system that deploys subliminal techniques beyond a person's awareness in order to distort their behaviour in a manner that causes or is likely to cause physical or psychological harm, whether to that person or to another.8

More broadly, the AI Act also imposes minimum transparency obligations on users of AI systems that “generate or manipulate images, audio or video content that appreciably resembles existing persons” and may appear to be authentic or truthful to disclose that the content has been artificially generated or manipulated.9 We note there is a carve-out from this requirement where it is necessary for law enforcement or to exercise the freedom of expression and the right to freedom of the arts and sciences.

Proposed penalties for non-compliance with the AI Act are tiered. For prohibited AI practices, administrative fines of up to €30 million apply to individuals, while organisations face the greater of €30 million or 6% of global income for the preceding financial year. For other breaches of the AI Act, fines of up to €20 million apply to individuals, while organisations face the greater of €20 million or 4% of global income for the preceding financial year.
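
To make the fine-cap arithmetic concrete, the following sketch computes the maximum proposed penalty for an organisation. The figures are as proposed at the time of writing; the final text of the AI Act may differ.

```python
# Maximum proposed AI Act fine for an organisation: the greater of a fixed
# amount and a percentage of prior-year global income, with higher tiers
# for prohibited AI practices.
def max_fine_eur(global_income: float, prohibited_practice: bool) -> float:
    fixed, pct = (30_000_000, 0.06) if prohibited_practice else (20_000_000, 0.04)
    return max(fixed, pct * global_income)

# eg an organisation with EUR 1bn in global income engaging in a prohibited
# AI practice faces a cap of EUR 60m (6% exceeds the EUR 30m floor).
print(max_fine_eur(1_000_000_000, prohibited_practice=True))  # 60000000.0
```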

The AI Act establishes a European AI Board to act in an advisory capacity, issue opinions and guidance and assist with the implementation of the AI Act. There is also a proposal to establish voluntary industry codes of conduct (including a Code of Practice on Disinformation) and AI regulatory sandboxes to reduce the regulatory burden of compliance with the AI Act. The draft regulation was approved by the European Parliament on 14 June 2023 and is currently in its discussion phase, before being passed into legislation. Once passed, it will become one of the world's most comprehensive pieces of legislation regulating AI technology production, distribution and use and set a standard for similar legislation in jurisdictions outside the EU.

Privacy and cybersecurity challenges

Deepfakes pose unique challenges to many sectors of the economy. Most pervasive is the effect on privacy and cybersecurity. At an individual level, however, it may be difficult to quantify the need for redress if there is no "harm" to a person. For example, "harm" may be difficult to quantify where there is no identity theft, cyber abuse material or non-consensual intimate imagery being shared (ie cyber pornography), as the case may be.

Another concern is that individuals may not be aware that their image, voice or other samples of their biometric information located on public websites (for example, on social media) have been scraped from the internet and are being used to "feed" generative AI technology to create synthetic media depicting their likeness (ie deepfakes). Web scraping is generally not permitted under the Privacy Act, as illustrated by the investigation by the Office of the Australian Information Commissioner into Clearview AI (and the subsequent finding that Clearview AI breached the Privacy Act).10 However, the Privacy Act is limited in that it does not currently apply to small businesses (including individuals operating a business) unless the business has an annual turnover of more than AUD $3 million or one of the exceptions to the exemption applies. APP entities that are subject to the Privacy Act must obtain the voluntary and informed consent of all relevant individuals prior to the processing of their biometric information (whether with respect to a deepfake or not).

Joan is Awful: a cautionary tale

An extreme example (and very topical in the current landscape of Screen Actors Guild strikes) is illustrated by the episode "Joan is Awful" in Netflix's series Black Mirror. The main character, Joan, after discovering a show using her likeness and depicting her life (albeit with some artistic licence), is informed by her law firm that by agreeing to her streaming service Streamberry's terms and conditions (a requirement to watch its content), she has given it the right to use her life and her likeness to create content for the Streamberry streaming platform. While this is a fictional and extremely dystopian narrative, it does raise difficult questions for users of social media and other internet services regarding their awareness of what personal information is being collected, for what purpose, and the endless possibilities for generative AI and deepfakes.

Worldcoin

OpenAI Chief Executive Officer Sam Altman's current project "Worldcoin", which claims to create a "privacy preserving global identity network" by using an "orb" (a facial recognition device) to scan individuals' faces, verify that they are a "unique human" and include them in a database, is another concerning example.11 At the time of writing, Worldcoin has a database of more than 2 million people. Considering the current landscape of data breaches and the recent investigation into OpenAI's privacy and data security practices by the US Federal Trade Commission,12 it is not difficult to envisage the devastating cybersecurity implications of a successful hack of the biometric information contained in the Worldcoin database.

Approaches to regulating generative AI

Against the backdrop of convincing deepfakes and an open letter from global AI leaders calling for a pause in AI development that has amassed over 33,000 signatures to date,13 it is increasingly important for governments to ensure that there are legislative frameworks and protections in place (such as industry codes of conduct) for the ethical, lawful and privacy-protective development and use of AI, and in particular to ensure AI practices that embed privacy and cybersecurity best practices.

We also note efforts in the UK to regulate tech companies through the Online Safety Bill (at the third reading stage at the time of writing), which, if passed, will impose duties of care with respect to content on technology platforms such as Facebook and Twitter, including a duty of care to keep children safe online.

Noting some of the current approaches to regulating AI, in practical terms a better approach than merely detecting deepfakes may be the parallel creation of a system of real-time authentication of genuine media through an image's provenance, coupled with raising individuals' awareness of synthetic media used to perpetrate disinformation, such as the work done by MIT in its "Detect Fakes" project. Another example is technology such as "Controlled Capture", software that can establish the provenance of images and verify critical metadata at the point of capture. In this way, the risk posed by the rapid spread of disinformation would, in part, be ameliorated.
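
As a rough illustration of the point-of-capture idea (and not Controlled Capture's actual implementation), the sketch below hashes the image bytes together with critical metadata and signs the digest at capture time, so that any subsequent alteration breaks verification. The device key and HMAC construction are simplifying assumptions standing in for a hardware-backed digital signature.

```python
# Toy point-of-capture media authentication: sign a digest of the image
# plus its metadata when captured, then verify it later against the image
# actually presented.
import hashlib, hmac, json, time

DEVICE_KEY = b"secret-key-held-in-device-hardware"  # hypothetical key

def sign_capture(image_bytes: bytes, metadata: dict) -> dict:
    record = {"metadata": metadata,
              "image_sha256": hashlib.sha256(image_bytes).hexdigest()}
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_capture(image_bytes: bytes, record: dict) -> bool:
    claimed = dict(record)
    signature = claimed.pop("signature")
    # Recompute the digest over the image actually presented, not the claim.
    claimed["image_sha256"] = hashlib.sha256(image_bytes).hexdigest()
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(signature, expected)

photo = b"\x89PNG...raw image bytes..."
rec = sign_capture(photo, {"captured_at": time.time(), "device": "camera-01"})
print(verify_capture(photo, rec))            # True: provenance intact
print(verify_capture(photo + b"edit", rec))  # False: the image was altered
```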

Conclusion

The rise in the technical capability and use of generative AI, and the resultant deepfakes, poses significant challenges to privacy, consumer protection and cybersecurity at an individual, enterprise and state level.

Alongside legislative responses such as the proposed EU AI Act, it is crucial to explore parallel approaches, such as real-time authentication of media and increased awareness among individuals, to counter the spread of synthetic AI media. As we navigate this evolving landscape, it is paramount that we equip individuals with the knowledge to detect and prevent deepfakes, ensuring a future where privacy and security can coexist with technological advancement and free speech. In our view, only through a comprehensive (both legislative and practical) and purpose-driven approach can we effectively address the challenges posed by generative AI and its increasing impact on our society and the digital economy.


Footnotes:

  1. Massachusetts Institute of Technology Media Lab, Detect Fakes, accessed 28 July 2023, https://detectfakes.media.mit.edu/.
  2. J I Grant "Fighting the deepfakes arms race" eSafety Commissioner 10 November 2019 www.esafety.gov.au/newsroom/blogs/fighting-deepfakes-arms-race.
  3. J Wakefield "Deepfake presidents used in Russia-Ukraine war" BBC News 18 March 2022 www.bbc.com/news/technology-60780142.
  4. Online Safety Act 2021 (Cth), s 7.
  5. Department of Industry, Science and Resources Safe and responsible AI in Australia Discussion paper (2023).
  6. G Bell, J Burgess, J Thomas and S Sadiq Rapid Response Information Report: Generative AI — Language models and multimodal foundation models (2023) www.chiefscientist.gov.au/sites/default/files/2023-06/Rapid%20Response%20Information%20Report%20-%20Generative%20AI%20v1_1.pdf.
  7. European Commission Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts (2021) Art 3(1).
  8. Above n 7, Art 5(1)(a).
  9. Above n 7, Art 52(3).
  10. Office of the Australian Information Commissioner Commissioner initiated investigation into Clearview AI, Inc (Privacy) [2021] AICmr 54 (14 October 2021).
  11. Worldcoin, A new Identity and Financial Network, accessed 31 July 2023, https://whitepaper.worldcoin.org/.
  12. D Bartz and others "US FTC opens investigation into OpenAI over misleading statements" Reuters 14 July 2023 www.reuters.com/technology/us-ftc-opens-investigation-into-openai-washington-post-2023-07-13/.
  13. Future of Life Institute, Pause Giant AI Experiments: An Open Letter, accessed 30 July 2023, https://futureoflife.org/open-letter/pause-giant-ai-experiments/.

*Published in LexisNexis Privacy Bulletin 2023 Vol 20 No 6
