Unravelling the AI privacy puzzle in Aotearoa
Market Insight | 15 June 2023
Data Protection & Privacy
Artificial intelligence (AI) is already playing a significant role in Kiwis’ lives, and that impact is set to grow. From customising online shopping to identifying fraud and diagnosing health issues, AI offers considerable benefits for productivity and efficiency. Alongside those benefits, however, come several privacy risks. This article briefly touches on these risks and the New Zealand regulatory landscape for organisations seeking to leverage AI in their business.
How does AI affect privacy?
The effects of AI on our privacy are not always apparent, as AI often uses aggregated data to perform its functions. However, AI uses personal information at new levels of power and speed, which comes with unique privacy issues. These include:
- Consent – AI tools collect and process vast amounts of data, including personal information, often without individuals’ consent or awareness. This is particularly concerning where AI analyses data to create detailed profiles of individuals or tracks individuals’ movements to predict behaviour.
- Lawful purpose – personal information must be collected and used for a lawful purpose under privacy laws. AI is able to improve its performance by learning from data to recognise patterns and make predictions. This means it is difficult to know how AI will use data in the future – it may be for new or unexpected purposes.
- Data security – AI systems require vast amounts of data. It is not always clear who has access to personal information collected by AI or how securely it is stored due to its dependency on a broad range of networks and databases. This increases the surface area for potential cyber attacks.
- Transparency – AI systems can be difficult to understand or interpret, making it challenging to determine how data is being used or how decisions are being made. This lack of transparency can present privacy risks, particularly if sensitive personal information is being used or the systems are biased by the data on which they are trained.
- Cyber attacks – threat actors exploit AI tools to generate sophisticated phishing emails and malicious code. Further, AI systems can be tricked or deceived by being fed malicious or misleading data, undermining their accuracy and reliability. This could have significant consequences in areas such as medical diagnosis, financial trading or autonomous vehicles.
How does New Zealand’s privacy regime regulate AI?
AI using personal information is primarily regulated through the Privacy Act 2020 (Privacy Act) in New Zealand. The Privacy Act sets out how personal information can be collected and used, including information processed by AI.
In addition, the Māori Data Sovereignty Principles are values-based directives that also apply to AI to the extent that it processes information relating to Māori people, language, culture, resources or environments.
AI’s advancements challenge the existing rights and protections provided by the Privacy Act and Māori Data Sovereignty Principles. New Zealand’s Privacy Commissioner recently acknowledged that privacy regulators urgently need to consider how best to regulate AI in a way that protects privacy rights without stifling innovation.
New Zealand Developments
The New Zealand Government is yet to develop a formal national strategy for regulating AI. We expect to see one soon, though, based on the Government’s recent interest in AI regulation. For example, the New Zealand Government has:
- adopted the OECD Principles on Artificial Intelligence, which include public policy and strategy recommendations for governments to ensure ethical and responsible AI;
- established the ‘Algorithm Charter for Aotearoa New Zealand’ – a tool that government agencies can use to assess the ethical and legal implications of using algorithms in their decision-making processes;
- partnered with the World Economic Forum’s Centre for the Fourth Industrial Revolution to develop a roadmap to guide policymakers in regulating AI; and
- entered into the Digital Economy Partnership Agreement with Singapore and Chile – this establishes new rules and guidance on digital trade and emerging issues such as AI.
Without a formal national strategy, industry organisations have developed their own guidelines, reports and frameworks to guide the responsible use of AI. While many of these remain non-binding ‘best practice’ rather than law, they are positive steps towards ensuring ethical and responsible AI. Industry initiatives include:
- ‘Trustworthy AI in Aotearoa: AI Principles’ – these voluntary principles provide ethical and legal guidance for developers and users of AI in New Zealand. The principles were developed by the AI Forum of New Zealand, a not-for-profit organisation that engages in discussions around the future of AI in New Zealand;
- the ‘Artificial Intelligence and Law in New Zealand’ Project – a three-year project evaluating legal and policy implications of AI for New Zealand, led by the New Zealand Law Foundation and University of Otago’s Centre for Artificial Intelligence and Public Policy (CAIPP); and
- industry working groups and advisory bodies – for example, the AI Forum’s ‘Te Kāhui Māori Atamai Iahiko’ (a Māori Advisory Panel) and the CAIPP’s advisory work to New Zealand government ministries.
Around the world, countries are taking varying approaches to AI regulation, from voluntary guidelines and industry self-regulation to more stringent laws and regulations.
The European Union (EU) is leading the way: on 14 June 2023, the European Parliament voted to adopt its negotiating position on the draft Artificial Intelligence Act (Act). A final version of the law is expected to be agreed later this year.
Systems deemed to pose an ‘unacceptable risk’ (e.g. government social scoring and real-time biometric identification systems in public spaces) are prohibited with little exception. ‘High risk’ AI systems (e.g. autonomous vehicles, medical devices and critical infrastructure machinery) will be required to comply with rigorous testing, data quality and accountability rules. AI systems posing ‘limited or minimal risk’ (e.g. spam filters or video games) may be used with few requirements beyond transparency obligations.
In early June 2023, the Australian Federal Government announced its intention to introduce legislation regulating AI, potentially including a risk-based classification system similar to the EU’s. At present, the Federal Government has developed only a voluntary AI Ethics Framework for businesses and governments using AI.
Canada has proposed an Artificial Intelligence and Data Act, which would require entities and individuals ‘responsible for’ AI systems to assess and mitigate systems’ risks, such as causing harm or producing biased outputs.
China released draft AI regulations in April 2023. If adopted, the regulations will clarify how certain privacy protections apply to AI, mandate steps to prevent algorithmic bias and prohibit discriminatory content generation, and require AI products to undergo security assessments before being publicly offered.
In April 2023, the US Government announced that it is formally seeking public comment on potential accountability measures for AI.
What should you consider when using AI?
While law makers formulate their approach to regulating AI, we recommend considering the following when using AI technologies:
- Confidentiality – take precautions when handing over business, employee, client or other stakeholders’ personal information to AI tools. Information provided to AI tools may be retained by the provider, used to train models or otherwise disclosed, which risks breaching your professional or client confidentiality obligations.
- Purpose and consent – consider whether collecting customer, employee or other stakeholder data is necessary to achieve your business purpose, whether that collection complies with privacy laws, and whether individuals are aware of your use of AI systems.
- Accuracy – confirm the accuracy of AI-generated information. Recently, librarians at the New Zealand Law Society were given citations by lawyers who had used ChatGPT for their legal research; the citations were entirely fabricated and the cases did not exist.
- Third-party security risk – review the privacy policies of the AI software and tools you use to understand what security safeguards and practices they have in place.
- Ethical and cultural considerations – implement a framework for assessing the ethical and cultural implications of your use of AI systems. For example, consider whether the information generated by AI respects or stigmatises Māori communities, groups and individuals in line with the Māori Data Sovereignty Principles.
How can we help?
Clyde & Co’s Technology & Media Team has unparalleled and specialised expertise across the privacy, cyber and broader technology and media practice areas. It also houses the largest dedicated and market leading privacy and cyber incident response practice across Australia and New Zealand.
The firm’s tech, cyber, privacy and media practice provides an end-to-end risk solution for clients. The team advises on strategy, transactions, innovation, and cyber and privacy pre-incident readiness, and acts on incident response, post-incident remediation, regulatory investigations, dispute resolution, recoveries and third-party claims. It assists corporate clients, insurers, insureds and brokers across the full spectrum of legal services within this core practice area.
Reference: Māori Data Sovereignty Principles 1.2, 5.1 and 5.2.