DIFC enacts landmark regulation for autonomous and semi-autonomous systems

  • Legal Development | 28 September 2023
  • Middle East
  • Data Protection & Privacy

On 7 September 2023, the Dubai International Financial Centre (DIFC) announced that it had enacted amendments to its Data Protection Regulations, introducing new requirements for the processing of personal data via autonomous and semi-autonomous systems, such as artificial intelligence (AI). The amended Regulations have been in force since 1 September 2023 and are among the first legislative provisions to reference AI in the UAE.

The DIFC, a global financial centre in the Emirate of Dubai, United Arab Emirates, enacted amendments to the DIFC Data Protection Regulations, which supplement the DIFC Data Protection Law No. 5 of 2020 (the DIFC DPL). The amendments introduce new requirements for deployers and operators of autonomous and semi-autonomous systems, such as AI and generative machine-learning technology. The amended Regulations are among the first pieces of legislation regulating AI in the Middle East, as the DIFC seeks to further establish its credentials as a hub for fintech and innovation.

What are autonomous and semi-autonomous systems? 

Article 10 of the DIFC Data Protection Regulations (Regulation 10) regulates the processing of personal data conducted through “autonomous and semi-autonomous systems” (Systems). Systems are defined as “any machine-based system operating in an autonomous or semi-autonomous manner” that can process personal data for purposes that are human-defined (i.e. determined by human involvement) or defined by the System itself, and that “generate output as a result of or on the basis of such” processing. The definition is derived from OECD guidelines and international legislation such as the EU AI Act, and covers a broad range of systems capable of operating with minimal human input (such as predictive and prescriptive AI systems, as well as generative machine-learning technology).

Unlike other AI laws around the world, the DIFC is not seeking to establish principles that regulate the content of algorithms directly; instead, it establishes boundaries for organisations that deploy, operate or provide AI systems to the extent that those systems process personal data. The guidance to Regulation 10 further clarifies that the definition of “Personal Data” in the DIFC DPL can be interpreted to include the identification of virtual personas or similar virtual criteria that identify an individual, e.g. personas created for the metaverse.

Who is affected by Regulation 10?

The DIFC DPL regulates “controllers” and “processors” of personal data. Since “control” over personal data may not be established when using a System, Regulation 10 instead assigns responsibilities to the persons or entities that authorise or benefit from the operation of the System and any output that the System produces – i.e. the “deployers” and “operators” of the Systems. The “deployer” is generally accountable for compliance with Regulation 10, similar to a “controller” under the DIFC DPL. “Operators” are technical service providers that develop the Systems on the instructions, and for the benefit, of a “deployer”, similar to a “processor” under the DIFC DPL.

What are the key requirements of Regulation 10?

AI and machine-learning systems often rely on the analysis and processing of vast quantities of personal data. Regulation 10 introduces actions and concepts that deployers and operators must apply when processing personal data via the Systems.

Notice

A clear and explicit notice must be provided upon initial use of, or access to, the System. The notice must alert users to any underlying technology and processes comprising the System that may process personal data in a manner that is not “human-initiated, controlled or directed”, as well as to the impact of the use of the System on the exercise of individual rights under the DIFC DPL.

The notice should also contain a comprehensive, true and plain description of the purposes, principles and outputs of the System, including:

  • Purposes: The human-defined purposes for which personal data is processed by the System, as well as all human-defined principles (and limits) that the System uses as a base to define further purposes for processing personal data. 
  • Principles: The principles on which the System has been developed and designed to operate, as well as any safeguards built into the System by design to ensure compliance of personal data processing by the System with the DIFC DPL and Regulation 10.  
  • Codes and certifications: Any codes, certifications and principles upon which the System is designed or developed, which may be issued by other bodies such as the Dubai Digital Authority, the Organisation for Economic Co-operation and Development (OECD) and the National Institute of Standards and Technology (NIST) (e.g. the NIST AI Framework), or set out in sector-specific guidelines such as those issued by the UAE Central Bank for financial institutions adopting “Enabling Technologies”. 
  • Evidence: When requested by an affected party, evidence of the System’s compliance with any applicable audits or certifications, and of any algorithms that cause the System to seek human intervention when personal data processing could result in an unfair or discriminatory impact on data subjects or if any personal data must be accessed by, or on behalf of, competent government authorities (e.g. law enforcement).

In addition to the notice, deployers and operators must, on request, provide a register listing specific information about the System, including: the necessity and proportionality of processing activities; how information in the System can be accessed by data subjects; whether the System is used solely to make automated decisions; where third parties or regulatory authorities that process personal data used in the Systems are located; and the safeguards applied to data exports.

Concepts

Systems must be designed in accordance with specific concepts set out in Regulation 10, which are based on fundamental principles that reflect best practices in AI design. These are:

  1. Ethical: Algorithmic decisions and the associated flow of data through a System should be unbiased. This is in line with the principles of fairness and transparency in the DIFC DPL and helps to reduce the risk of discrimination and prejudice. 
  2. Fair: Systems must be designed to treat all individuals equally and fairly, regardless of race, gender or other such factors. Systems should be designed to avoid potential biases and to mitigate any bias that could lead to unfair outcomes. This goes hand in hand with the ethical principle. 
  3. Transparent: The processing of personal data through a System must be explained to data subjects and other stakeholders in non-technical terms and with appropriate supporting evidence. 
  4. Secure: A System should keep personal data protected, confidential and secure. It should be designed to prevent data breaches that could cause reputational, psychological, financial, professional or other types of harm. 
  5. Accountable: A System must have mechanisms in place to ensure responsibility and accountability for the System and the outcomes it produces, for example internal governance and control frameworks for monitoring the System, such as auditing processes that enable the regular assessment of algorithms, data and design processes.

High Risk Processing Activities

The DIFC DPL makes specific reference to “High Risk Processing Activities”, which include any processing involving the adoption of new technology that creates a materially increased risk to data subject rights. Regulation 10 prohibits the commercial use of Systems to engage in such high risk processing unless: the DIFC Commissioner of Data Protection has established corresponding audit and certification requirements; the System is compliant with all such requirements; it processes personal data solely for human-defined or human-approved purposes; and the deployer or operator has appointed an autonomous systems officer (ASO). The ASO is a new function that would perform a role substantially similar to that of a Data Protection Officer (DPO) and would be tasked with monitoring compliance and cooperating with the Commissioner.

What does this mean for DIFC companies?

Organisations operating in the DIFC that are considering the use of AI systems or machine-learning models should ensure that they can comply with the requirements set out in Regulation 10. Technology providers – including many of the fintech companies operating in DIFC – will need to ensure that the concepts and requirements set out in Regulation 10 are incorporated into the systems that they develop. Compliance may require the appointment of an autonomous systems officer by either deployers or operators.

Breaches of Regulation 10 could lead to the administrative fines set out in Schedule 2 of the DIFC DPL. Additionally, data subjects whose personal data are processed by way of the Systems may submit complaints challenging the outcome of the processing in accordance with the DIFC DPL.

If you would like further information, please contact our team of data privacy, cybersecurity and AI specialists. Clyde & Co has a dedicated cross-practice international AI working group that is focused on supporting clients with the management of risks and compliance relating to their adoption and deployment of AI. Our multi-disciplinary global team can:

  • advise on AI regulation around the world
  • conduct an impact assessment or audit to help clients understand the risks of particular AI solutions or use cases
  • advise on liability and insurance positions in the context of AI projects
  • support on AI-related disputes
  • prepare internal frameworks and guidelines for the use of AI
