Smart AI and smarter compliance: a blueprint for businesses to navigate the legal and privacy challenges of implementing internal AI

  • Insight Article 14 July 2025
  • Asia Pacific

  • Tech & AI evolution

Artificial intelligence (AI) has moved from the margins of innovation to the centre of enterprise operations. No longer a futuristic aspiration or technological experiment, AI now reshapes how internal decisions are made, resources are allocated and services are delivered. The question posed to businesses today is no longer whether they use AI – but rather whether they "own" it.

In an environment where businesses are facing increased pressures surrounding data security, regulatory compliance and reputational risk, some businesses are pivoting away from plug-and-play third-party AI models and instead opting to build and implement internal AI systems. These proprietary solutions promise greater control, customisation and competitive advantage, but in turn require a deeper understanding of the accompanying legal and privacy challenges.

In their eagerness to innovate and automate, businesses risk implementing internal AI without first having a blueprint to help navigate the legal and privacy challenges before them. These challenges include data privacy and security, algorithmic transparency and bias, accountability and liability, and intellectual property (IP). This article provides an overview of why some businesses are implementing internal AI, the AI frameworks in Australia, the legal and privacy challenges of implementing internal AI, and potential solutions to these challenges.

Why businesses are implementing internal AI

The migration away from using readily available third-party AI models to building internal AI infrastructure is driven by a combination of strategic, operational and legal considerations. In the current climate, businesses are motivated by more than just the allure of efficiency – they are responding to a host of demands from regulators, shareholders, customers and suppliers.

Data sovereignty and security

One of these demands is data sovereignty and security, which is a primary driver for businesses developing internal AI systems rather than relying on their open-source counterparts. Data sovereignty refers to the concept that data collected or stored within a country’s jurisdiction is subject to that country’s laws and regulations. This becomes an issue where third-party AI models process data through external servers located in a different jurisdiction from the one in which a business operates.

For example, data input into an open-source AI platform by an employee of an Australian business could be transferred and stored in the US, potentially making it subject to US surveillance laws (e.g. the US CLOUD Act). This would raise significant concerns around data sovereignty and regulatory risk for that business. In contrast, internal AI would allow that business to retain full control over the jurisdiction in which the inputted data is stored and accessed. This would be particularly critical in sectors where sensitive personal information is collected and stored, such as healthcare, financial services and government.

Beyond jurisdictional compliance, data security and confidentiality pose another major concern for businesses. Many organisations manage proprietary datasets such as customer records, financial information and intellectual property that are critical to operations. Exposing such sensitive and confidential data to third-party AI platforms introduces risks related to data breaches, unauthorised access and regulatory non-compliance. On the other hand, internal AI systems allow businesses to have a tighter grip on data flows and to comply more effectively with regulations such as the European Union’s General Data Protection Regulation (GDPR), reducing potential legal and reputational consequences.

Customisation and competitive differentiation

Generic AI models, while powerful, are designed to serve a broad range of use cases. Internal AI, by contrast, can be finely tuned to reflect a business’ specific processes, language, risk appetite and goals. By developing proprietary models, businesses can encode their values, workflows and unique datasets into AI systems, creating a strategic asset that offers a defensible competitive edge.

Regulatory compliance and risk management

By using third-party AI systems, businesses are putting their faith in that third party’s internal teams or external vendors to maintain compliance – a risky proposition in the current and future legal environment. Internal AI allows businesses to build compliance into their system architecture from the ground up, including implementing technical and organisational measures to protect personal information, algorithmic accountability and ethical safeguards. It also simplifies conducting audits, responding to regulatory inquiries and demonstrating due diligence. With growing emphasis on responsible AI, many businesses face pressure to ensure their AI systems are transparent, explainable and free from harmful biases. Developing AI in-house allows for greater scrutiny of training data, model decisions and ethical considerations. In contrast, the opacity of many third-party models, particularly those trained on undisclosed data sources, can hinder regulatory compliance and public trust.

Intellectual property ownership

When AI models generate insights, designs, or products, the question of ownership becomes relevant. Proprietary internal AI (with clear internal IP ownership policies in place) reduces ambiguity over IP rights by ensuring that both the model and the outputs remain under the control of the business. This is particularly important in industries primarily driven by innovation such as pharmaceuticals, media and manufacturing.

Cost efficiency and long-term investment

Finally, while open-source AI tools may offer initial cost savings, the total cost of integration, customisation, monitoring and risk mitigation can be significant over time. Businesses may find that investing in internal AI capabilities (despite higher upfront costs) yields greater returns in the long term. Proprietary systems may avoid costly licensing fees and reduce dependency on external vendors, offering more predictable cost structures and improved budget control.

AI frameworks in Australia

There is currently no specific regulation governing the use of AI by private organisations in Australia. In September 2024, the Digital Transformation Agency released the Policy for the responsible use of AI in government,1 which requires compliance with specific frameworks and processes for AI usage in the public sector. Notwithstanding this, we are yet to see regulations or a mandatory framework governing the use of AI tools by private organisations in Australia. In the absence of such regulation, the Department of Industry, Science and Resources (Department of Industry) has released the Voluntary AI Safety Standard,2 which provides “guardrails” to be followed when implementing AI tools, providing some guidance to organisations developing their own internal AI frameworks.

The 10 voluntary guardrails put forward by the Department of Industry draw on Australia’s AI Ethics Principles,3 which aim to ensure the safety, security and reliability of AI development. One of these key Ethics Principles is the need for a human-centred approach to AI use, with a view to ensuring the protection of individual rights and interests. As such, the Voluntary AI Safety Standard addresses the need to inform end-users regarding AI enabled decisions, to establish a process for individuals to challenge AI system outcomes which may impact them, and to engage with stakeholders with a view to safety, diversity, inclusion and fairness.

Further to this, the guardrails laid out in the Voluntary AI Safety Standard encourage the implementation of accountability, governance and risk management processes, the protection of AI systems and management of data quality, the testing of AI systems to evaluate performance, and record-keeping that allows for compliance assessments.

While these guardrails are voluntary and therefore provide only guidance for organisations on how to responsibly implement AI, they also provide the foundations of a comprehensive and well-rounded AI framework for organisations to use as a starting point for the development of their own internal AI frameworks and governance processes. While AI has not yet been formally regulated in Australia, organisations should be aware that, given the increasing prevalence of AI tools in sectors like healthcare, AI tools may also need to comply with non-AI-related regulations, such as the medical device regulations under the Therapeutic Goods Act 1989 (Cth)4 for software-based medical devices which incorporate AI.

In September 2024, the Department of Industry released a proposal paper for the introduction of mandatory guardrails for AI in high-risk settings.5 Although the outcome of the proposal paper has not yet been published, it suggests that regulations for AI use in the private sector will be introduced in the future. The proposed mandatory guardrails closely reflect those already included in the Voluntary AI Safety Standard, further suggesting that compliance with the existing voluntary guardrails will leave organisations well placed when mandatory guardrails are implemented.

Legal and privacy challenges of implementing internal AI

In addition to staying apprised of the developing regulatory framework and emerging policy discussions around responsible AI, businesses will also need to navigate the raft of legal and privacy challenges that come with implementing internal AI.

Data privacy and security

One of these challenges is ensuring data privacy and security. It is critical for businesses to be cognisant of the ways they handle personal information when using and training internal AI. Guidance issued by the Office of the Australian Information Commissioner (OAIC) has stressed that privacy obligations apply both to personal information input into an AI system and to the output data generated by AI (provided it contains personal information, which includes inferred, incorrect or artificially generated information such as deepfakes where it pertains to an identified or reasonably identifiable individual).6

Against this backdrop, businesses building and implementing their own internal AI systems require large volumes of accurate and comprehensive data to train their AI for quality output. However, this practice is largely incompatible with Australian Privacy Principles 3 and 6, which emphasise data minimisation and purpose limitation.7 Accordingly, where an organisation wishes to train its internal AI using data it already holds, it should carefully examine the resulting privacy obligations.

Where the organisation’s existing dataset was not initially collected for AI training purposes, the organisation must establish that there was consent for a secondary AI-related purpose. Failing this, the organisation must prove that this secondary AI-training use of the existing dataset would have been reasonably expected by the individual at the time of collection and that it is related (or directly related, for sensitive information) to the primary purpose(s). The first limb of the test requires consideration of the circumstances at the time of collection and is often difficult to establish, especially against the rising level of public anxieties about privacy, data security and risks associated with AI use.

Even where organisations train their internal AI system only on publicly available data, the proliferation of data breaches – such as those involving Optus, Medibank and the New South Wales Online Registry in Australia, and globally, Ticketmaster and Oracle Cloud – poses a risk that information readily accessible on the web may not have been shared with proper consent. Accordingly, organisations should revisit their privacy policies and ensure consent to use data for AI-training purposes is sought at the time of data collection and that individuals are provided with an informed opportunity to opt out.

Algorithmic bias and transparency

Australia’s AI ethics framework calls for AI systems to be designed inclusively and to avoid unfair discrimination against individuals, communities or groups.8 However, bias in the training dataset can lead to discriminatory models, as AI systems identify and replicate discriminatory patterns. As organisations increasingly use AI-led automated decision-making (ADM) tools to streamline business operations, the risk of unintentionally breaching both state and federal laws, and consequently facing public and regulatory scrutiny, rises in parallel. One of these laws is the Privacy and Other Legislation Amendment Act 2024 (Cth), which recently amended the Privacy Act 1988 (Cth) to introduce an obligation on organisations using ADM to disclose that use in their privacy policies.9

Accountability and liability

Another legal challenge is determining accountability when AI systems cause harm or make incorrect decisions. In sectors such as healthcare, autonomous vehicles and financial services, AI systems may be responsible for life-altering decisions. For example, a health practitioner’s over-reliance on AI diagnostic outputs as opposed to independent and evidence-based clinical judgment can introduce significant diagnostic risks. Medical practices developing their own internal ADM tools therefore face heightened “hallucination” risks where an algorithm may generate outputs that are entirely inaccurate.

In the context of this example, establishing liability for AI-induced diagnostic error or delay can be complex due to the lack of clear legal authority. It would be unclear whether liability would fall on the medical practitioner who relied on the AI software, the third-party company that developed it, or the engineer tasked with monitoring and maintaining the system.

Intellectual property

The use of AI to create new products or services also raises complex intellectual property (IP) issues, particularly with respect to ownership of AI-generated inventions. In some cases, AI may develop innovations without human intervention, prompting questions about whether the developer, the user or the AI itself owns the intellectual property rights to those inventions. The Copyright Act 1968 (Cth), as it currently stands, only protects works involving independent intellectual effort by a human author.10 This creates significant challenges for companies investing in AI and AI-generated inventions.

Solutions

Adopting a privacy-by-design approach

To address data privacy and security challenges in internal AI implementations, businesses should adopt a privacy-by-design approach. This involves integrating privacy protections from the outset of AI system development, rather than as an afterthought. Businesses should implement data minimisation strategies, ensuring that AI systems only process the minimum amount of personal data required for their functionality.

Where possible, data should be anonymised or pseudonymised to reduce identifiability. Moreover, businesses should establish robust access controls, end-to-end encryption and secure storage protocols to safeguard data at rest and in transit. To best position themselves to comply with regulations such as the GDPR, businesses should conduct Data Protection Impact Assessments and ethical risk assessments, map out AI decision-making matrices and canvass potential harm vectors, in addition to maintaining detailed records of data processing activities. Routine audits and penetration testing are also helpful measures to identify and mitigate vulnerabilities in AI infrastructure.
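By way of illustration, the following is a minimal sketch in Python of how data minimisation and pseudonymisation might be applied before a dataset is used for AI training; the column names, salt handling and helper functions are illustrative assumptions rather than a prescribed approach.

```python
# Illustrative sketch only: minimise and pseudonymise records before AI-training use.
# Column names and the salt-handling approach are assumptions, not a prescribed standard.
import hashlib
import pandas as pd

SALT = "secret-salt-stored-outside-the-training-environment"  # illustrative placeholder

def pseudonymise(value: str) -> str:
    # Salted SHA-256 hash: records remain linkable without exposing the raw identifier.
    return hashlib.sha256((SALT + value).encode("utf-8")).hexdigest()

def prepare_training_data(records: pd.DataFrame) -> pd.DataFrame:
    # Data minimisation: retain only the fields the model actually needs for its purpose.
    required = ["customer_id", "postcode", "product_category", "monthly_spend"]
    minimised = records[required].copy()
    # Pseudonymisation: hash the direct identifier (or drop it entirely if linkage is unnecessary).
    minimised["customer_id"] = minimised["customer_id"].astype(str).map(pseudonymise)
    return minimised

# Example usage with a single illustrative record: the name and email never reach the training set.
raw = pd.DataFrame([{
    "customer_id": "C-1001", "name": "Jane Citizen", "email": "jane@example.com",
    "postcode": "2000", "product_category": "home", "monthly_spend": 120.50,
}])
print(prepare_training_data(raw))
```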

Implementing a robust internal AI governance framework

Ensuring accountability and managing legal liability in internal AI development involves creating clear governance frameworks that delineate roles and responsibilities. Businesses should establish AI accountability protocols, including documenting decision-making processes, design assumptions, data sources and model limitations. One effective approach is to maintain an AI model register, which records the lifecycle history of each deployed model, including updates, retraining events and performance evaluations.
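As a rough illustration only, an AI model register entry might be structured along the following lines in Python; the field names and event types are assumptions rather than a mandated format.

```python
# Illustrative sketch of an AI model register entry recording ownership, purpose,
# data sources, known limitations and dated lifecycle events. All fields are assumptions.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class LifecycleEvent:
    occurred_on: date
    kind: str          # e.g. "deployment", "retraining", "performance_evaluation"
    summary: str

@dataclass
class ModelRegisterEntry:
    model_name: str
    owner: str                      # accountable business owner
    purpose: str                    # documented, approved use case
    data_sources: list[str]         # datasets used for training
    known_limitations: list[str]
    events: list[LifecycleEvent] = field(default_factory=list)

    def log_event(self, occurred_on: date, kind: str, summary: str) -> None:
        # Append a dated lifecycle event so the model's history can be audited later.
        self.events.append(LifecycleEvent(occurred_on, kind, summary))

# Example usage with illustrative values
entry = ModelRegisterEntry(
    model_name="claims-triage-v2",
    owner="Head of Claims Operations",
    purpose="Prioritise incoming claims for manual review",
    data_sources=["claims_2019_2024 (de-identified)"],
    known_limitations=["Not validated for commercial policies"],
)
entry.log_event(date(2025, 3, 1), "retraining", "Retrained on Q4 2024 data; accuracy 0.91")
```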

To further establish liability pathways, businesses should implement internal audit trails and ensure human-in-the-loop (HITL) systems are in place, especially for high-risk decisions. Businesses should also work with in-house or external counsel to draft liability clauses and risk management policies that define how harm or error is assessed and attributed, particularly if internal AI systems were ever to interface with clients or the public. Incorporating algorithmic impact assessments (AIAs) provides a proactive mechanism for identifying and mitigating potential harms before system deployment. In practice, appointing an AI governance lead, forming an AI ethics committee and integrating AI governance into existing risk, legal and compliance functions can be effective ways to oversee the implementation of these steps and to ensure regular audits and training are conducted.
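The Python sketch below illustrates one way a HITL gate and audit trail could operate together; the confidence threshold, file-based log and field names are assumptions chosen for illustration, not a definitive implementation.

```python
# Illustrative sketch of a human-in-the-loop gate with an audit trail: low-risk,
# high-confidence outputs pass through automatically, while high-risk or low-confidence
# outputs are routed to a human reviewer; every decision is logged for later audit.
import json
from datetime import datetime, timezone

AUDIT_LOG = "ai_decision_audit.jsonl"   # illustrative; in practice an append-only store
CONFIDENCE_THRESHOLD = 0.90             # illustrative; set per the business's risk appetite

def record_audit(entry: dict) -> None:
    # Append a timestamped record of the decision to the audit log.
    entry["logged_at"] = datetime.now(timezone.utc).isoformat()
    with open(AUDIT_LOG, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(entry) + "\n")

def decide(case_id: str, model_output: str, confidence: float, high_risk: bool) -> str:
    # Return "auto_approved" or "escalated_to_human" and log the outcome either way.
    if high_risk or confidence < CONFIDENCE_THRESHOLD:
        outcome = "escalated_to_human"
    else:
        outcome = "auto_approved"
    record_audit({"case_id": case_id, "model_output": model_output,
                  "confidence": confidence, "high_risk": high_risk, "outcome": outcome})
    return outcome

# Example usage: a high-risk case is escalated regardless of model confidence.
print(decide("CASE-042", "approve claim", confidence=0.97, high_risk=True))
```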

To mitigate the risks associated with intellectual property ownership in the context of internal AI-generated outputs, businesses should implement clear internal IP ownership policies, which define how rights to AI-assisted or AI-generated outputs are allocated within the business. For example, the policies could specify that all AI-generated content – whether code, designs, models or strategic insights – is the property of the business and produced under the scope of employment or contractual obligations. This helps reduce ambiguity over ownership, particularly in collaborative or cross-functional environments where AI tools are used by multiple stakeholders.

Investing in explainable AI practices and expert advice

Algorithmic transparency and bias mitigation require a multifaceted approach rooted in both technical and organisational practices. A key strategy is the development of explainable AI (known as XAI) models that allow stakeholders to interpret how outputs are generated. Techniques such as SHAP (Shapley Additive Explanations) or LIME (Local Interpretable Model-Agnostic Explanations) can be used to make complex models more transparent to both technical and non-technical users. To address bias, businesses should implement bias audits during both the training and deployment phases. This includes assessing input datasets for historical biases, under-representation or skewed distributions, and correcting these through techniques like re-sampling, re-weighting or fairness-aware learning algorithms. In addition, interdisciplinary oversight committees comprising internal and external legal and technical experts should be tasked with the ongoing evaluation of AI fairness and transparency, ensuring the system aligns with the business’ risk posture, ethical standards and social expectations.
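To make these techniques concrete, the Python sketch below uses synthetic, illustrative data to train a simple classifier with group re-weighting to counter under-representation, then applies the open-source SHAP library to surface per-feature contributions to individual predictions; the dataset, feature names and weighting scheme are assumptions for illustration only, and any real deployment would need validation against the business's own data and risk posture.

```python
# Illustrative sketch: re-weighting to counter group under-representation, plus SHAP
# explanations. The dataset is synthetic and the feature names are assumptions.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic dataset: "group" marks a protected attribute used only for the bias audit.
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "tenure_years": rng.integers(0, 20, 500),
    "credit_score": rng.normal(650, 60, 500),
    "group": rng.choice([0, 1], 500, p=[0.85, 0.15]),  # group 1 is under-represented
})
y = (X["credit_score"] + rng.normal(0, 30, 500) > 650).astype(int)

# Re-weighting: weight each record inversely to its group's frequency so the minority
# group contributes equally to the training loss.
group_freq = X["group"].value_counts(normalize=True)
sample_weight = X["group"].map(lambda g: 1.0 / group_freq[g])

# Exclude the protected attribute from the model's features.
features = X.drop(columns=["group"])
model = GradientBoostingClassifier().fit(features, y, sample_weight=sample_weight)

# Explainability: SHAP values show each feature's contribution to a given prediction.
explainer = shap.Explainer(model, features)
shap_values = explainer(features.iloc[:5])
print(shap_values.values)  # per-feature contributions for the first five records
```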

Concluding remarks

As AI becomes increasingly embedded in core business functions, some organisations are re-evaluating their reliance on external AI providers and moving toward the development and deployment of their own internal AI models. This pivot is driven by a combination of factors, including greater control, customisation and competitive advantage. In the Australian context, this transition is unfolding within a developing regulatory framework and emerging policy discussions around responsible AI use. While internal AI offers significant benefits, it also introduces complex legal and privacy challenges – particularly in relation to data privacy and security, algorithmic transparency, accountability and intellectual property ownership. These challenges are not insurmountable, but they do require deliberate and proactive solutions such as adopting a privacy-by-design approach, implementing a robust internal AI governance framework and investing in explainable AI practices and expert advice. Ultimately, businesses that invest in these measures will be better positioned to harness the full potential of AI while navigating its risks responsibly.

This article was first published in the LexisNexis Privacy Law Bulletin, Issue 22.3, 2025.


1 Australian Government Digital Transformation Agency, Policy for the responsible use of AI in government, September 2024.

2 Australian Government Department of Industry, Science and Resources (Department of Industry), Voluntary AI Safety Standard.

3 Department of Industry, Australia's AI Ethics Principles, accessed 22 May 2025.

4 Therapeutic Goods Act 1989 (Cth).

5 Department of Industry, proposals paper for introducing mandatory guardrails for AI in high-risk settings, September 2024.

6 Australian Government Office of the Australian Information Commissioner, Guidance on privacy and developing and training generative AI tools, 23 October 2024, accessed 22 May 2025.

7 Privacy Act 1988 (Cth), Sch 1.

8 Department of Industry, Australia's AI Ethics Principles, accessed 22 May 2025.

9 Privacy and Other Legislation Amendment Act 2024 (Cth), Sch 1 Pt 15.

10 Copyright Act 1968 (Cth).

