How AI is Reshaping Maritime HR: Legal Risks, Real Impacts and How Employers Should Respond

  • Insight Article 12 May 2026
  • People dynamics

Artificial intelligence is increasingly being used by maritime employers to support recruitment, workforce management and people decision making. While AI can bring efficiency and consistency, it also presents significant legal and employee relations risks. These risks are already being tested in courts and tribunals globally.

1. Bias and discrimination: still the biggest risk

AI can bring speed and efficiency to HR decision making, but it also carries a heightened risk of discrimination. It is now well established that AI is susceptible to bias: systems are built by humans and trained on historic data, which risks perpetuating existing societal disadvantage.

Examples cited in recent cases include:

  • CV screening tools penalising references to women’s activities or women-only educational institutions
  • Facial recognition systems performing significantly better on white male faces than on non-white or female faces
  • Algorithm-driven job advertising disproportionately targeting candidates based on identity rather than qualifications

For global maritime employers operating diverse and multinational workforces, these risks are magnified. Recruitment, scheduling, promotion and access to work systems that appear neutral on their face may, in practice, place certain groups at a disadvantage, exposing employers to discrimination claims across multiple jurisdictions.

2. The “black box” problem and tribunal risk

A recurring challenge for employers is the difficulty in explaining how AI systems reach particular outcomes. Algorithms can be complex and opaque, making it hard to evidence decision making processes. This has become a key issue in employment litigation. Where an employer cannot explain how an AI supported decision was reached, tribunals may infer discrimination. This risk exists even where the AI tool itself is not inherently discriminatory. Recent claims involving automated facial recognition and algorithmic workforce management demonstrate the difficulty employers face when defending decisions they cannot fully explain.

Practical point

Employers must be able to give clear, credible evidence explaining how AI tools operate and how decisions are reached. Human oversight remains critical.

3. A fragmented global regulatory landscape

AI regulation is developing at pace, but approaches differ significantly across jurisdictions.

  • European Union: The EU AI Act introduces a prescriptive regime focused on pre-emptive risk mitigation. CV sifting and recruitment tools are classified as “high risk” AI systems. The Act has extraterritorial reach and, from August 2026, will apply to non-EU employers where AI affects EU-based workers or candidates.
  • United Kingdom: There is currently no AI-specific employment legislation, but existing laws such as the Equality Act 2010 and UK GDPR apply. Regulators, including the Equality and Human Rights Commission and the ICO, have identified AI as a strategic priority.
  • United States: Regulation is developing at state and local level, with some jurisdictions imposing obligations around bias audits, transparency and candidate notification.

Practical point

Maritime employers with cross-border workforces should treat AI compliance as a global issue and avoid relying on a single-jurisdiction approach.

4. AI-driven change, restructuring and employee relations

AI is also reshaping roles, skills and performance expectations. As tasks become automated, employers are increasingly reviewing job content, which can trigger restructures or redundancies. In Europe and the UK, this raises issues around information and consultation obligations and fair selection processes, particularly where AI informs workforce planning.

Separately, employees are increasingly using AI chatbots to draft grievances, understand their rights and even guide their responses in meetings. Risks also arise where managers use AI tools to assist decision making, including the creation of disclosable evidence and the inadvertent disclosure of confidential information.

Practical point

Employers should clearly define acceptable use of AI at work, update policies and ensure managers and HR teams are trained on the risks.

How Clyde & Co can help maritime employers

Clyde & Co works with maritime employers globally to help them harness the benefits of AI while managing risk. Drawing on our employment, regulatory and data protection expertise, we support clients to:

  • Assess whether and how AI is being used across HR functions
  • Conduct legally privileged bias and discrimination risk assessments
  • Implement appropriate human oversight and decision making safeguards
  • Navigate international AI regulation
  • Update AI, grievance and data retention policies
  • Manage restructures and workforce change driven by automation
  • Train HR teams and managers on the use of AI

As AI becomes embedded in maritime operations, proactive governance and informed HR strategy will be critical. We look forward to exploring these issues further during the panel discussion.
