NEWS & INSIGHTS

AI: Opportunities, risks, and responsible adoption in recruitment

Marsh Commercial

AI is increasingly used throughout recruitment processes to improve efficiency, scalability, and consistency. Key applications include:

  • Sourcing: AI tools, such as LinkedIn’s talent search features, help identify candidates with specific skills and experience.
  • Screening: Algorithms sift through CVs and applications, filtering candidates based on keywords and criteria.
  • Psychometric testing: AI-driven gamification software assesses personality traits and generates feedback.
  • Interviewing: Some tools analyse facial expressions and voice patterns to evaluate candidate responses.

Real-world examples of AI bias

Despite these benefits, AI recruitment tools have demonstrated significant risks of bias and discrimination:

  • A candidate was initially rejected but passed screening after altering their birthdate to appear younger, highlighting age bias³.
  • In the US, an AI screener awarded higher scores to candidates listing typically male hobbies (baseball, basketball) and lower scores for female sports (softball)⁴.
  • AI tools analysing voice and body language have been shown to score candidates differently based on gender, race, religious dress (e.g., headscarves), and even camera brightness, raising concerns about discrimination against minority groups⁵.

These examples underscore the risk that AI may perpetuate existing inequalities or exclude qualified candidates due to minor discrepancies or biased data.

Legal context and compliance

The Equality Act 2010 protects individuals from discrimination based on protected characteristics such as age, sex, disability, race, religion, and more. Employers must avoid both direct and indirect discrimination throughout recruitment, including advertising, screening, interviewing, and hiring decisions. Additionally, the UK General Data Protection Regulation (GDPR) mandates that personal data processing be lawful, fair, and transparent⁶.

Recommendations for responsible AI use in recruitment

  • Human oversight: AI should support, not replace, human decision-making. Final hiring decisions must involve human judgement to ensure fairness.
  • Bias testing: Conduct regular testing of AI tools for fairness and accuracy, using standards such as the ‘four-fifths rule’ (the selection rate for any group should be at least 80% of the rate for the highest-selected group).
  • Data Protection Impact Assessments (DPIA): Complete DPIAs to identify and mitigate data protection risks.
  • Transparency: Inform candidates when AI is used in decision-making and provide mechanisms to challenge automated decisions.
  • Reasonable adjustments: Provide adjustments for candidates with disabilities, such as text-to-speech software, or remove AI tools if adjustments are not feasible.
  • Documentation: Maintain detailed records of recruitment decisions to defend against potential discrimination claims.
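The four-fifths rule mentioned above is a simple calculation. The sketch below, written in Python with illustrative group names and counts (not real data), compares each group’s selection rate against 80% of the highest group’s rate and flags any group that falls short:

```python
# Sketch of a four-fifths (80%) rule check for a recruitment funnel.
# Group names and numbers are illustrative assumptions, not real data.

def selection_rates(applicants, hires):
    """Selection rate per group: hires divided by applicants."""
    return {g: hires[g] / applicants[g] for g in applicants}

def four_fifths_check(applicants, hires, threshold=0.8):
    """Return True per group if its selection rate is at least
    `threshold` (80%) of the highest group's rate; False means the
    group is flagged for potential adverse impact."""
    rates = selection_rates(applicants, hires)
    top = max(rates.values())
    return {g: (rate / top >= threshold) for g, rate in rates.items()}

# Example: group_a rate = 40/200 = 0.20; group_b rate = 18/150 = 0.12.
# 0.12 / 0.20 = 0.60, below the 0.80 threshold, so group_b is flagged.
applicants = {"group_a": 200, "group_b": 150}
hires = {"group_a": 40, "group_b": 18}
print(four_fifths_check(applicants, hires))
# → {'group_a': True, 'group_b': False}
```

A failed check is a prompt for investigation, not proof of discrimination; it should trigger the human review and documentation steps described above.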


Cybersecurity risks in an AI-driven world

While AI offers enhanced capabilities, it also empowers cybercriminals to launch more sophisticated attacks:

  • Unauthorised AI use: Employees may use AI tools without IT oversight, risking data leaks and bypassing security controls.
  • Vulnerabilities in AI development: Rapid experimentation with AI services can introduce malicious code or expand attack surfaces.
  • AI-powered attacks: Cybercriminals use AI for advanced social engineering, deepfake impersonations, and rapid malware evolution.

Mitigation strategies

  • Establish clear policies defining legitimate AI use within your business.
  • Restrict local admin rights and harden security settings to prevent unauthorised AI tool downloads.
  • Create isolated environments for AI development and conduct thorough supplier due diligence.
  • Update security frameworks to monitor AI-related processes.
  • Increase frequency and realism of cyberattack simulations.
  • Strengthen authentication and implement layered defence mechanisms.
  • Conduct regular expert security assessments to stay ahead of evolving threats.

Risks and challenges

  • Job displacement: AI could replace millions of roles globally, raising workforce concerns.
  • Cybersecurity: AI enhances cybercriminal capabilities but can also bolster defences.
  • Algorithmic bias: AI-driven decisions risk perpetuating discrimination.
  • Reputational risk: AI-generated information should always be carefully reviewed by a human to catch errors before they can harm your business.

Risk mitigation and insurance solutions

  • Governments are developing legislation like the EU AI Act and UK-US bilateral agreements to regulate AI safety.
  • Organisations must implement robust cybersecurity measures and update policies to address AI risks.
  • Insurance products such as cybersecurity, data liability, and professional indemnity insurance provide financial protection against AI-related losses.
  • Companies should seek comprehensive risk assessments and maintain human oversight in AI decision-making.


Balancing innovation with responsibility

AI presents unprecedented opportunities to enhance productivity, efficiency, and customer experience across recruitment and insurance. However, it also introduces significant risks—bias, discrimination, cybersecurity threats, and accountability challenges—that organisations must address proactively. Responsible AI adoption requires a balanced approach: leveraging AI’s strengths while ensuring human judgement, legal compliance, transparency, and robust risk management remain central. By doing so, organisations can harness AI’s potential to drive innovation and growth while safeguarding fairness, security, and trust.

If your business is using or considering AI, you should:

  • Review and update policies to govern AI use.
  • Conduct bias and cybersecurity risk assessments.
  • Ensure human oversight in AI-driven decisions.
  • Consult with legal and insurance experts to manage emerging risks.

For tailored advice on AI risk management and insurance solutions, please contact: Matthew Tattler, Marsh Commercial via Matthew.Tattler@marshcommercial.co.uk or 07392123289


References

¹ gov.uk/a-pro-innovation-approach-to-ai-regulation-government-response
² theguardian.com/a-day-in-the-life-of-ai
³ bbc.com/news/technology-64576225 (Age bias example)
⁴ bbc.com/news/technology-64576225 (Gender bias in hobbies)
⁵ bbc.com/news/technology-64576225 (Voice and body language bias)
⁶ gov.uk/data-protection

Marsh Commercial is a trading name of Marsh Ltd. Marsh Ltd is authorised and regulated by the Financial Conduct Authority for General Insurance Distribution and Credit Broking (Firm Reference No. 307511). Copyright © 2026 Marsh Ltd. Registered in England and Wales Number: 1507274, Registered office: 1 Tower Place West, Tower Place, London EC3R 5BU. All rights reserved.

About the author