AI Recruitment Tools Are “Pseudoscience”, Researchers Say


Diversity and inclusion have been hot topics in recruitment and HR for some time. To bypass human bias, companies have turned to AI recruitment tools, hoping these platforms will help them find the best talent available. However, a new paper by the University of Cambridge’s Centre for Gender Studies, published in the journal Philosophy and Technology, has warned against them.

More and more businesses are turning to AI-powered software to sift through applications and analyse candidate interviews, with the aim of finding better culture fits and boosting diversity. Advocates of algorithms trained to predict emotional intelligence and analyse applicants’ body language believe that, because the software does not consider race or gender, it offers a fairer way to assess talent. These tools are marketed as removing human bias and helping companies build a more diverse and inclusive workforce by hiring people from underrepresented groups.

However, the Cambridge study warns against these systems because they can be easily affected by details such as the lighting in the room a candidate interviews from, the clothes they wear and even what is visible in the background. Moreover, the tools may fail to improve diversity at all: because they are trained on previous company data, they may end up promoting candidates who most resemble current employees.

Dr Eleanor Drage, one of the study’s co-authors and a researcher at the University of Cambridge’s Centre for Gender Studies, said that there is “little accountability for how these products are built or tested”, adding that the technology could be a “dangerous” source of “misinformation about how recruitment can be ‘de-biased’ and made fairer”.

She continued: “We are concerned that some vendors are wrapping ‘snake oil’ products in a shiny package and selling them to unsuspecting customers.”

“By claiming that racism, sexism and other forms of discrimination can be stripped away from the hiring process using artificial intelligence, these companies reduce race and gender to insignificant data points, rather than systems of power that shape how we move through the world. While companies may not be acting in bad faith, there is little accountability for how these products are built or tested.”

Kerry Mackereth, a postdoctoral research associate at the University of Cambridge’s Centre for Gender Studies, said that even though the European Union’s AI Act classifies these recruitment tools as “high risk”, it is still unclear what regulations will be enforced to reduce those risks. “We think that there needs to be much more serious scrutiny of these tools and the marketing claims which are made about these products, and that the regulation of AI-powered HR tools should play a much more prominent role in the AI policy agenda.”

“While the harms of AI-powered hiring tools appear to be far more latent and insidious than more high-profile instances of algorithmic discrimination, they possess the potential to have long-lasting effects on employment and socioeconomic mobility,” she concluded.
