Recruitment and the deceptive appeal of Large Language Models


For hiring managers, using Large Language Models (LLMs) to rank CVs seems to offer the perfect, speedy solution to a traditionally laborious stage of the recruitment process. The appeal of LLMs lies in their apparent ability to deliver quick, intuitive results, narrowing the candidate pool for each role quickly and accurately. But, as this Bloomberg article explores, behind the huge appeal lie significant hidden risks. 

These invisible risks include:

  • biases that are difficult to identify and could expose you to legal complications 
  • an inability to defend elimination decisions 
  • an increased probability of missing the best candidates.

What’s driving these risks?

The Bloomberg report exposes the high risk of bias when LLMs are involved, due to the potential for underlying bias in the data used to train them. This can result in candidates being excluded from particular jobs, or ranked more highly, because of irrelevant criteria such as their names, type of education, or even sports interests. What makes it more complex is that the bias is invisible when looking at a single example; it can only be identified after running hundreds or thousands of tests, and most organisations lack the combination of data science talent, time and funds to invest in this kind of testing. 


Is there a way to harness the advantages of AI without risking the bias?

The answer is to use a transparent and interrogable platform like Kalido, which removes the mystery from candidate ranking. Kalido shows you which skills, roles, locations, and other criteria are a match. The system evaluates each dimension separately, so how a specific score was calculated is visible and can be interrogated. This makes it simple to explain why a candidate was chosen or rejected, or why one candidate ranks higher than another for a role. 

With this skill-matching approach, the AI is constrained to a narrow, clear decision such as, "Is this skill closely related to this other skill?" It therefore cannot introduce common hiring biases based on unevaluated criteria such as candidate names, photos, places of study, etc.
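The idea of constrained, per-dimension scoring can be illustrated with a minimal sketch. Everything here is a hypothetical example, not Kalido's actual implementation: the relation table, function names, and scoring rule are assumptions made purely to show why a per-dimension breakdown makes a ranking explainable.

```python
# Illustrative sketch: the matcher may only answer the narrow question
# "is skill A closely related to skill B?", never judge the whole CV.
# The relation table and scoring rule are invented for this example.

SKILL_RELATIONS = {
    ("python", "data analysis"): True,
    ("python", "machine learning"): True,
    ("graphic design", "machine learning"): False,
}

def skills_related(a: str, b: str) -> bool:
    """Answer only: is skill a closely related to skill b?"""
    if a == b:
        return True
    return SKILL_RELATIONS.get((a, b), SKILL_RELATIONS.get((b, a), False))

def score_candidate(required_skills, candidate_skills):
    """Score one dimension at a time and keep the per-skill breakdown,
    so every ranking decision can be traced and defended."""
    breakdown = {
        req: any(skills_related(req, have) for have in candidate_skills)
        for req in required_skills
    }
    score = sum(breakdown.values()) / len(required_skills)
    return score, breakdown

score, why = score_candidate(
    ["python", "machine learning"],
    ["python", "data analysis"],
)
# 'why' records exactly which requirements matched; unevaluated criteria
# such as names or photos never enter the calculation.
```

Because the score is just an aggregate of these individually visible yes/no decisions, explaining why one candidate ranks above another reduces to comparing their breakdowns.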

To learn more about how Kalido can bring you fast, genuinely intuitive results without the risk of bias, get in touch for a demo at sales@kalido.me.
