22/02/2024


As a company, you would like to hire the very best people. How do you actually discover whether someone is good? Looking through motivation letters and CVs is a difficult and time-consuming activity. Amazon came up with a solution for this: it developed a machine learning algorithm that scores people based on the text in their CVs. Such an algorithm should of course be fair, with equal opportunities for men and women. However, this is where Amazon went wrong. In this blog we show what caused this problem and how we can fix it.


(This blog is also available in Dutch on this page.)

Amazon developed an algorithm that does not use a person's gender. That sounds good, but the problem was already present in the data they used: hiring decisions made in previous Amazon job applications. Those decisions were not fair; men were hired more often than women with the same capabilities. The resulting algorithm adopted this trend via typically feminine and masculine traits. For example, it rejected candidates from all-women's universities, simply because that pattern scored well on the biased historical data!

Frankly, from a moral and ethical perspective, we think using an algorithm for hiring decisions is a bad idea. But suppose we wanted to solve this specific problem: is that possible? The answer is yes. With Judea Pearl's theory of causality, we can adjust the algorithm so that it cannot be influenced by a person's gender.


Everything starts with understanding your data

The first step is to create a cause-and-effect diagram that corresponds to your data generation process. The diagram below captures the essence of the problem. A candidate's gender influenced the hiring decision directly, but it also influenced school, study choice and hobbies, which in turn influenced the decision. Amazon's approach of simply ignoring gender therefore does not match the data: the influence of gender can no longer flow along the red line, but it still flows via the purple line to the other candidate properties and thus still influences the decision. Statisticians call this phenomenon a confounder; it occurs often when working with data.

[Diagram: cause-and-effect diagram with arrows from gender to the hiring decision (red), from gender to school, study choice and hobbies (purple), and from those properties to the hiring decision (blue).]
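
To make this concrete, here is a minimal sketch of how such a cause-and-effect diagram could be written down in code, using Python and networkx. The variable names are illustrative assumptions taken from the description above, not Amazon's actual data.

```python
import networkx as nx

# Build the cause-and-effect diagram as a directed graph.
causal_graph = nx.DiGraph()

# Direct influence of gender on the hiring decision (the red line).
causal_graph.add_edge("gender", "hiring_decision")

# Indirect influence: gender -> candidate properties (the purple line),
# and candidate properties -> hiring decision (the blue lines).
for trait in ("school", "study_choice", "hobbies"):
    causal_graph.add_edge("gender", trait)
    causal_graph.add_edge(trait, "hiring_decision")

# A valid causal diagram contains no cycles.
assert nx.is_directed_acyclic_graph(causal_graph)
```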

The next step is to develop an algorithm that uses all characteristics, including gender. This algorithm is therefore not fair: gender influences the choices of the algorithm. However, this allows us to determine the real influence of candidate characteristics such as school, study choice and hobbies.
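
As a rough illustration of this step, the sketch below fits a simple model on all characteristics, gender included. The column names, the hypothetical "hired" label and the choice of logistic regression are assumptions for the sake of the example; any model that uses every characteristic would do.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline, make_pipeline
from sklearn.preprocessing import OneHotEncoder

# Illustrative feature set; "gender" is deliberately included.
FEATURES = ["gender", "school", "study_choice", "hobbies"]


def fit_hiring_model(candidates: pd.DataFrame) -> Pipeline:
    """Fit a model on all characteristics, including gender.

    `candidates` is assumed to contain the FEATURES columns plus a
    historical 0/1 label column "hired".
    """
    model = make_pipeline(
        OneHotEncoder(handle_unknown="ignore"),  # categorical values -> indicator columns
        LogisticRegression(max_iter=1000),
    )
    model.fit(candidates[FEATURES], candidates["hired"])
    return model
```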


From historical data to a desired future

Now we want to consider the situation where we do not judge prospective candidates on their gender. In the diagram, the red arrows are undesirable; only the blue arrows are relevant. Judea Pearl's theory of causality tells us how to accomplish this, and here the recipe is simple: we lie to the algorithm and always tell it that a candidate is a man (or always a woman, or we average the scores over both options). It may seem like a trick, but it guarantees that a person's gender has no influence on the outcome of the algorithm, not even indirectly, as happened at Amazon. Is that all? Yes, but remember that the machine learning experts at Amazon had not thought of this.
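
Below is a minimal sketch of that trick, continuing the hypothetical column names and model from the sketches above: at prediction time we overwrite the gender column with a fixed value and average the scores over both values, so the model never sees a candidate's real gender.

```python
import pandas as pd

# Same illustrative feature set as in the previous sketch.
FEATURES = ["gender", "school", "study_choice", "hobbies"]


def fair_hiring_score(model, candidates: pd.DataFrame) -> pd.Series:
    """Score candidates without letting their real gender reach the model.

    `model` is assumed to be a fitted classifier with predict_proba,
    such as the pipeline from the previous sketch.
    """
    scores = []
    for gender in ("man", "woman"):
        intervened = candidates.copy()
        intervened["gender"] = gender  # tell the model the same gender for every candidate
        scores.append(model.predict_proba(intervened[FEATURES])[:, 1])
    # Average the two fixed-gender scores; always using one fixed value works too.
    return pd.Series(sum(scores) / len(scores), index=candidates.index)
```

Whether you fix a single value or average over both, the key point is the same: the gender fed to the model is no longer the candidate's own, so it cannot steer the score.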

Still, there are many more problems with using an algorithm for your hiring choices, so we would strongly advise against its use. Nevertheless, this example does show how understanding and using causality helps us rule out unwanted influence when analyzing data and making predictions. That is why causality is always in the back of our minds when we are working at CQM!


Symposium

CQM is organizing a symposium on causality, "Beyond coincidence: understanding causality", on Thursday afternoon, March 21, 2024, for everyone who collects, analyzes, and/or visualizes data. If you are interested in attending the symposium (note: we are approaching maximum capacity), please contact Martijn Gijsbers.



Matthijs Tijink