Welcome to AccelPro Employment Law, where we provide expert interviews and coaching to accelerate your professional development. Today, we are featuring a discussion with Mike Selmi, an employment law professor and expert in algorithmic decision making as it applies to hiring practices.
An algorithm is only as good as the people who program it and the data it analyzes. With the rise of algorithmic decision making in the hiring process has come concern in the legal community that those algorithms may be biased.
In reviewing what has been written on this topic, Selmi found plenty of concern to that effect. But then he analyzed what was actually happening, and that’s not what he found. “I haven’t seen anything that would suggest that algorithms have been more biased than humans. I suppose it could happen,” he says.
In fact, he says, “Between a computer and humans, trying to figure out which is going to be less biased, I would go with the computer.”
In this interview, Selmi walks us through the challenges of deciding which data to use, how something as simple as a name can throw off equity, and how he didn’t choose employment law but is grateful it was chosen for him. The supplemental materials and episode transcript are available below.
AccelPro’s expert interviews and coaching accelerate your professional development. Join AccelPro Employment Law now for a free trial of everything we offer to members.
Interview References:
Mike Selmi’s faculty page
2:23 | Selmi, Michael. (2021). Algorithms, Discrimination and the Law. Ohio State Law Journal, 82(4), 611-651.
5:07 | Title VII of the Civil Rights Act of 1964, 42 U.S.C. § 2000e et seq. (1964). US Equal Employment Opportunity Commission.
12:27 | Skrentny, John. (1996). The Ironies of Affirmative Action: Politics, Culture, and Justice in America. University of Chicago Press.
Supplemental Materials:
Selmi, Michael. (2021). The Paradox of Implicit Bias and a Plea for a New Narrative. George Washington University Law School.
Selmi, Michael. (2020). Bending Towards Justice: An Essay in Honor of Charles Sullivan. Seton Hall Law Review, 50(1), 1465-1494.
TRANSCRIPT
I. ALGORITHMS RELY ON DATA
Matt Crossman, Host: Let’s start with defining our terms: algorithmic decision making. That’s a mouthful. In the context of employment law, what does that mean?
Mike Selmi: It’s a question that’s harder to answer than it might appear. It should be something simple. But people define algorithmic decision making in lots of different ways, and oftentimes it is treated interchangeably with artificial intelligence, although they’re not the same.
Artificial intelligence, I think, has a different connotation for a lot of people. The area that is probably most often considered algorithmic decision making is resume screening. Most applications these days are done electronically. They have been for a long time.
A human doesn’t review those resumes. An algorithm does, searching for certain keywords and minimum qualifications and the like, and it narrows down the pool. It’s a form of algorithmic decision making that has attracted a lot of attention even though it’s been around for a long time.
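To make that concrete, here is a minimal sketch of the kind of keyword filter Selmi is describing. The keywords, the minimum-experience cutoff, and the sample applicants are illustrative assumptions, not drawn from any real screening product:

```python
import re

# Minimal sketch of keyword-based resume screening, as described above.
# The required/preferred keywords and the experience cutoff are invented
# for illustration, not taken from any real hiring system.
REQUIRED_KEYWORDS = {"python", "sql"}           # hypothetical must-have skills
PREFERRED_KEYWORDS = {"etl", "airflow", "aws"}  # hypothetical nice-to-haves
MIN_YEARS_EXPERIENCE = 3                        # hypothetical minimum qualification

def screen_resume(text: str, years_experience: int) -> bool:
    """Return True if the resume passes the automated screen."""
    words = set(re.findall(r"[a-z0-9]+", text.lower()))
    # Hard filters: reject anyone missing the minimum qualification or a
    # required keyword. Whatever bias is baked into these criteria gets
    # applied uniformly to every applicant.
    if years_experience < MIN_YEARS_EXPERIENCE:
        return False
    if not REQUIRED_KEYWORDS <= words:
        return False
    # Soft score: count preferred keywords to thin the remaining pool.
    return len(PREFERRED_KEYWORDS & words) >= 1

applicants = [
    ("Built ETL pipelines in Python and SQL", 5),
    ("Knows Python and SQL", 4),
    ("Ten years of Java experience", 10),
]
pool = [a for a in applicants if screen_resume(*a)]
print(pool)  # only the first applicant survives the screen
```

Even this toy version shows where the debate lives: every choice of keyword and cutoff encodes a judgment about who counts as qualified, and the algorithm applies that judgment at scale.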
MC: You wrote that many critics argue algorithms may cause more problems because they are biased. I want to read a quote from your article. “The real question of interest is not whether algorithms can produce discriminatory results - they can - but whether those results are likely to be more discriminatory than our existing systems.”
This is a question every HR exec and employment lawyer listening to this wants to know the answer to. Are algorithmic decision-making tools, in fact, more likely to produce discriminatory results than our existing systems?
MS: This is the reason I kind of jumped into this space over the last five or six years. There’s been a lot of criticism of the move toward algorithmic decision making in hiring in particular, although it’s used in other areas of employment, with this notion that algorithms can be biased, which is true.
I also said in that piece that I was sort of surprised people were surprised by that, that they treated it as a novel discovery. Any algorithm is only going to be as good as the data it relies on and the way it’s programmed. It seems highly unlikely, although it’s certainly possible, that somebody would program an algorithm to discriminate intentionally.
Most employers are seeking to use algorithms for two reasons. One is for efficiency. The volume of applications makes human review hard. But the other reason that a lot of employers are doing this is to try and achieve greater diversity, less discriminatory processes in hiring. So it was surprising to me that folks were so focused on the potential for discrimination in algorithms.
From my research, it’s clear that algorithms can be biased, and there’s lots of different examples of that. I haven’t seen anything that would suggest that algorithms have been more biased than humans. I suppose it could happen. As biased? That’s certainly possible. More biased? It’s highly unlikely, and I haven’t seen any evidence to suggest that.
There are lots of different things programmers can do to reduce bias once they are aware of it, and there has really been much more development toward equitable algorithms. Programmers are much more aware, and they have tools for reducing bias that are much harder to apply to humans.
We’ve been trying to reduce human bias for generations, and we actually have made surprisingly little progress on that. As I say in the article, between a computer and humans, trying to figure out which is going to be less biased, I would go with the computer.
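One widely used audit of that kind, though Selmi doesn’t name it here, is the EEOC’s four-fifths rule for adverse impact under Title VII: if any group’s selection rate falls below 80 percent of the highest group’s rate, the screen gets flagged for review. A minimal sketch, with invented applicant counts:

```python
# Minimal sketch of a disparate-impact audit using the EEOC's
# "four-fifths rule": a group's selection rate below 80% of the
# highest group's rate is flagged as possible adverse impact.
# The applicant counts below are invented for illustration.
outcomes = {
    # group: (applicants screened, applicants passed)
    "group_a": (200, 90),
    "group_b": (180, 54),
}

rates = {g: passed / screened for g, (screened, passed) in outcomes.items()}
top = max(rates.values())

for group, rate in rates.items():
    ratio = rate / top
    flag = "ADVERSE IMPACT" if ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.2f}, ratio={ratio:.2f} -> {flag}")
# group_a: rate=0.45, ratio=1.00 -> ok
# group_b: rate=0.30, ratio=0.67 -> ADVERSE IMPACT
```

In practice, a flag like this is a prompt to re-examine the screening criteria, not a legal conclusion, but it is the kind of systematic check that is easy to run on an algorithm and effectively impossible to run on a human interviewer’s gut instincts.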
MC: That leads right into one of my questions. You used the phrases “equitable” and “only as good as the data it relies on,” and that’s one of the key issues: what exactly is the data being used?