AccelPro | Employment & Labor Law
On Racial Bias in Algorithmic Decision Making
With Michael Selmi, Professor at Sandra Day O'Connor College of Law, Arizona State University | Interviewed by Matt Crossman

Welcome to AccelPro Employment Law, where we provide expert interviews and coaching to accelerate your professional development. Today, we are featuring a discussion with Mike Selmi, an employment law professor and expert in algorithmic decision making as it applies to hiring practices.

An algorithm is only as good as the people who program it and the data it analyzes. With the rise of algorithmic decision making in the hiring process has come concern in the legal community that those algorithms may be biased.

In reviewing what has been written on this topic, Selmi found plenty of concern to that effect. But then he analyzed what was actually happening, and that’s not what he found. “I haven’t seen anything that would suggest that algorithms have been more biased than humans. I suppose it could happen,” he says.

In fact, he says, “Between a computer and humans, trying to figure out which is going to be less biased, I would go with the computer.”

In this interview, Selmi walks us through the challenges of deciding which data to use, how something as simple as a name can throw off equity, and how he didn’t choose employment law but is grateful it was chosen for him. The supplemental materials and episode transcript are available below.

Listen on Apple Podcasts and Spotify.


Interview References:

  • Mike Selmi’s faculty page

  • 2:23 | Selmi, Michael. (2021). Algorithms, Discrimination and the Law. Ohio State Law Journal, 82(4), 611-651.

  • 5:07 | Title VII of the Civil Rights Act of 1964, § 7, 42 U.S.C. § 2000e et seq. (1964). US Equal Employment Opportunity Commission.

  • 12:27 | Skrentny, John. (1996). The Ironies of Affirmative Action: Politics, Culture, and Justice in America. University of Chicago Press. 

Supplemental Materials:


TRANSCRIPT

I. ALGORITHMS RELY ON DATA

Matt Crossman, Host: Let’s start with defining our terms: algorithmic decision making. That’s a mouthful. In the context of employment law, what does that mean?

Mike Selmi: It’s a question that’s harder to answer than it might appear. It should be something simple. But people define algorithmic decision making in lots of different ways, and often it’s treated interchangeably with artificial intelligence, although they’re not the same.

Artificial intelligence, I think, has a different connotation for a lot of people. The area that is probably most associated with algorithmic decision making is resume screening. Most applications these days are submitted electronically, and they have been for a long time.

A human doesn’t review those resumes. An algorithm does, searching for certain keywords, minimum qualifications and the like, and narrowing down the pool. That’s a form of algorithmic decision making that has attracted a lot of attention even though it’s been around for a long time.

MC: You wrote that many critics say algorithms perhaps cause more problems because they are biased. I want to read a quote from you, from your article. “The real question of interest is not whether algorithms can produce discriminatory results - they can - but whether those results are likely to be more discriminatory than our existing systems.”

This is a question every HR exec and employment lawyer listening to this wants to know the answer to. Are algorithmic decision-making tools, in fact, more likely to produce discriminatory results than our existing systems? 

MS: This is the reason I kind of jumped into this space over the last five or six years. There’s been a lot of criticism of the move toward algorithmic decision making in hiring in particular, although it’s used in other areas of employment, with this notion that algorithms can be biased, which is true.

I also said in that piece, I was sort of surprised that people were surprised by that, that they thought that this was a novel discovery. Any algorithm’s only going to be as good as the data that it relies on and the way it’s programmed and the like. It seems highly unlikely, although it’s certainly possible, that somebody programming would do it intentionally to discriminate.

Most employers are seeking to use algorithms for two reasons. One is for efficiency. The volume of applications makes human review hard. But the other reason that a lot of employers are doing this is to try and achieve greater diversity, less discriminatory processes in hiring. So it was surprising to me that folks were so focused on the potential for discrimination in algorithms.

From my research, it’s clear that algorithms can be biased, and there’s lots of different examples of that. I haven’t seen anything that would suggest that algorithms have been more biased than humans. I suppose it could happen. As biased? That’s certainly possible. More biased? It’s highly unlikely, and I haven’t seen any evidence to suggest that.

There are lots of different things programmers can do to reduce bias, and there has really been much more development toward equitable algorithms. Programmers are much more aware of the issue. Reducing bias is much harder to do with humans.

We’ve been trying to reduce human bias for generations, and we actually have made surprisingly little progress on that. As I say in the article, between a computer and humans, trying to figure out which is going to be less biased, I would go with the computer. 

MC: That leads right into one of my questions. You used the terms “equitable” and “only as good as the data it relies on,” and that’s one of the key issues: what exactly is the data being used.

MS: There’s no specific statutory restriction on what an algorithm can include. Algorithms used in the employment process will be subject to Title VII or state laws.

There’s been more activity on the state level than on the federal level along these lines, but there’s been lots of concern. There are a number of things that traditionally are not included in algorithms.

One concern for critics about discrimination with algorithms is what they call proxy variables. I believe it’s the case that in general, employers are trying not to include race or gender, disability, national origin, the traditional factors that Title VII prohibits, although there’s also some good reasons to include those.

And that’s actually true in general with a lot of employers. When they’re reviewing resumes, they strip that kind of defining information from the resumes typically. 

Another area where there’s been concern is things like zip code. We have so much residential segregation in the United States that zip codes could be revealing. There was also an interesting study a number of years ago about names. Sometimes names can reveal race or ethnicity, and an algorithm can take out the names as well.

If you’re worried about family discrimination - women who might have children - you could strip that information from resumes and exclude it, too. But I think the area that’s achieved the most attention is zip codes. And again, I found it surprising that people thought of this as a new issue.

With residential segregation in hiring, there are cases going back to the 1970s. Focusing on residency for jobs, or even looking at it on resumes, has a long history of discrimination litigation behind it, so it’s actually not a new issue.
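
To make the data question concrete, here is a minimal Python sketch of the resume stripping Selmi describes: removing fields, such as names and zip codes, that can act as proxies for race or gender before an algorithm scores the applications. The field names and applicant records are hypothetical, for illustration only.

```python
# Minimal sketch (illustrative only): drop fields that can act as proxies
# for race or gender before a screening algorithm ever sees the record.
# The field names and applicant records below are hypothetical.

POTENTIAL_PROXY_FIELDS = {"name", "zip_code", "gender", "race"}

def strip_proxy_fields(record: dict) -> dict:
    """Return a copy of an applicant record without likely proxy fields."""
    return {k: v for k, v in record.items() if k not in POTENTIAL_PROXY_FIELDS}

applicants = [
    {"name": "A. Smith", "zip_code": "63105", "years_experience": 7, "degree": "JD"},
    {"name": "B. Jones", "zip_code": "63113", "years_experience": 9, "degree": "BA"},
]

screened = [strip_proxy_fields(a) for a in applicants]
print(screened)  # only job-related fields remain for the algorithm to score
```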

II. DE-BIASING ALGORITHMS

MC: You mentioned some cases where we might include some information like race or gender. What would be an example of why we might include that in the decision making?

MS: This is a controversial area that the law would likely struggle with, too. But it’s the same reason a lot of employers currently want to know about the race or ethnicity or gender of applicants: in order to diversify. Not necessarily in order to engage in affirmative action, although lots of employers do engage in affirmative action in hiring and promotion decisions.

But more because diversity is an important value, which it is for a lot of employers, though by no means all. They feel like they need to take that into account. And I think there is some space in the law to allow employers to do that.

And part of this whole area is whether there’s a trust that employers are going to use the information in a way that will increase equity as opposed to decrease equity in the workplace. And that’s a difficult issue. It’s often difficult for plaintiffs, those of us concerned with equity issues, to trust employers in that area.

But this is one area where it may work for that. There’s other reasons, too. In some workplaces, there won’t be very many African Americans or women in the workplace, and often for them to succeed, they have to be better than everyone else. They have to perform at a higher level just because of all the factors that play a role in the workplace.

And it is possible for an employer to focus on that person, say an African American, who is really doing well and say, get me more people like that. This is a case where we’re saying, hey, I want more good employees. And if that person turns out to be African American, that information would probably need to be included in the algorithm in order to increase hiring of people who perform like that person.

MC: That sounds like a great challenge. If I’m a human resources executive, what are some guardrails I can put up to make sure that I’m doing this the right way? 

MS: I think it’s evolving, and to me it’s not evolving quickly. This is a slow march. Even though studies say that 25 to 40 percent of employers are using artificial intelligence in their hiring process, that seems really high unless you just focus on resume analysis, which is pretty common. But more sophisticated use is less common.

So employers would almost certainly at this point not be able to do this in-house. They’re far more likely to use a company. Some companies even guarantee that their process will not be discriminatory. I think some might guarantee that they’ll increase diversity.

They’re not particularly good at revealing how they’re doing that, which I can understand. Sometimes they’ll claim trade secrets and the like. This is a tricky issue. If there was a problem, you might be able to shift liability to that company. 

But I think there’s a growing awareness of both the potential for algorithms to be discriminatory and ways in which you can address or reduce that bias. It is called de-biasing the algorithms typically. And I think that area is really growing, and I think all the companies will fold those concerns into their products. 

MC: Would de-biasing algorithms be a way to minimize disparate impact? And then can you also define “disparate impact” for me?

MS: My expertise for the most part is on disparate impact analysis, which is a form of liability that arises from Title VII and other statutes.

A disparate impact is a neutral process that has an adverse effect on a protected group. So the most common over time, going back to the 1970s, is an employer uses a written examination. When they get the results, African Americans perform significantly less well than whites on that test for reasons that aren’t always obvious or even relevant in terms of the analysis. 

That starts the case, it doesn’t finish the case. That’s actually often a misconception about disparate impact. So if you can establish that the neutral process—in this instance an algorithm—would have a disparate effect upon a protected group, then the employer has an opportunity to justify that practice. And this is where the controversy comes in.

The employer, in those written examination cases, typically would say, “well, here’s why we’re using the written examination. It actually does give us information about who’s going to be a good employee and who’s not. These are really vital skills, and so we think we have a business need or business justification for this practice.”

There’s a whole body of law that explains whether that justification is going to be successful or not. And then there’s another step that if the employer is able to justify the practice, the plaintiff has an opportunity to show that there might be an alternative that would serve the employer’s needs just as well but with less disparate impact. 

So de-biasing the algorithm can certainly reduce the adverse impact and then there would be no claim. This is where the affirmative action component comes in, and it’s not clear exactly how the law would work on that, whether the law would permit somebody to take race or gender into account as a way of reducing bias.

There’s a really interesting book that came out a few years ago by John Skrentny at the University of California at San Diego, demonstrating that lots of employers engage in affirmative action that would not pass scrutiny under the law, but there’s almost never any lawsuits.
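
As a rough illustration of the adverse-effect showing that starts a disparate impact case, the sketch below compares the rates at which a neutral screen selects two applicant groups. The counts are invented, and the four-fifths threshold comes from the EEOC’s Uniform Guidelines rather than from this interview; it appears here only as a commonly cited benchmark.

```python
# Illustrative arithmetic for the "adverse effect" showing in a disparate
# impact case: compare the rates at which a neutral screen selects two
# groups. The counts are invented; the 0.8 threshold is the EEOC's
# four-fifths rule of thumb, cited here only as a common benchmark.

def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants

rate_group_a = selection_rate(selected=48, applicants=100)  # 0.48
rate_group_b = selection_rate(selected=30, applicants=100)  # 0.30

impact_ratio = rate_group_b / rate_group_a                   # 0.625

if impact_ratio < 0.8:
    print(f"Possible adverse impact: selection-rate ratio = {impact_ratio:.2f}")
```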

III. NOVEL APPROACHES BY AMAZON AND ORCHESTRAS

MC: A valuable lesson from the Amazon example is to test run your algorithm first, right? Don’t just start using it. You have to see if it’s going to give you the results that you expect. So tell me what happened with Amazon and what HR execs could learn from it.

MS: The Amazon example’s a great story in part because it’s Amazon, right? You would think they could get something like this right. And it’s attracted a lot of attention.

They were getting so many applications. Hiring in general is really inefficient. There’s so much churn. They thought, we should be able to create an algorithm that tells us what kind of employees we want, and then they were going to actually go out and find them, rather than relying on applications.

They thought, we will survey all our employees and collect all the data we have on them, we will identify what factors are relevant, and we will then go out and find people, whether it’s on LinkedIn or wherever, rather than relying on applications, which they found really inefficient.

So not surprisingly, when the results came out, almost all the people the program identified for hire were men. According to the stories, there was some penalty for having female-identified markers on a resume, like a women’s lacrosse club or women’s chess club. That was a penalty, and it would push you back in the line.

So it was very gendered and not serving what they wanted. And this gets a lot of attention, which is why I mentioned it. One of the key factors was that if your name was Jared and you played lacrosse in college, you were at the top of the heap. And they should have been able to predict this, because the workforce they analyzed was overwhelmingly male.

All that program did was reproduce their existing workforce. Again, they should not have been surprised at that, and that’s why it’s interesting that Amazon hasn’t really explained why it went wrong. I think it would be very helpful for them to explain what went wrong.

But the point that you mentioned is really important. Amazon never used it for hiring. And this is a really important factor for algorithms that is a huge advantage over human decision making. You can run an algorithm many times. You can try different things that you really can’t do as a human. You can’t bring in a bunch of people, see how it goes, and then let them go.

This is an opportunity to sort of see what’s produced. 
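
A minimal sketch of that kind of dry run, under invented assumptions: score a batch of historical applicants with a hypothetical screening rule and tally who would have been selected, by group, before the tool ever touches a real hiring decision.

```python
# Minimal sketch of a pre-deployment "dry run": score historical applicants
# with a candidate screening rule and tally who would have been selected,
# by group, before anyone is actually hired. The scoring rule, threshold,
# and applicants are all invented for illustration.
from collections import Counter

def score(applicant: dict) -> float:
    # Hypothetical stand-in for whatever model is being audited.
    return applicant["years_experience"] + 0.5 * applicant["certifications"]

applicants = [
    {"group": "men",   "years_experience": 8, "certifications": 2},
    {"group": "women", "years_experience": 8, "certifications": 2},
    {"group": "women", "years_experience": 5, "certifications": 1},
    {"group": "men",   "years_experience": 3, "certifications": 0},
]

pool = Counter(a["group"] for a in applicants)
selected = Counter(a["group"] for a in applicants if score(a) >= 6.0)

for group in pool:
    print(group, f"selection rate = {selected[group] / pool[group]:.2f}")
```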

MC: My final question on this hiring process comes from one of the footnotes in your article, which I found fascinating. It references orchestras holding auditions behind partitions. That seems like the ultimate way to hire somebody. You can play or you can’t.

Assuming nobody looks behind the curtain, right? Obviously no algorithm can do exactly that, but is that in a sense what the goal is - just to remove any factor other than, can this individual do the task that we want them to do?

MS: That was a study that got a lot of attention and seemed to be successful. Orchestras remain overwhelmingly male for reasons that aren’t always obvious, but it was exactly the kind of thing that most of those of us who work in employment discrimination think employers should move toward. Now there might be, although not necessarily in an orchestra, issues of collegiality and the like, although that’s an area where discrimination factors in all the time.

And I can see in an orchestra you might have different sounds from different people that you need to match up. So it’s not just purely blind, but you would listen to them play. You wouldn’t need to see them anyway. 

And it’s like The Voice TV show, where you just hear, right? Same idea. One thing that’s interesting I would mention, because it’s related to the human decision-making component: one thing they found early on is that the conductor and whoever else was doing the choosing would hear the candidates’ footsteps, and they might hear heels that sounded like women’s shoes. So they put a carpet down to make sure they couldn’t hear it, because they were factoring even that in, or it seemed like they were. That’s one of the reasons it is so hard to move away from discretion and discrimination in human decision making.

The blind audition is something that an algorithm can move toward, similar to the orchestra and something that I think most employers would agree is a good way of trying to move forward.

IV. DISCRETION BEGETS THE STUDY OF DISCRETION

MC: Now I want to pivot and ask you some questions about how you got to this point in your career. Since the earliest days of your career, when you litigated cases at the Lawyers’ Committee for Civil Rights in DC and in the Civil Rights Division of the United States Department of Justice, you have focused on employment discrimination.

How did you get so interested in that topic in the first place?

MS: This is actually something that I share with my students regularly because it wasn’t even my choice. In law school, I took a class that touched on employment, but I was actually in law school at the time when employment law was just developing.

I was very interested in civil rights issues. I had a clerkship, and then I was hired in the Justice Department through their honors program. At the time, and it may still be true, there were I think five or six divisions in the Justice Department that you could put a bid in for, but they would ultimately assign you to a group.

I remember specifically I put down voting rights, education and criminal. I did not put down employment. I was actually a little concerned about going there because this was in the Bush administration. There were some affirmative action concerns. 

They assigned me to employment discrimination, and I have mostly done employment discrimination the rest of my career. It was a perfect fit for me as it turns out.

They were doing human decision making. They weren’t using algorithms, but maybe they saw that I would fit better. I enjoyed it from the day I got there. I’m so glad I got assigned to it because it’s a very different area from the ones I was thinking I wanted to go into. 

MC: Is it ironic that somebody used discretion to put you on a career path in which you have since studied discretion?

MS: Sure. That’s fair. The other thing I’ve always told my students, discretion is always good when it’s in the hands of your friends. Or people who are trying to do good. Discretion then is fine. It goes back to that trust.

The thing I liked about it so much was that at the Justice Department we got our own cases from Day One. So I would have 20 individual cases of my own. There’s no other place you could do that. I argued a summary judgment motion in a class-action case on remand from the Supreme Court three weeks into my time at the Justice Department. And we won.

And the thing now that I really like is, it’s always changing. As I tell my students, I’ve been teaching employment law for more than 20 years, and I have never taught the same course twice. And I think all the people I know who do employment law, both on the defense and plaintiff side, would also say this is one of the key things that makes it so interesting.

Like algorithms - the algorithm stuff is all new and it’s something everybody’s got to focus on at some point.

MC: Another thing that I hear a lot is that because we all have jobs, in basically every single solitary case, you are helping somebody. Whether you ultimately successfully help them is perhaps a different question, but you are attempting to help somebody resolve a challenging situation in their life, and I imagine that attracted you as well.

MS: Yes, and that was also true from the beginning, especially with those individual cases. Virtually every case involves people on both sides. Even when you’re defending a company, you’re working with people directly. Emotions run high. What I found early on, as a baby attorney, was how much power you have to make a difference in someone’s life. And that’s on both sides, too.

One of my early cases, I got a settlement for somebody. Just that thank you from her, that connection - she made it clear that she wouldn’t have gotten anything without my assistance. That always stays with you. And you are influencing people. And that’s true when you do things with employers, too.

When you’re working with HR or somebody who’s been accused of discrimination and you might be able to defeat that claim, that makes a huge difference in someone’s life. So it really is on both sides. 

MC: Why did you make the jump to academia? 

MS: It fits my personality best. I will say the thing I didn’t like about litigation was the discovery disputes. At the Lawyers’ Committee we did exclusively class-action litigation. We would be arguing endlessly about nothing, it seemed.

I loved law school. I loved school. It was sort of my sanctuary growing up. And when I graduated from law school, I planned to become a professor. That was my goal. And in law school I worked for three different professors. But I also thought I should practice and I’m really glad I did.

And at the time when I decided to transition into academia, I thought hard about it. I think I would’ve been perfectly happy with a career as an employment discrimination civil rights attorney, but academia really did fit. 

I like to be able to spend more time on problems. I really like writing and publishing, and the best part of academia is working with young folks. It’s always new every six months. You get new classes, new students. I really like to see the changes with young folks and what they’re moving on to do. So it was a combination of things, and it’s worked out great.

This AccelPro audio transcript has been edited and organized for clarity. This interview was recorded on April 24, 2023.

AccelPro’s interviews and products accelerate your professional development. Our mission is to improve your day-to-day job performance and make your career goals achievable.


Send your comments and career questions to questions@joinaccelpro.com. You can also call us at 614-642-2235.

If your colleagues in any sector of the employment law field might be interested, please let them know about AccelPro. As our community grows, it grows more useful for its members.
