AccelPro is built for motivated professionals and innovative companies. Sign up now, and your annual membership will include a free trial and a discount of more than 40%.
Welcome to AccelPro Employment Law, where we provide expert interviews and coaching to accelerate your professional development. Today, we are featuring a discussion about artificial intelligence and bias with Rachel See, Senior Counsel and Vice Chair, Emerging Technology and People Analytics at Seyfarth.
With AI and analytics, you can either get lost in the details of the technology, or you can get concerned that poor data or tech will reproduce the very biases you are trying to avoid.
For human resource executives, hiring managers and employment attorneys, See — who formerly served as the Equal Employment Opportunity Commission's Senior Counsel for AI and Algorithmic Bias — offers practical, actionable advice on navigating these confusing issues.
She proposes three questions to ask before using AI: What's the business purpose? How much risk are you comfortable with? And who is responsible for that risk? Once you have answers to those questions, the path forward becomes clearer, if not necessarily easier.
Today’s sample is below. To listen to or read the full AccelPro Employment Law interview, join our paid membership program.
Most AccelPro members expense their memberships or take a tax deduction. Please let us know if your employer has any questions about our community.
Basic Membership: One full interview each month, along with samples of the weekly podcast and occasional events.
Monthly or Annual Membership:
The weekly podcast, available in the player of your choice
Transcripts and supplemental materials for each interview
Private group events inspired by your requests
Curated collections from the archive, including Worker Classification and Discrimination
CLE credit based on the interviews is coming soon
Interview References:
Rachel See’s Seyfarth profile.
3:56 | The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees. (May 2022). US Equal Employment Opportunity Commission.
4:04 | Select Issues: Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964. (May 2023). US Equal Employment Opportunity Commission.
5:45 | Automated Employment Decision Tools (AEDT), Local Law 144 of 2021. New York City Council.
TRANSCRIPT
Matt Crossman, Host: You have spent your entire career at the intersection of law and technology. What did you do at the EEOC and what do you do now for Seyfarth?
Rachel See: You could ask me what didn’t I do at the EEOC. I was in the federal government for 12 years. I spent time at the National Labor Relations Board, and I started at the EEOC in January of 2017. I spent three years as Assistant General Counsel for Technology on the enforcement side, helping the Office of General Counsel with technology issues in their litigation and enforcement efforts.
Then I worked as a Special Assistant to the Chair of the EEOC. I did a stint as acting Executive Officer of the Commission before working for one of the commissioners as a Senior Counsel for AI and Algorithmic Bias doing constituent stakeholder outreach. I did a lot of educating, helping the commissioner spread the message that our existing civil rights laws apply to the use of AI.
I heard a lot of concern from the civil rights community as well as employers and industry groups and tech people about where we were on both innovation and competitiveness as well as real concerns about fairness and bias in the state of regulation.
MC: Educate us on what the regulatory environment is like now. What’s it like on the ground?
RS: The EEOC put out technical assistance in May 2022 about the Americans with Disabilities Act, and another technical assistance document in May 2023 about Title VII, addressing the applicability of these existing statutes to the use of AI and machine learning.
What the technical assistance says, very uncontroversially, I think, is that these existing laws apply to the use of technology. For that first TA document on the Americans with Disabilities Act, it's saying, if you have a screening tool and someone asks for a reasonable accommodation, your obligations under the ADA apply, whether you're doing a written test, whether you're doing one of these traditional assessments, or whether it's something AI-powered.
And likewise, the May 2023 TA talks about the applicability of Title VII and the principles we have under Title VII on nondiscrimination. There's also a mention of the 1978 uniform guidelines, just reminding people that that's the guidance on selection tools, and that the rule of thumb, and it is a rule of thumb, still applies, according to the EEOC.
Besides the EEOC putting out those TA documents in the past few years, we have a really crowded regulatory space. In the federal government, we have things coming over the horizon.
I don't want to have us rooted too much in today, but there are all of these efforts at the federal level, as well as state and even local, both proposals and, in the case of New York City, an actual law that people are trying to comply with. On top of that, federal, state and local regulators are all tinkering with AI, obligations for using AI, and the results of AI.
And we have all of these global regulatory efforts, some mature, some about to come to fruition, maybe others being proposed with all of these overlapping concepts and obligations, and sometimes conflicting definitions.
So it’s both an exciting time and a very crowded time right now.
MC: So I’m an HR executive. I’ve started to use AI. What’s the first question I should ask myself?
RS: If you've started using AI, or if you're thinking about it, you have to ask yourself first, what's the business case for using any particular AI application? It's easy to get caught up in the promise of how AI is going to change our lives.
And I’ve been there. I’ve been one of the people saying that, yes, this is amazing technology. It has all this potential, but if you’re an HR person looking at a specific tool and a specific application, you have to say, here’s the benefit that I think I’m going to get out of it.
And once you have the benefit in mind and a clear understanding of why you’re using it and what you’re getting out of it, you want to be able to answer or at least think through how you explain how this technology works. And how are you going to explain, especially for HR applications that matter to people and their livelihoods, not just how does this model work, but how do you know? How does the vendor or developer know or prove or validate that the model isn’t unlawfully biased?
And so there’s some element of doing your work on developing the vocabulary for talking about what the tool does and how it’s been validated.
But you also have to think about being questioned by stakeholders, customers, employees, candidates, litigants or regulators, and how you are going to explain all of these issues to a non-technical person.
Probe what the vendor is able or willing to do to answer those questions, how satisfied you are, and how you're able to assess what you're hearing from the vendor. Or if you're developing the tool in-house, what confidence do you have in your development processes and your people's ability to talk about those processes in an adversarial setting?
MC: Do you have to be able to explain the tech?