Using AI in Employment Decisions

Podcast Episode Transcript

Host: Hello, and welcome to Prevention and Protection, the United Educators (UE) risk management podcast. Today, Heather Salko, UE Manager of Risk Research, and Alyssa Keehan, Director of Risk Management Research and Consulting at UE, will discuss the topic of AI and employment. Before we begin, a quick reminder to listeners that you can find other UE podcasts, as well as risk management resources, on our website, www.ue.org. Our podcasts are also available on Apple Podcasts and Spotify. Now here’s Alyssa.

Alyssa Keehan: Well, thank you. And welcome, everyone. As you know, with the release of ChatGPT in 2022, the potential of generative artificial intelligence, or gen AI, to revolutionize many aspects of our lives has been the focus of much buzz. Today, we will dive deeper into the fascinating and complex world of AI and employment at higher education institutions. Specifically, how are colleges and universities using AI in the employment context, maybe in ways we don’t even realize? What are the dangers, the potential benefits, and where are we headed? These are just some of the big issues we’re going to explore in today’s program with my colleague and podcast speaker, Heather Salko. Heather, thank you so much for joining us today.

Heather Salko: Alyssa, you are most welcome, and I’m so glad to be here.

Keehan: Heather, I know you’ve done a lot of research on the topic of AI and employment at educational institutions. In fact, you’ve authored a few great resources on the topic that I really want to urge our listeners to check out. And we’ll be sure to link to those resources on the podcast landing page for all of you. One is titled Using Artificial Intelligence in the HR Lifecycle, and I know you’ve also written about the need for colleges to have an AI policy or guidelines.

Heather, while everyone is talking about AI and its potential these days, how is it being used in HR right now?

Salko: First, Alyssa, let me say thank you again for having me on to speak about this important topic. It’s one that’s very interesting to me, and there are so many ways our conversation can go from here, but we’ll try to keep this to the most relevant things for people. I will reiterate what you said earlier about the resources we have on this topic, including the HR publication you mentioned, which we’ve linked to on the landing page and which has been updated recently. Second, I want to issue a caveat to everyone listening that the technology in this area is changing rapidly and expanding, so I’m going to speak to some broad principles today so that the specifics aren’t quickly outdated.

As to your question, AI is showing up in HR systems in so many ways, and it may not always be obvious to those who are using those HR systems. Human Resources has, for a very long time, had a relationship with AI tools, including what has long been known as people analytics. So this history is there.

There are currently hundreds of software programs and tools available to you and your institution, and again, you may already be using them as part of your process. Some of these are features automatically integrated into your current talent management systems. For example, automatic resume screeners make a first cut of submitted resumes by looking for keywords you’ve put in your job posting. Other common examples include interviewing tools, including those that use facial recognition or analyze interviewees’ emotional responses; they can help screen candidates during their interview for certain personality traits you may be looking for. You may be using chatbots or automated skills tests. You may also be using AI recommendations in your promotion decision-making tools, or even a tool that helps determine whether someone should be let go from their position.
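To make that first-cut keyword matching concrete, here is a minimal sketch in Python. It is an illustration only, not any vendor’s actual implementation; the function name, keyword list, and match threshold are all assumptions.

```python
import re

def keyword_screen(resume_text: str, posting_keywords: list[str],
                   min_matches: int = 2) -> tuple[bool, list[str]]:
    """First-cut screen: pass a resume along if it mentions enough
    of the keywords pulled from the job posting."""
    text = resume_text.lower()
    # Word-boundary match so "java" doesn't also count "javascript"
    matched = [kw for kw in posting_keywords
               if re.search(rf"\b{re.escape(kw.lower())}\b", text)]
    return len(matched) >= min_matches, matched

# Hypothetical keywords drawn from a job posting
keywords = ["python", "budgeting", "grant writing", "supervision"]
passed, hits = keyword_screen(
    "Five years of grant writing and Python experience...", keywords)
print(passed, hits)  # True ['python', 'grant writing']
```

Even this toy version shows why such screens need human review: a qualified candidate who writes "authored grant proposals" instead of "grant writing" is silently filtered out.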

Keehan: Heather, do you have any concerns about the rush to adopt AI in any and all ways, especially in the HR space?

Salko: Yes, Alyssa, I do. There are some overarching concerns that people deploying these AI-based or AI-infused tools should be thinking about. I’ll highlight two for you.

The first is the training data set that’s been used. You need to know what data set was used to train the tool you are using in your system. For AI to work, it needs to be trained to look for certain characteristics and then to make predictions about new information based on that training data. For example, ChatGPT, a general AI tool that many people are very familiar with, combed the internet and public databases for its training data. All of its output is based on the material it was trained on, so any answers it gives you are limited by its own training data. And of course, there’s always a concern that the underlying training data has some problems or is not fully representative of what the tool is being used for. It is important to remember that these tools and their capabilities are advancing and changing every day, as I said earlier.

And number two would be bias. This, of course, is closely related to the training data issue: outputs from some of the tools may not be accurate, or they may reinforce stereotypes that are in the training data itself. There are many examples from the media that I won’t get into, but one in particular is well known. A large tech company wanted to hire people who were similar to the workers it had already determined to be successful at the company. So the company fed some of the traits of its most successful employees into a database and then screened candidates against what it had determined to be successful characteristics.

Well, the company needed to cease this process, and quickly, because the tool was screening out people of color and women to a large degree and suggesting mostly men as viable hiring candidates. That was because the tool had been screening on things like college attendance, college major, and relevant experience, but also things like hobbies, in order to find the best match for a cohesive employment group. What the company found was that the tool was creating a very homogenous group and not giving opportunities to people outside that group. So the company quickly stopped using the tool while it was still in the training phase and did not develop it any further. And again, bias goes hand in hand with the training data issue we just discussed. You should also remember that humans have their own biases, so it’s not only the AI tool that may be biased; human biases can be manifested in the data you choose to train on, as well as in the outputs you decide to rely on.

Keehan: Yeah, those are such great points. Heather, are there any other concerns you think we should discuss today?

Salko: Well, yes, of course. We could talk for a long time, but I want to highlight something that is a concern whenever you’re using an automated system or working with personal information: privacy and data security, which go hand in hand. On the privacy side, inserting training data into your tool can compromise personally identifiable information, or PII, about your employees. You should know who has access to the data you’ve entered into your AI-based tool. Can your vendor access that information for its own training purposes? Where is the data stored? These are the kinds of questions you might ask about any private information.

And I’ll just point out, even if you are working on what is being called a closed system, meaning one accessed only by your institution under a license, you should still think about data security. Closely linked to privacy, think about how the data you’ve entered and your outputs will be protected, and who will be responsible if there’s an incident that compromises security. Of course, any tool may also be subject to legal or compliance oversight depending on what the tool is and its use or purpose. Consider, for example, the confidentiality of medical records that people may provide if they’re seeking an accommodation through your HR system.
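As one concrete, if partial, safeguard tied to those privacy questions, here is a minimal sketch of redacting obvious PII from employee text before it leaves your systems, for example before it is sent to an external vendor tool. The patterns and labels are illustrative assumptions; no short pattern list catches all PII, so treat this as a starting point, not a complete control.

```python
import re

# Hypothetical pre-processing step: mask obvious PII before any
# employee text is sent to an external AI vendor. Incomplete by design.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each match with a bracketed label, e.g. [EMAIL]."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Reach Jane at jane.doe@example.edu or 555-867-5309."))
# Reach Jane at [EMAIL] or [PHONE].
```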

I do want to point out a resource to consider reviewing: the MIT AI Risk Repository, which you can find at AIrisk.mit.edu and which we will link to on the landing page. It’s a great interactive database of risks that you might want to review. You can search by topic or by individual risk; by domain, for example, privacy or misinformation; or by causal factor, that is, whether a risk stems from the AI or from humans. It’s definitely worth looking at.

Keehan: OK. So given the different ways that AI is being leveraged in the employment context, can you talk about how decision-making tools are being used?

Salko: Sure, Alyssa. These decision-making tools are really some of what we discussed earlier: resume screeners and tools that analyze performance and recommend promotions or terminations. All of these are only as good as the training data set they were developed with, as we discussed earlier. This type of decision-making tool is behind the drive to regulate AI in many states.

When you’re using these types of tools, you need to think not only about the starting training data, as we keep saying, but also about the data being fed into the tool on an ongoing basis. There is concern about data drift over time, which is when new information begins to alter the algorithm’s behavior and bias can start to creep in. Some people ask how these tools are any different from, or worse than, the human bias that may exist in the hiring, management, and firing process. Much of the concern about AI tools stems from the lack of clarity as to how decisions are being made in the black box, if you will, of the algorithm, especially as these programs become more and more sophisticated and factor in multiple data points. That may be manageable if you have a small applicant pool or a small workforce, but these tools become potentially more problematic as the data sets grow.
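To illustrate the data drift Heather describes, here is a minimal monitoring sketch that uses a two-sample Kolmogorov-Smirnov test from SciPy to compare one input feature’s distribution at validation time against incoming data. The feature, sample sizes, and threshold are hypothetical; real drift monitoring would cover many features, including categorical ones.

```python
import numpy as np
from scipy.stats import ks_2samp

def drift_check(baseline: np.ndarray, current: np.ndarray,
                alpha: float = 0.01) -> bool:
    """Flag drift when an input feature's distribution differs
    significantly from the data the tool was validated on.
    True means: re-audit the tool before continuing to rely on it."""
    result = ks_2samp(baseline, current)
    return result.pvalue < alpha

rng = np.random.default_rng(0)
# Years of experience in the applicant pool, then vs. now (simulated)
validated = rng.normal(loc=8.0, scale=3.0, size=5000)
incoming = rng.normal(loc=5.5, scale=3.0, size=1200)
print(drift_check(validated, incoming))  # True: the pool has shifted
```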

Another concern is that some of these tools may be inherently biased against certain classes of workers, such as women or people with disabilities, who may have resume gaps or may not match the training “norm” of the interviewing, evaluation, or performance management tools you’re using, especially those that monitor an applicant’s emotional response or affect, or the time it takes to complete a task. This is one reason states are looking to regulate the use of these tools, often through notice or opt-out requirements or by requiring routine bias audits. I’ll just note that there’s no comprehensive federal law governing AI and that some prior guidance from the EEOC and other government agencies issued under the Biden administration has since been withdrawn.

Keehan: Wow. There’s a lot there to digest. So what are some ways that educational institutions can manage the different risks with these tools you’ve identified?

Salko: Yes. It’s important to think about this. There are a few things you can do to help manage these tools and, of course, to use them appropriately.

First, understand who has responsibility for using them: who monitors any changes the vendor may make, who makes decisions about the data you’re going to input, and when and how to deploy the tools themselves. These decisions should be made by more than one person and revisited periodically.

Second, I would say avoid overreliance on the tools. Features are constantly being updated and added, and, in a busy world, there is real temptation to put more and more workload onto an automated system to free up people’s time. But the more you put on these systems, the more control you lose over your HR process.

Third, be transparent. This may become a legal requirement in your state, as I mentioned earlier, so be prepared to be upfront about when and how these tools are being used in your process and make sure that someone, back to that number one, owns that responsibility.

Fourth, I think you should engage in a continuous feedback loop. Are these tools accomplishing what you want them to do? What improvements can be made? You need to be constantly testing for bias. There are tools available to test AI algorithms; consider engaging them periodically to make sure your systems are functioning the way you want them to.
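On testing for bias, one widely used check is the impact ratio behind the classic “four-fifths rule” from adverse-impact analysis: each group’s selection rate divided by the highest group’s rate. Here is a minimal sketch; the group names and counts are hypothetical, and a real bias audit would go well beyond this single metric.

```python
def impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Each group's selection rate divided by the highest group's rate.
    Under the conventional four-fifths rule, a ratio below 0.8 is a
    red flag worth investigating, not automatic proof of bias."""
    rates = {g: selected / total for g, (selected, total) in outcomes.items()}
    best = max(rates.values())
    return {g: round(rate / best, 2) for g, rate in rates.items()}

# Hypothetical counts: (candidates the tool advanced, candidates screened)
print(impact_ratios({"group_a": (120, 400), "group_b": (45, 300)}))
# {'group_a': 1.0, 'group_b': 0.5} -> group_b warrants a closer look
```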

And finally, of course, it’s always important to retain human oversight. Yes, these are programs, but you need to keep a human in the loop in the decision-making process: someone should review the work and the outcomes and make the final decision about any of your HR actions. Keep people involved in as many phases of the process as possible, and use these tools not as a substitute for your work, but as something that benefits and augments it.

I also want to note that we are talking today about AEDTs, or automated employment decision tools, but people may also be using large language models, or LLMs, to make decisions, which might be problematic depending on how they’re being used. Some institutions are using them to evaluate, for example, writing samples or other written responses, depending on what the job is being screened for. A recent paper in Nature written by Stanford researchers found that LLMs consistently rated people who wrote in what was termed African-American English lower and were less likely to match them with any job. So I would just say be mindful of the way in which you are using AI-based tools, because there are many, many ways in which problematic bias can come up.

Keehan: With these various and problematic uses of AI in the employment context, what does the legal landscape look like? I mean, are there any legal developments that institutions should be aware of?

Salko: Alyssa, I mentioned state law efforts before. These are constantly changing, so it is important to work with your counsel, based in your jurisdiction, to understand your state’s legislative efforts. A number of bills have been introduced; some are being passed, and some are dying in the legislative process. So it’s important to keep track of where things stand in your state. You’ll also want to track your county or your city, again, your municipality. For example, New York City has a law regulating the use of AEDTs in employment decisions. So you should be keeping an eye out for those types of developments, as well as any federal guidance that may be coming.

Keehan: Well, you’ve given us a lot to think about with respect to AI’s use in HR at colleges. I mean, really, it’s a lot. To help our listeners prioritize their efforts in this area, any final words or recommendations for institutions that might just be getting started with getting a handle on the use of AI in managing their workforce?

Salko: I think I would urge people to just understand how your institution is using AI in its HR process or what tools might be being deployed. Don’t be afraid to ask questions to understand this area more.

Keehan: This has been such a great discussion on a fascinating emerging topic. Heather, thank you so much for taking the time to cover the concerns raised by AI in the employment context, as well as giving us some helpful mitigation strategies.

Salko: Oh, Alyssa, thanks for having me on. And again, I’ll just point people back to the landing page, which links to some of these AI resources we’ve discussed, but also to keep checking back at www.ue.org for other risk management materials.

Host: From United Educators Insurance, this is the Prevention and Protection Podcast. For additional episodes and other risk management resources, please visit our website at www.ue.org.
