AI in the Job Market
[MUSIC]
Brigid: It's The Brian Lehrer Show on WNYC. Good morning again, everyone. I'm Brigid Bergin, sitting in for Brian today. When you apply for a job today, chances are the first set of eyes on your resume isn't human. It's a piece of software scanning for keywords. Maybe later, a chatbot will interview you or an algorithm will evaluate your facial expressions. Good luck to those of you with no poker face. Employers say these tools save time and money, helping them sort through the flood of applications, but there's mounting evidence they also screen out qualified candidates, reproduce old biases, and turn hiring into an arms race where both sides are relying on AI.
Hilke Schellmann has been tracking this story. She's an investigative reporter, a journalism professor at NYU, and author of The Algorithm: How AI Decides Who Gets Hired, Monitored, Promoted, and Fired, And Why We Need To Fight Back. She joins us to talk about how algorithms are remaking the hiring process, what is working, what isn't, and what it all means for the job market. Hilke, welcome back to WNYC.
Hilke: Thank you so much for having me, Brigid.
Brigid: Listeners, have you encountered AI in your own job search? Were you interviewed by a chatbot, or are you using AI to aid your job search in any way? For you, employers or recruiters, tell us how you're using AI in the hiring process and why. Job seekers and employers, help us report this story about how AI is remaking the job search right now. What have you encountered? 212-433-WNYC. That's 212-433-9692. You can call or text with those stories. Hilke, when someone applies for a job online today, what usually happens to their resume right after they hit send?
Hilke: We all know these big job platforms, like LinkedIn, ZipRecruiter, Monster, all of those. They all use AI because often, for jobs, some employers get thousands of applications, and they don't want just a folder with thousands of resumes. They would like to have a ranking. What we see is AI or algorithmic tools will parse the resumes for keywords, maybe compare them to the job description. Sometimes we see tools that are built on folks who are currently successful in the job. It will make an assessment and a prediction of how qualified you are for the job, and basically generate a yes pile and a no pile for recruiters and hiring managers because they are overwhelmed.
We already saw this before the advent of generative AI, when companies like Google said they get over 3 million applications a year, and IBM gets over 5 million or so. Now we see even more of those because candidates can now use ChatGPT and other large language models to aid them with their resumes. Companies are overwhelmed, and a lot of them are turning to AI. It's already built into the job platforms. We also see it built into applicant tracking systems that a lot of large companies use. They used to be a glorified spreadsheet where you check, "Okay, this person applied. We interviewed them, or we rejected them." You would track applicants. Now most of these tools have AI built in, and so it's used for that as well.
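The keyword parsing and ranking Hilke describes can be sketched as a toy script. This is purely illustrative: real applicant-tracking systems use proprietary scoring, and the function names and the simple overlap metric here are assumptions, not how any particular vendor works.

```python
# Toy sketch of keyword-based resume ranking (illustrative only;
# real applicant-tracking systems use proprietary, far richer scoring).
import re

def keywords(text):
    """Lowercase word set, ignoring punctuation and numbers."""
    return set(re.findall(r"[a-z]+", text.lower()))

def score(resume, job_description):
    """Fraction of the job description's keywords found in the resume."""
    wanted = keywords(job_description)
    if not wanted:
        return 0.0
    return len(keywords(resume) & wanted) / len(wanted)

def rank(resumes, job_description):
    """Sort resumes into a ranking, best keyword match first."""
    return sorted(resumes, key=lambda r: score(r, job_description), reverse=True)
```

Even this crude version shows the failure mode discussed later in the conversation: whatever words happen to correlate with past hires (baseball, a first name, a city) get scored, whether or not they relate to the job.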
Brigid: Hilke, when you describe this, you mentioned Google, you mentioned IBM. Is this really something that's just being used in the tech space? How widespread is AI use in hiring, and where are the most common points where applicants will be encountering it?
Hilke: We see it's very widespread. We know from surveys that even a couple of years ago, 99% of Fortune 500 companies were using this kind of technology in different ways. We see this usually at very large companies, companies that have to hire a lot, that have high turnover, retail, large chains. We also see this in investment banking and other sectors. Then we also see it even at midsize and smaller companies, because the job platforms now cater to them. LinkedIn Recruiter and others use this kind of technology to serve smaller places.
If you apply to a small nonprofit arts organization that has three employees, they're probably not going to use this. This is for large companies that get a volume of applicants. Also, it's often used for recent college graduates, because there's often an influx of people applying who don't have a big work history or 10 years of experience. A lot of companies use AI there as well.
Brigid: Your reporting found that algorithms seem to reward some words on resumes and penalize others: for example, the word baseball is rewarded, but the word softball is penalized. Go ahead.
Hilke: Those were really disturbing findings. When I set out, I thought, "Oh, AI is moving into HR. This is probably a good thing." Then I started talking to folks like lawyers and psychologists who are at the table when AI vendors pitch a tool and companies may want to buy it. Some companies do their due diligence and bring in outside counsel. One lawyer told me all of the tools she reviewed had deficits. Another said 1 in 4 did. They found questionable keywords that some of these resume parsers or screeners use.
In one example, if you had the word baseball on your resume, you got more points, and if you had the word softball on your resume, you got fewer points, which probably points to gender bias because in the US, more women play softball and men play baseball or have that as a hobby. That doesn't come from parsers that just look at the job description because this wasn't a baseball or softball job. This was just a generic desk job. I think what happened here, and that we see, too, is some companies use resumes from current employees to build some of these AI tools, and the AI tool probably did what it does best.
It does a pattern analysis, does a statistical analysis. I guess in this company, maybe it had hired more men in the past, they were into baseball. It found this was a predictive keyword and used that. Unfortunately, we have to think about how AI isn't like a moral entity. It doesn't understand these things that we should be looking for in the job search. Obviously, we shouldn't look at your hobbies because they can say about more who you are or your gender than actually your skills or capabilities.
Other examples were, like in one case, African-American was used as a keyword. Another case involved locations, like if you had the word Syria or Canada on your resume. Obviously, that might be discrimination based on national origin. Another case was the word Thomas was a predictor for success-
Brigid: Interesting.
Hilke: -if you had that word on your resume. That probably points to more hiring bias in the past, that some companies we know, especially in tech jobs, we have a larger gender disparity. That might be hinting at that. Then if you use that data to build these algorithms, obviously, that bias that is built into the data will be amplified here, and I think that's the concern that I saw over and over again.
Brigid: I want to credit my producer with this insight from what you've told us so far. It sounds like we should all change our names to Thomas Baseball, and our resumes will do better as we're applying for some of these open positions.
Hilke: Yes, I know. In some cases, this is a problem as well, that often the AI tools are calibrated for different companies or for the different sections within a company. Some of the advice I can give, it's not always going to be the same thing. Maybe baseball will help, but I'm not sure if it will help all the time.
Brigid: Hilke, I want to bring in some of our callers who I know have questions and stories. Let's go to Carla in Garfield, New Jersey. Carla, you're on WNYC.
Carla: Hi. [inaudible 00:08:58]. Sorry. I didn't have a job early this year, and I applied to so many jobs. It didn't work. For a couple of months, I didn't have a job. Then my husband's friend advised me to use these AI tools that he gave me. Once I started using them, they would match the job description with my experience. Sometimes words can be said in different ways, and because they match it, after I fixed my resume, I got more interviews, and I got a job after a couple of weeks. Definitely, that made a difference for me in finding a job.
Brigid: Carla, that's a very promising story. Can I ask you what field you were looking for work in?
Carla: Medical device.
Brigid: Okay, great. Hilke, it sounds like for some of the concerns that have been raised in terms of potentially replicating biases that in building some of these AI tools, there may also be some opportunity for job seekers to be able to improve their odds, game the system, however you want to say it.
Hilke: Oh, yes, absolutely. There are tools out there for job seekers that help you. I always suggest this to job seekers: find those tools, load up the job description, load up your resume, and it will tell you how much overlap most AI tools will probably predict that you have with the job. Aim for like 80%, 90% overlap. Don't aim for 100%, because some AI tools will throw you out because they think you just copied the job description. Definitely use those to your advantage.
Also, we see a lot of companies turning to skills-based hiring, looking less at your credentials, what school you went to, if you have a bachelor's degree. Generally, that's a good thing. You want to make sure that on your resume, you have a skills section clearly labeled and have all the skills, including soft skills, on there so that a parser can actually find that information and parse it in the right field. Definitely, I think it can be absolutely helpful to folks.
I've heard from so many job applicants, like Carla, that they apply, apply, apply, and it's like hundreds of applications. I think you can use AI to streamline and try to help you, but often it's really not you. This is just a numbers game, and the numbers are against you. Some folks joke that they feel like companies use AI, now candidates use AI. It's AI against AI, may the better AI win. I think that's a little sad, but that's what we see right now. It's this AI race on both sides.
Brigid: I want to bring in Dante from Brooklyn. Dante, you're on WNYC.
Dante: Hey, how are you guys doing? Thank you for having me on. I was recently interviewed by an AI HR rep. To use your term, it was, sadly, a pretty pleasant experience. What I mean is that the AI has all the time in the world. It's not a real person. It asked me very probing, analytical questions about my resume that I think a real person, who has a real agenda and a real time crunch, wouldn't ask.
For instance, I am a writer and a TV producer, and usually I get interviewed for my TV producing roles. The HR AI asked me about some of the short essays and short stories I've published that I've listed on my resume, which was actually weirdly touching, and it let me talk for five, seven minutes at a time. I think it was probably looking at my face, how my face reads on screen, and the economy of words that I used as well. It was definitely not a bad experience, just a markedly different experience.
Brigid: Wow, Dante, thank you so much for that story. It's really fascinating. It feels like almost sci-fi to me since I haven't experienced it yet, Hilke. He does raise this idea that some of these tools are being used to investigate emotional analysis, to try to read candidates' facial expressions. How widely used is that kind of technology?
Hilke: That, luckily, is not totally widespread use, but we have seen this by one of the largest providers a few years back, a company that all my students know because they're so ubiquitous in the world of one-way video interviews, where, like Dante has this kind of new experience, that you have an avatar talking to you, an AI tool that is asking you questions. What is, I think, a little bit more ubiquitous is where you have no one on the other side. You get a link and you log in, and then maybe a video pops up and saying, "Hey, thank you for applying to company B. We have some questions. Why are you excited about this job?" Then you get a couple of minutes to prepare, and then you basically record yourself answering.
What we've seen, some companies use emotional expression analysis. They would check your facial expressions. If you were smiling, the tool would presume you're happy. If you're frowning, it would presume maybe you're upset. It would also compare it to people who have taken that same job interview earlier for that company, and they were now hired. Those are the successful employees, and it would compare you to that.
I was first totally blown away when I saw that, and I was like, "Wow, what an interesting new way to hire." Then I started talking to experts who said, "Wait a second. There's no science here. We don't know what facial expressions are predictive of your success in a job interview. You're not even on the job."
Brigid: Interesting.
Hilke: There are all kinds of problems. The largest company in the space did drop the technology, but it's a little bit like a whack-a-mole because now I've seen, again, other companies crop up that use this. They also use intonation of voice analysis. Same problem. It's actually not predictive. Probably a lot of listeners can relate to this. I have also, in job interviews, smiled a lot, even though I wasn't happy. A computer vision technology would assume I'm happy if I'm smiling. It would detect that, but obviously, I'm not happy.
You can also see that there are so many false positives here. Dante, I'm really glad to hear that you had such a positive experience. I think one of the good things that AI in hiring has brought to the table is what experts call structured interviews, where everyone gets asked the same questions. I'm not advocating going back to the good old ways of traditional hiring with humans involved, because humans have a lot of bias, too. We know that humans don't spend a whole lot of time on resumes. They have no time to prepare for job interviews.
We often chat with them: "Oh, we went to the same school. That's so cool." That's totally understandable, I get it. You want to make a human connection with an interviewer, but the interviewer, as soon as you went to the same school or you have something in common, sees you differently and doesn't always look just at your skills, capabilities, and experience to really judge you for the job. I think AI can be hugely helpful. I just found that some of the ways it's been applied have simply automated old, problematic practices, and that has led to these biases and consequences. I'm so glad, Dante, you had a great experience.
Brigid: I'm glad that you also kind of picked up on his enthusiasm because I think, as someone who is learning about AI, as many of us are, the default is to have some skepticism and to be concerned about the potential consequences, particularly for workers who already face some disadvantages. I'm thinking of older applicants, people of color, people with disabilities. Are there detrimental effects on those types of applicants when these tools are so widely used?
Hilke: Yes. I think the clearest way to see it is with folks with disabilities. I did so many one-way video interviews and tested all of this kind of technology. One of the things that employers often send to recent graduates is games to understand their personality. The graduates don't have a long work history, so the idea is, let's find out their personality, whether they're a good fit here. They're versions of video games with a very early, '70s or '80s aesthetic. In one of them, I had to blow up balloons and collect money.
In one of the games, I had to choose between a hard task and an easy task. When I chose the hard task, I had like 12 or 15 minutes to just hit the spacebar as fast as possible. I was like, "What does this have to do with the job?" When I played the game with Henry Claypool, who is quadriplegic, he was really concerned, like, "What if people have a motor disability? They may not be able to hit the spacebar as fast as possible." They might be absolutely qualified for the job, but they wouldn't be included if they started playing the game.
I think we see this again and again. For example, people with disabilities are already underrepresented in the workforce, and often these tools are trained on the data of current employees, where they're already underrepresented. Even if you have somebody with autism in the data, somebody else who's applying might also be autistic, but their disability comes out very differently. There's so much individual expression of disability that a system built on statistics and pattern recognition may not be able to catch all those people. That's a real, real concern here: it amplifies those biases and pushes people who are already on the margins further to the margins.
Brigid: Let's sneak another call in. Let's go to Henry in Brooklyn. Henry, you're on WNYC.
Henry: Hi. I'm trying to help my recent college graduate son apply for a job and so forth. Traditionally, the recommendation was keep your resume to one page, but does that even matter anymore, since it's all done virtually and via AI and all the rest of it?
Hilke: I can pick that up. Thank you, Henry, for calling. You're totally right. You don't actually have to keep it to one page. The AI will read all the pages. We used to give people the advice to stand out, maybe with colors or a two-column layout; none of that now. We have to think about making your resume machine-readable: short, concise sentences, anything that can be quantified. Don't just say that you saved your company millions of dollars; give the exact amount. Be very clear and descriptive so an AI tool can pick up on that. I think those are some of the tips I can give.
Some people feel like, "Oh, now it's this resume geared towards machines, a list of skills. My personality doesn't shine through." You can always bring that in further down in the hiring funnel. At the beginning, we see a lot of AI tools being used for rejection, but usually a human still does get involved at the end. When there are 5 or 10 candidates left, a human will still do the job interview. You can bring your beautiful human resume, your one-pager with everything on it, with you then. Before that, you have to target it towards machines, because that's what you're most likely to encounter in the early stages of hiring.
Brigid: I think you're sort of answering it, but just to underscore it, are there some steps that you recommend to people to avoid being unfairly filtered out in those early stages?
Hilke: Yes. I think one thing that really surprised me: there is a large survey of over 2,000 employers that a Harvard business professor, Joe Fuller, did. He found that when companies use AI and algorithms, almost 50% of the programs will immediately reject you if you have a longer-than-six-month break in employment. You could be the most qualified candidate, but if you have that break in your work history, the tool will throw you out.
Brigid: Wow.
Hilke: I think that's really concerning because it obviously has nothing to do with your capabilities or your skills, anything like that. I always suggest to people: maybe you were freelancing during that time, or you were taking care of a parent. Try to close that gap. I think also these AI tools that I've mentioned online can be really helpful to make sure that the keywords of the job description are found in your resume. Don't be cute and try to use synonyms. You may encounter a very basic AI tool that cannot decipher that.
Use the exact keywords from the job description. That's probably what they're looking for. Also, put in all of your skills and put in soft skills, too. We see people looking for team efforts, high-stress environments, all of that can be really helpful. There's no limit. The AI is not going to discriminate against you if you have a lot of words on your resume, versus a human hiring manager who doesn't have enough time to go through it. An AI will read everything.
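The blunt six-month-gap rule from the Fuller survey can be sketched in a few lines. This is an illustrative reconstruction of the rule as described, not any vendor's actual code; the function name and threshold representation are assumptions.

```python
# Illustrative sketch of the blunt "six-month gap" filter described in
# the survey: reject any candidate whose employment history contains a
# gap longer than roughly six months, regardless of qualifications.
from datetime import date

MAX_GAP_DAYS = 183  # roughly six months (an assumed cutoff)

def has_long_gap(jobs):
    """jobs: list of (start_date, end_date) pairs, assumed non-overlapping.
    Returns True if any gap between consecutive jobs exceeds the cutoff."""
    spans = sorted(jobs)  # order by start date
    for (_, prev_end), (next_start, _) in zip(spans, spans[1:]):
        if (next_start - prev_end).days > MAX_GAP_DAYS:
            return True
    return False
```

The point of the sketch is how little the rule knows: it sees only dates, so a sabbatical, caregiving, or illness all look identical to it, which is exactly why Hilke advises labeling that time as freelancing or caregiving on the resume.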
Brigid: Hilke, I'm thinking back to our previous caller, Carla, who talked about how she revamped her resume and was able to find more opportunities. We're glad that she found a position in medical devices. When people are trying to do that, are there specific tools that you recommend for job seekers? I know you noted you shouldn't try to match things 100%. What specifically are the types of tools that people should consider? Are we talking ChatGPT, or are there things that are more targeted for this type of job seeker?
Hilke: There are a couple of companies. One is called Jobscan, where you upload the job description and your resume, and it will tell you how much overlap there is. I would definitely recommend that. You can also use a chatbot, of course. I think those are really, really helpful for streamlining your resume in the age of AI. Other things are helpful, too. I've heard from others who had the same problem: they didn't get any traction. I know this is very controversial for recruiters, but when they could identify the recruiter, they would contact them on LinkedIn, send them a message, and send them their resume, and that's how they found a job.
One was a software developer who had sent 146 applications. The only time they got traction was by hitting up recruiters through a message on LinkedIn, and that's how they got hired. There are different strategies here that you can use. I've also heard now that companies are so overwhelmed with the intake from LinkedIn and other job platforms that they're shifting more towards referrals, where somebody in the company refers you. Obviously, that carries the potential for bias and problems. You might hire the same people again and again because people bring their friends to the company, and there might be more bias that way.
Brigid: But it might be helpful for the job seeker.
Hilke: I understand employers; they are just reeling under the deluge of resumes. Anyone can apply. They also get people who have nothing to do with the job. We also see the rise of fake applicants, deepfakes applying for jobs, and employers are struggling with that.
Brigid: We didn't even get to touch on that today. We know that that is a big part of this, but we have to leave it there for today. Hilke Schellmann is an investigative reporter, assistant professor of journalism at New York University, and author of The Algorithm: How AI Decides Who Gets Hired, Monitored, Promoted, and Fired, And Why We Need to Fight Back.
Hilke, I'm just going to read you a brief part of a text a listener wrote in, from a tech lawyer and AI skeptic who called your book brilliant: "She writes so that non-techies can understand complex technical issues clearly, and she combines that with a nuanced discussion of bias and social justice. I've recommended this book to everyone who wants to understand the important issues AI raises, and I always include it in AI presentations and panels. Bravo." Hilke, thank you so much for joining me. I know. I thought I'd leave you on a good note.
Hilke: Thank you, anonymous text messenger. That is the best take on the book. I found people who are already affected by this, and I wanted to tell the story of the technology through their eyes, because I think that's what really gets us. It's not only about the technology; it's about the effect on all of us.
Brigid: Thank you so much, Hilke. We appreciate it.
Hilke: Thank you.
Copyright © 2025 New York Public Radio. All rights reserved. Visit our website terms of use at www.wnyc.org for further information.
New York Public Radio transcripts are created on a rush deadline, often by contractors. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of New York Public Radio’s programming is the audio record.
