A recruiting revolution: why did NYC delay its landmark AI bias law?
Dec 26, 2022
8 mins
January 1st, 2023 was supposed to mark the start date of a groundbreaking new law. In an era when technology evolves so fast that governments can barely regulate it, the New York City Council managed to approve the world’s first legislation cracking down on the artificial intelligence used in recruitment and HR. It marked a monumental step towards corporate transparency and responsible AI. But on December 13th, the law was delayed.
Now scheduled to come into force in April, Local Law 144 of 2021 would bar companies from using any “automated employment decision tools” unless they pass a bias audit. Any machine learning, statistical modeling, data analytics, or artificial intelligence-based software used to evaluate employees or potential recruits would have to undergo the scrutiny of an impartial, independent auditor to make sure it isn’t producing discriminatory results. The audit would then have to be made publicly available on the company’s website. What’s more, the law mandates that both job candidates and employees be notified that the tool will be used to assess them, and gives them the right to request an alternative option.
To find out why the legislation was pushed back, and what’s at stake if the current version changes, we interviewed corporate-recruiter-turned-AI-ethicist Merve Hickok. A former HR executive for Fortune 500 companies, she left the business world to focus on human rights. She is the founder of AIethicist.org, a platform that provides curated research and reports on AI ethics, and co-founded the Center for AI and Digital Policy, an independent think tank.
Involved since the very first draft, Hickok tells us why this law is so tricky to put in place, what she’s seen that’s worrying her, and how AI bias could have big consequences—good and bad—for equality in the job market.
This interview was conducted prior to December 23rd, 2022, when a new version of the legislation was proposed by NYC’s Department of Consumer and Worker Protection. The public hearing for this draft is now scheduled for January 23rd, 2023.
Could you tell me a bit about your professional background, what made you leave corporate recruiting for ethics, and what you’re working on now?
I was in Human Resources for Merrill Lynch and Bank of America for many years. I held a number of senior HR positions: at Bank of America, I was responsible for diversity recruitment, benchmarking different recruitment technologies, etc., for almost 30 countries.
As I was telling potential recruits about the different opportunities in investment banking, I was also hearing some of their obstacles when applying for these jobs. So that was kind of my ‘aha’ moment: “Okay, if you don’t do this stuff responsibly, you might actually be locking people out or putting extra obstacles in front of already disadvantaged communities.”
Video interviewing technologies were literally just coming up. I realized that, for example, if you ask a candidate to do a video interview and record themselves, they might not have access to broadband internet, or a quiet room, or a camera. As I left, AI products started to become more prevalent, and these issues became more crucial and ubiquitous.
How did bias in the recruitment process get worse with AI?
Some AI recruitment products are launched without any scientific basis, and some are developed without safeguards. There are many ways the recruitment process might be affected by biased AI systems. For example, some of these AI models are programmed to detect certain keywords or combinations of words in resumes, or in answers to interview questions. You might give a very sophisticated or complex answer to an interview question, but if it doesn’t contain whatever the AI has been developed to detect, your response will not be considered good enough. Alternatively, the system may scan your resume and not find those keywords; the algorithm may draw incorrect correlations about which skills are relevant for a specific role; or, if a voice system is integrated, it could judge the pitch or tone of your voice against what has been defined as “normal” in its system.
I think if you went to an in-person interview and the recruiter told you they’re going to decide whether to hire you or not according to the pitch of your voice or the complexity of your words, you would have some issues with that. Such systems also don’t have good accuracy rates for people with accents, non-native speakers, or people with speech impairments … just like voice assistants.
Another problem is the use of computer vision systems to recognize faces. Some pseudoscientific AI systems look at your facial structure, physical responses, etc., and make inferences about your character. To add insult to injury, they often don’t recognize darker skin tones or women accurately, or they pick up noise (i.e. other background images) in the video. So you have these really pseudoscientific systems making judgments about you.
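To make the keyword-detection problem Hickok describes concrete, here is a minimal, hypothetical sketch of that kind of screener. The keyword list, threshold, and sample answer are all invented for illustration; real products are more elaborate, but the core failure mode is the same: a strong answer phrased in the wrong words scores zero.

```python
# Hypothetical sketch of a naive keyword-based screener. The keywords
# and threshold are invented for illustration only.

REQUIRED_KEYWORDS = {"stakeholder management", "kpi", "agile", "sql"}
THRESHOLD = 3  # minimum number of keyword hits to pass the screen

def passes_screen(text: str) -> bool:
    """Return True if the text contains enough of the required keywords."""
    text = text.lower()
    hits = sum(1 for kw in REQUIRED_KEYWORDS if kw in text)
    return hits >= THRESHOLD

# A sophisticated answer phrased in different words scores zero hits:
answer = ("I coordinated priorities across engineering and sales, "
          "defined success metrics, and iterated in short cycles.")
print(passes_screen(answer))  # False: rejected despite describing the same skills
```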
Is AI bias more dangerous than a human recruiter’s bias?
Yes and no. The good thing with AI is that you’re standardizing a process: you code something, and then that rule applies to everyone. However, there’s a big chance that the data and the code, which were created by humans, will be influenced by their bias. And then the code makes that biased decision about thousands and thousands of people. When it’s just human bias, there’s only so much damage you can do: if it’s two recruiters assessing resumes all day long, maybe they get through 100 candidates a day. An AI system can run through thousands of candidates in a matter of minutes, if not seconds. So the scale and speed are very different.
NYC’s new legislation was supposed to take effect on January 1st but has been moved to April 23rd. I’ve heard it’s because the legislation wasn’t clear enough for businesses to be able to comply with the law. In your experience, is this true?
There are things to develop and clarify further. However, none of it justifies delaying this legislation past April.
There are many responsible vendors and employers. However, there are also a lot of businesses out there that do not want to be transparent about the specific outcomes of the AI systems they use. One of the proposed rules was that employers need to publish the exact selection rates and impact ratios for different applicant groups: for example, how many Black versus white candidates, or male versus female candidates, have gone through the system, and how many people from one group were selected compared with another. If the proposed rule went through, it would mean that all these employers (those without a business necessity requiring them to select differently) would have to admit that they might not be hiring diversely. Or, if they had been avoiding questions about the AI systems they use until now, they would have to take action.
My biggest concern is that some businesses will put a lot of pressure on the New York City Council to one, narrow the scope and application of the legislation and two, remove some of the transparency requirements.
You can read all the comments on the proposed law submitted to the New York City Department of Consumer and Worker Protection by advocates, businesses, and industry groups.
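To make the arithmetic behind the selection rates and impact ratios Hickok mentions concrete, here is a minimal sketch. The applicant counts are invented, and the four-fifths threshold shown is the EEOC’s traditional rule of thumb for flagging possible disparate impact, not a figure taken from the New York City law.

```python
# Minimal sketch of the selection-rate / impact-ratio arithmetic.
# The applicant and selection counts are invented for illustration.

applicants = {"group_a": 400, "group_b": 250}  # candidates screened per group
selected = {"group_a": 120, "group_b": 45}     # candidates advanced per group

# Selection rate: the share of each group's applicants who were selected.
rates = {g: selected[g] / applicants[g] for g in applicants}

# Impact ratio: each group's rate divided by the highest group's rate.
best = max(rates.values())
impact_ratios = {g: rates[g] / best for g in rates}

for g in applicants:
    print(f"{g}: selection rate {rates[g]:.2f}, impact ratio {impact_ratios[g]:.2f}")
# group_a: selection rate 0.30, impact ratio 1.00
# group_b: selection rate 0.18, impact ratio 0.60

# The EEOC's traditional "four-fifths" rule of thumb treats an impact
# ratio below 0.8 as evidence of possible disparate impact.
flagged = [g for g, r in impact_ratios.items() if r < 0.8]
print("Flagged for possible disparate impact:", flagged)  # ['group_b']
```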
What do you think are the most important updates this legislation needs?
Definitely making sure that, as proposed, the results are transparent. And second, that the audits be conducted independently. So, right now, because there isn’t any established audit ecosystem, a lot of the companies are going out to different vendors who identify themselves as auditors. Some of them are going to their internal audit department. Some of them are going to their lawyers. So that independence needs to be clear.
Are there enough independent auditors out there for these businesses to use?
No, because there is no existing legislation on bias audits. However, disparate impact analysis [a calculation measuring whether selection procedures disproportionately exclude people based on race, color, religion, sex, or national origin] is not new. It’s been around for decades, and it has always been something organizations conducted themselves.
If this legislation passes as it’s written right now, and all these companies suddenly need independent auditors, do you think it would spur the growth of a whole new industry?
Yes.
But is there, in New York City, a license or a permit for auditing bias in tech?
Right now, there is not. That’s why everyone is defining “bias audit” in a way that aligns with their own perspective—some very narrowly, and some vendors asking more in-depth, more responsible questions. There’s a great organization called For Humanity, which is trying to build that ecosystem of independent audits for AI systems and asking: “What should be the widely accepted audit criteria? What should the auditors’ certification look like? What should the independence criteria—in terms of remuneration, number of clients, conflict of interest, and code of ethics—look like? How should the auditor report?”
In your opinion, what should constitute an audit for AI bias?
The data, the algorithms, and the organizational decisions that went into it.
First, there needs to be an audit making sure the data you build your algorithm on is not biased to start with, and that it reflects the context you’re deploying it in and the population you’re applying it to. So, for example, if you build your algorithm using a population in, I don’t know, Texas, you cannot use that same training set in Belgium or Italy. Those labor forces are different.
The models [the way an algorithm is programmed] should also be audited. What decisions are made in the model? What is the accuracy rate? What design decisions went into it? Are the outcomes fairly distributed or does the model produce biased outputs for different groups?
The third piece is: who made the decisions? Was the team that developed the AI model diverse? How do employers notice and manage emerging risks in the system? Have the recruiters been trained on how to use the system properly, so they’re not just taking everything at face value?
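As one concrete illustration of the model-audit piece, here is a hypothetical sketch of a per-group accuracy check of the kind an auditor might run. The records and group names are invented, and a real audit would examine far more than accuracy, as Hickok outlines above.

```python
# Hypothetical sketch of one model-audit check: measuring a screening
# model's accuracy separately per demographic group. The records are
# invented for illustration.

from collections import defaultdict

# (true_label, predicted_label, group) records an auditor might
# assemble from held-out evaluation data.
records = [
    (1, 1, "group_a"), (0, 0, "group_a"), (1, 1, "group_a"), (0, 1, "group_a"),
    (1, 0, "group_b"), (0, 0, "group_b"), (1, 0, "group_b"), (0, 0, "group_b"),
]

totals = defaultdict(int)
correct = defaultdict(int)
for truth, pred, group in records:
    totals[group] += 1
    correct[group] += int(truth == pred)

for group in sorted(totals):
    print(f"{group}: accuracy {correct[group] / totals[group]:.2f} "
          f"on {totals[group]} examples")
# group_a: accuracy 0.75 on 4 examples
# group_b: accuracy 0.50 on 4 examples

# A gap like this is a signal to dig into the training data and the
# model's features; it is not, by itself, proof of bias.
```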
Do you think most of the big AI recruiting software out there would pass your ideal audit?
No. A handful would. But a lot of players are benefiting from the vacuum of oversight and the lack of regulation. If the scope of the law and its transparency requirements are narrowed, that can create less-than-ideal audit practices, and that doesn’t serve anyone—employers, vendors, or candidates—in the long run. We need both employers and vendors to act responsibly and raise the bar for responsible innovation. That benefits them immensely in the long run—with their diversity, profits, and brand.
What are some of the changes you think job seekers will feel if the proposed changes go into effect?
You can see what kind of employer you’re facing and what their hiring practices are. You can see the data, how they make decisions and, if the results are significantly biased, how they are mitigating that. As a candidate, you’ll have a better chance to prepare yourself for the recruitment process and be better informed about the decisions that go into it.
Some people say that, with audits, AI has the potential to get rid of human bias in the hiring process and make the whole recruiting system more fair. Do you think that’s true?
I don’t know about 100%, but it does provide that auditable trail and traceability. You can go back, look at the audit trail and the model, see what kind of rules were applied to assess candidates, and reverse engineer and fix it to make it less biased. So at least you have that audit trail, and the decisions are clearer. You can’t really have that with humans to the same degree.
Obviously, this comment excludes pseudoscientific AI systems, which should not exist in the first place.
Anything else you want to add?
I want to make something really clear: this New York City law is the first legislation of its kind. New York City is the first jurisdiction in the world to mandate an AI bias audit. A lot of states, cities, and countries are looking at this as a possible example. If this works in New York City, it could be adopted in other jurisdictions. So if certain parties manage to narrow the scope of the law and water it down until it’s useless, that could have a cascading impact on a bigger level. Yes, this law can be improved significantly, but I think the New York City Council has a great, great opportunity to be a pioneer in the world and lead this effort.
Photo: Welcome to the Jungle