Media Tip Sheet: NYC Soon to Implement New Law Requiring AI Hiring Tools to Be Audited for Bias

April 18, 2023

New York City is moving closer to full implementation of a first-of-its-kind law requiring employers to audit artificial intelligence tools used in HR decisions for bias. Earlier this month, the New York City Department of Consumer and Worker Protection adopted its highly anticipated final rules implementing the new law, which officially takes effect this July.

GW's Vikram R. Bhargava

If you would like more context on this matter, please consider Vikram R. Bhargava, assistant professor of strategic management and public policy at the George Washington University School of Business. His research centers on artificial intelligence, the future of work, technology addiction, mass social media outrage, autonomous vehicles, and other topics in digital technology policy.

Bhargava’s latest research, “Hiring, Algorithms, and Choice: Why Interviews Still Matter,” was recently published in the journal Business Ethics Quarterly. The paper underscores the necessity of preserving human choice in hiring rather than relying on AI alone.

In a recent interview with Marketplace about the NYC law, Bhargava explains that problematic biases and patterns can still enter the workflow in several ways when new generative AI technologies are used in hiring and performance reviews. He says the new law also does not answer the question of whether AI systems should be used at all.

“I think the thought is that it’s going to be third-party auditors. There’s a challenge of who is well-equipped to engage in these audits, there’s a challenge of whether the audits themselves can permissibly be done given the privacy issues related to client data. And I think that even if companies are able to satisfactorily pass, or pass with flying colors, this audit, it nevertheless doesn’t settle the question of whether that’s sufficient grounds to automate the process and then defer entirely to the output that the algorithm recommends because there could be something lost there. Namely, the value of us being able to choose whom we relate to in the workplace.”

If you would like to speak with Prof. Bhargava, please contact GW Media Relations Specialist Cate Douglass at [email protected].

-GW-