AI in hiring might do more harm than good


The use of artificial intelligence in the hiring process has increased in recent years, with companies turning to automated assessments, digital interviews, and data analytics to parse resumes and screen candidates. But as IT strives for better diversity, equity, and inclusion (DEI), it turns out AI can do more harm than good if companies aren’t strategic and thoughtful about how they implement the technology.

“The bias usually comes from the data. If you don’t have a representative data set, or any number of characteristics that you decide on, then of course you’re not going to be properly finding and evaluating applicants,” says Jelena Kovačević, IEEE Fellow, William R. Berkley Professor, and Dean of the NYU Tandon School of Engineering.

She says that without appreciable diversity in a data set, it’s impossible for an algorithm to know how individuals from underrepresented groups would have performed in the past.

"Instead, your algorithm will be biased toward what your data set represents and will compare all future candidates to that archetype," she said. “For example, if Black people were systematically excluded from the past, and if you had no women in the pipeline in the past, and you create an algorithm based on that, there is no way the future will be properly predicted. If you hire only from ‘Ivy League schools,’ then you really don’t know how an applicant from a lesser-known school will perform, so there are several layers of bias.”