Artificial intelligence is coming for hiring, and it might not be that bad
Artificial intelligence can turn hiring into an unbiased utopia, its advocates assert.
There’s certainly plenty of room for improvement. Employee referrals, a process that tends to leave underrepresented groups out, still make up a large part of companies’ hires. Recruiters and hiring managers also bring their own biases to the process, studies have found, often choosing people with the “right-sounding” names and educational backgrounds.
Across the pipeline, companies lack racial and gender diversity, with the ranks of underrepresented people thinning at the highest levels of the corporate ladder. Fewer than 5% of chief executives at Fortune 500 companies are women, and that number will shrink further in October when Pepsi CEO Indra Nooyi steps down. Racial diversity among Fortune 500 boards is almost as dismal, as 4 out of 5 new appointees to boards in 2016 were white. There are only three black CEOs in the same group.
“Identifying high-potential candidates is very subjective,” said Alan Todd, chief executive of CorpU, a technology platform for leadership development. “People pick who they like based on unconscious biases.”
AI advocates argue the technology can eliminate some of these biases. Instead of relying on people’s feelings to make hiring decisions, companies such as Entelo and Stella.ai use machine learning to detect the skills needed for certain jobs. The AI then matches candidates who have those skills with open positions. The companies claim not only to find better candidates but also to pinpoint those who may have previously gone unrecognized in the traditional process.
Stella’s algorithm only assesses candidates based on skills, said founder Rich Joffe. “The algorithm is only allowed to match based on the data we tell it to look at. It’s only allowed to look at skills, it’s only allowed to look at industries, it’s only allowed to look at tiers of companies.” That limits bias, he said.
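The skills-only matching Joffe describes can be sketched in a few lines. This is a toy illustration under my own assumptions, not Stella.ai's actual algorithm: each job and candidate is reduced to a set of skills, and the score is simply the fraction of required skills the candidate covers, so names, schools and demographics never enter the computation.

```python
# Toy sketch of skills-only matching (illustrative; not Stella.ai's code).
# The score depends only on skill overlap, nothing else.

def match_score(candidate_skills, required_skills):
    """Fraction of a job's required skills that the candidate has."""
    if not required_skills:
        return 0.0
    return len(candidate_skills & required_skills) / len(required_skills)

job = {"python", "sql", "etl"}
candidate_a = {"python", "sql", "spark"}  # covers 2 of 3 required skills
candidate_b = {"java", "excel"}           # covers 0 of 3

print(match_score(candidate_a, job))  # 0.666...
print(match_score(candidate_b, job))  # 0.0
```

The point of such a design is what the function *cannot* see: restricting the inputs restricts the bias, at least in principle.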
Entelo recently released Unbiased Sourcing Mode, a tool that further anonymizes hiring. The software allows recruiters to hide names, photos, schools, employment gaps and markers of someone’s age, as well as to replace gender-specific pronouns — all in the service of reducing various forms of discrimination.
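In spirit, this kind of anonymization amounts to redacting identifying fields and neutralizing gendered language before a recruiter sees a profile. The sketch below is an assumed, simplified version — the field names and pronoun rules are my own, not Entelo's implementation, and a word-level swap cannot resolve genuinely ambiguous pronouns.

```python
import re

# Toy résumé anonymizer (illustrative assumptions; not Entelo's software).
# A naive word map cannot disambiguate e.g. possessive vs. objective "her".
PRONOUNS = {"he": "they", "she": "they", "him": "them", "her": "them",
            "his": "their", "hers": "theirs"}

def neutralize_pronouns(text):
    """Replace gendered pronouns with neutral ones, preserving capitalization."""
    def swap(match):
        word = match.group(0)
        repl = PRONOUNS[word.lower()]
        return repl.capitalize() if word[0].isupper() else repl
    pattern = r"\b(" + "|".join(PRONOUNS) + r")\b"
    return re.sub(pattern, swap, text, flags=re.IGNORECASE)

def anonymize(profile):
    """Hide identifying fields and neutralize pronouns in the summary."""
    hidden = {"name", "photo_url", "school", "birth_year"}
    redacted = {k: ("[hidden]" if k in hidden else v) for k, v in profile.items()}
    redacted["summary"] = neutralize_pronouns(redacted["summary"])
    return redacted

profile = {"name": "Jane Doe", "school": "State University",
           "summary": "She led the team; the team supported her."}
print(anonymize(profile)["summary"])
# They led the team; the team supported them.
```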
AI is also being used to help develop internal talent. CorpU has formed a partnership with the University of Michigan’s Ross School of Business to build a 20-week online course that uses machine learning to identify high-potential employees. Those ranked highest aren’t usually the individuals who were already on the promotion track, Todd said, and often exhibit qualities that are overlooked during the recruitment process.
“Human decision-making is pretty awful,” said Solon Barocas, an assistant professor in Cornell’s information science department who studies fairness in machine learning. But we shouldn’t overestimate the neutrality of technology, either, he cautioned.
Barocas’ research has found that machine learning in hiring, much like its use in facial recognition, can result in unintentional discrimination. Algorithms can carry the implicit biases of those who programmed them. Or they can be skewed to favor certain qualities and skills that are overwhelmingly exhibited among a given data set.
“If the examples you’re using to train the system fail to include certain types of people, then the model you develop might be really bad at assessing those people,” Barocas explained.
Not all algorithms are created equal — and there’s disagreement within the AI community about which algorithms have the potential to make the hiring process fairer.
One type of machine learning relies on programmers to decide which qualities should be prioritized when looking at candidates. These “supervised” algorithms can be directed to scan for individuals who went to Ivy League universities or who exhibit certain qualities, such as extroversion.
“Unsupervised” algorithms determine on their own which data to prioritize. The machine makes its own inferences based on existing employees’ qualities and skills to determine those needed by future employees. If that sample only includes a homogeneous group of people, it won’t learn how to hire different types of individuals — even if they might do well in the job.
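The failure mode Barocas describes can be made concrete with a toy scorer — a deliberately simplified model of my own, not any vendor's system. If a model scores candidates by how closely they resemble the average of existing employees, and those employees are all cut from the same cloth, a differently shaped but stronger candidate loses:

```python
# Toy illustration (not a real vendor's model) of how a homogeneous training
# sample skews a similarity-based scorer: candidates are ranked by closeness
# to the centroid of current employees' feature vectors.

def centroid(vectors):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def similarity(a, b):
    """Negative squared distance: higher means more like the incumbents."""
    return -sum((x - y) ** 2 for x, y in zip(a, b))

# Hypothetical features: [attended_elite_school, extroversion, coding_skill]
employees = [[1, 0.9, 0.8], [1, 0.8, 0.7], [1, 0.9, 0.9]]  # homogeneous sample
model = centroid(employees)

fits_the_mold = [1, 0.9, 0.5]       # matches incumbents, weaker coder
different_background = [0, 0.3, 0.95]  # stronger coder, different profile

# The weaker coder outranks the stronger one purely by resemblance.
print(similarity(fits_the_mold, model) > similarity(different_background, model))  # True
```

Nothing in the code is malicious; the skew comes entirely from the sample it learned from, which is exactly Barocas' point.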
Companies can take measures to mitigate these forms of programmed bias. Pymetrics, an AI hiring start-up, has programmers audit its algorithm to see if it’s giving preference to any gender or ethnic group. Software that heavily considers ZIP Code, which strongly correlates with race, will likely have a bias against black candidates, for example. An audit can catch these prejudices and allow programmers to correct them.
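One concrete form such an audit can take — assumed here for illustration, not a description of Pymetrics' actual process — is comparing selection rates across groups. A common yardstick is the "four-fifths" rule of thumb from the EEOC's hiring guidelines: if one group is selected at less than 80% of the rate of the most-selected group, the outcome is flagged for possible adverse impact.

```python
# Minimal sketch of a selection-rate audit using the four-fifths rule of
# thumb (illustrative; not any specific company's audit pipeline).

def selection_rates(outcomes):
    """outcomes: list of (group, selected) pairs -> selection rate per group."""
    totals, picked = {}, {}
    for group, selected in outcomes:
        totals[group] = totals.get(group, 0) + 1
        picked[group] = picked.get(group, 0) + (1 if selected else 0)
    return {g: picked[g] / totals[g] for g in totals}

def adverse_impact(outcomes, threshold=0.8):
    """Flag groups whose rate falls below threshold * the top group's rate."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: r / top < threshold for g, r in rates.items()}

# Group A: 6 of 10 selected (rate 0.6). Group B: 3 of 10 (rate 0.3).
outcomes = ([("A", True)] * 6 + [("A", False)] * 4 +
            [("B", True)] * 3 + [("B", False)] * 7)
print(adverse_impact(outcomes))  # {'A': False, 'B': True} — B is flagged
```

A flag like this doesn't fix anything by itself, but it tells programmers where to look, which is the role the audit plays in the process described above.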
Stella also has humans monitoring the quality of its AI. “While no algorithm is ever guaranteed to be foolproof, I believe it is vastly better than humans,” Joffe said.
Barocas agrees that hiring with the help of AI is better than the status quo. The most responsible companies, however, are those that admit bias can’t be completely eliminated and tackle it head-on.
“We shouldn’t think of it as a silver bullet,” he cautioned.