
Most workers say they already sense AI in the job hiring process



The latest generation of artificial intelligence is officially old enough to stir human-centered business controversy, with companies like OpenAI and Microsoft embedded in the debate. Even at this milestone, many industries remain hesitant to test the AI waters to their fullest, including human resources.

For HR, fear of AI’s unknown repercussions, both in practice and in regulatory pushback, is causing angst. That fear hasn’t halted the promises attached to various AI applications, chief among them mitigating the longstanding bias in the hiring process.

Historical data is at the root of many AI hiring solutions, notes Jamie Viramontes, CEO and founder of Konnect, who formerly headed HR for brands like Forever 21, Chipotle and St. Francis Medical Center.

“We know there’s bias in the way that we’ve done things historically,” said Viramontes. Replaying past mistakes in a novel form, he suggests, could make the woes of hiring worse. Not only can human bias shape algorithmic bias, but research suggests the reverse is also true: two researchers at Deusto University in Spain, Lucía Vicente and Helena Matute, found that people can reproduce an AI system’s bias in their own real-world decisions.

Despite the risks, Viramontes believes in the potential for AI-enabled hiring to reduce bias in the process, and he’s not alone.

“Although AI systems may be biased in certain ways because of the data that they’re trained on, they don’t have personalities that display explicit prejudice or subconscious bias learned over time,” said Arun Sundararajan, a professor at NYU Stern School of Business studying AI’s impact on myriad facets of life and a member of the Artificial Intelligence Governance Alliance for the World Economic Forum.

These human biases tend to pose significant barriers to equitable hiring, he says, but adding AI to the equation gives humans the opportunity to reflect. “It will give the human pause if there’s a machine decision that’s different from theirs, and it makes it more likely that the human, even if they are the final decision maker, will question their own motivations for why they prefer one candidate over another,” Sundararajan said.


According to the 2023 Talent Index from AI-powered talent lifecycle management platform Beamery, 59% of people looking for work say they’ve noticed AI being used during the recruitment process. Half of respondents said they’ve used it for recruitment themselves.

While AI usage in the hiring process remains elementary in many cases (in the form of chatbots and scheduling assistants, for example), there’s no ignoring its presence — and the potential for bias to improve or worsen as a result.

The 2023 American Staffing Association Workforce Monitor reports that 34% of people see AI hiring tools as more prone to bias than humans alone. There are measures companies can take, experts say, to fight against this, but it requires a concerted effort from policymakers and corporate decision makers alike.

The U.S. Equal Employment Opportunity Commission published a four-year strategic enforcement plan beginning in fiscal year 2024 that includes AI’s impact on bias. Recognizing employers’ increasing use of this technology, the EEOC is focusing on how such systems target job advertisements, recruit applicants, or make or assist in hiring decisions “where such systems intentionally exclude or adversely impact protected groups.”

While government agencies continue on the road to firm directives, self-regulation is an interim solution. Though self-regulation lacks the enforcement power that would make such policies broadly effective, it is, to some, better than a free-for-all that could further marginalize groups of people who are already on the outskirts of equity (or marginalize entirely different groups based on imperfect outcomes we have yet to witness).

Hiring the right AI

As organizations assess AI vendors to assist them in the hiring process, there are questions they can ask to ensure they’re signing on with a trustworthy, knowledgeable vendor that’s actively interested in equitable outcomes for all.

With every organization having distinct recruiting needs, it’s important to ask the vendor if the algorithm can be customized to take advantage of internal data, Sundararajan says.

Another question Sundararajan suggests asking is: What measures are you taking to debias your algorithm? Chances are the technology provider will put you in touch with someone who will take you through their process of debiasing and show you the extent to which they’re paying attention to this issue, he says.

Suppressing protected attributes such as gender or race from the training data is not enough on its own, he cautions, because correlated features can reveal them anyway. “Machine learning systems can reconstruct these variables, even if you suppress them. Knowing that your vendor is sophisticated enough, for example, to even understand that problem and take care of it, that’s step one,” Sundararajan added. “But then, what are the more nuanced ways in which they’re debiasing their systems?”
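
To make that concern concrete, here is a minimal sketch of the kind of proxy-leakage check a buyer could run on an audit sample. The data, feature names and model here are hypothetical illustrations, not any vendor’s actual method: the idea is simply that if a basic classifier can recover the suppressed attribute from the remaining features, those features leak it.

```python
# Minimal sketch (hypothetical data): can a "suppressed" protected attribute
# be reconstructed from the features that remain in a hiring model?
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical applicant data: zip_code and college act as proxies because
# they are correlated with the protected attribute.
protected = rng.integers(0, 2, size=n)                  # e.g., a demographic group
zip_code = protected * 2 + rng.integers(0, 2, size=n)   # strongly correlated proxy
college = protected + rng.integers(0, 3, size=n)        # weakly correlated proxy
years_exp = rng.normal(8, 3, size=n)                    # unrelated feature

# The protected attribute is "suppressed": it never enters X.
X = np.column_stack([zip_code, college, years_exp])

# If a simple model predicts the suppressed attribute well from X, the
# remaining features leak it, and a downstream model can re-learn the bias.
leakage_auc = cross_val_score(
    LogisticRegression(max_iter=1000), X, protected, cv=5, scoring="roc_auc"
).mean()
print(f"Protected attribute recoverable with AUC {leakage_auc:.2f} (0.5 = no leakage)")
```

An AUC well above 0.5 signals that simply deleting the sensitive column has not removed the information, which is exactly the problem Sundararajan describes.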

Finally, he says to ask about the extent to which they use your data to improve the system’s performance for other clients. In other words, he said, “To what extent is your proprietary data proprietary to you?”

How HR should start experimenting

The machine learning process has three key components contributing to algorithmic bias, according to researcher Zhisheng Chen from the Nanjing University of Aeronautics and Astronautics in China: dataset construction, the engineer’s target formulation and feature selection. It’s in these components that algorithmic debiasing takes place, and technological tools like data blending, decoupling and differential testing can help the process.
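
Of those tools, differential testing is perhaps the easiest for an HR team to reason about: score an audit sample with the model and compare outcomes across groups. The sketch below assumes you already have model scores and self-reported group labels; the 0.5 screening threshold, the group labels and the four-fifths rule of thumb are illustrative assumptions, not legal guidance.

```python
# Minimal sketch of a differential-testing style outcome check on an audit
# sample (hypothetical scores and group labels).
import numpy as np

def selection_rates(scores, groups, threshold=0.5):
    """Share of candidates at or above the screening threshold, per group."""
    scores, groups = np.asarray(scores), np.asarray(groups)
    return {g: float((scores[groups == g] >= threshold).mean())
            for g in np.unique(groups)}

def disparate_impact_ratio(rates):
    """Lowest group selection rate divided by the highest (closer to 1 is better)."""
    return min(rates.values()) / max(rates.values())

# Hypothetical audit data: model scores for two applicant groups.
rng = np.random.default_rng(1)
scores = np.concatenate([rng.normal(0.55, 0.15, 500), rng.normal(0.48, 0.15, 500)])
groups = np.array(["A"] * 500 + ["B"] * 500)

rates = selection_rates(scores, groups)
ratio = disparate_impact_ratio(rates)
print(rates, f"ratio={ratio:.2f}",
      "flag for review" if ratio < 0.8 else "within the four-fifths rule of thumb")
```

A check like this does not settle whether a system is fair, but it gives recruiters a concrete number to question rather than a black box.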

On the human front, Viramontes said one key aspect is “having checks and balances from an HR perspective, but also having checks and balances between the HR team and the hiring manager.” This, he says, includes tools like unconscious bias training and proactive diversity, equity and inclusion efforts.

Sundararajan adds that when companies adopt AI tools, the recruiting team on the front lines needs to make material changes to how they hire. “Human beings have a tendency to trust their judgments over the judgment of a decision support system that’s been given to them,” he said. “How do we make sure that the top management’s vision of de-biasing hiring with the use of AI translates into actual process changes on the front lines so that the AI systems aren’t just acquired, they’re actually used?”

As HR professionals and hiring managers ease into AI with all of its risks and opportunities in mind, Viramontes suggests starting with lower-risk processes such as improving the health insurance enrollment process for employees.

“We know that there’s a huge reduction in turnover if employees are having a great experience with open enrollment,” said Viramontes. As teams have more wins in these areas, he adds, they may become more open to AI’s potential in hiring.
