
AI Is Not a Fail-Safe Solution to Hiring

Artificial intelligence (AI) is playing an ever-increasing role in the lives of ordinary Americans. This new technology has the potential to transform society like never before, but that potential has not yet been fully realized. In fact, new ethical questions and corresponding regulations are emerging that could slow the use of AI in the hiring process.

AI for HR

Perhaps you’ve dabbled in AI for your hiring process, or you’re already using it. If you aren’t, understand that there are new AI tools available that help streamline hiring. Software with advanced algorithms scans resumes, analyzes the text in them, finds social media profiles, and ranks applicants. Sounds great, right? In theory, yes, because it’s supposed to be unbiased. But is it?

As the argument goes, computers are cold and calculating, and will therefore be unbiased when making decisions. But this isn’t true, according to Rep. Suzanne Bonamici (D-OR): “A growing body of evidence suggests that left unchecked, digital tools can absorb and replicate systemic biases that are ingrained in the environment in which they are designed.”

Built-In Bias

Decisions made by algorithms are partially shaped by the people who designed those algorithms in the first place. If those designers have certain unconscious biases, those biases can be woven into the algorithms. One infamous example: many facial recognition systems, built primarily by teams of white and Asian men, struggle to recognize darker-skinned faces.
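For the technically curious, here is a deliberately simplified, hypothetical sketch in Python of how this happens in hiring. The data is entirely synthetic and the model is far simpler than any commercial screening tool; it only illustrates how a screener trained on past hiring decisions can absorb the bias baked into those decisions.

# Illustration only: a toy resume screener trained on biased historical data.
# All data is synthetic; real screening tools are far more complex.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Synthetic "resume" features: years of experience, plus a proxy attribute
# (e.g., attended a particular school) that correlates with a protected group.
experience = rng.normal(5, 2, n)
proxy = rng.integers(0, 2, n)  # 1 = historically favored group

# Historical hiring decisions that favored the proxy group regardless of skill.
hired = ((experience > 4) | (proxy == 1)).astype(int)

# Train a screener to imitate those past decisions.
X = np.column_stack([experience, proxy])
model = LogisticRegression().fit(X, hired)

# Two equally experienced candidates who differ only in the proxy attribute
# receive different scores: the model has absorbed the old bias.
candidates = np.array([[5.0, 1], [5.0, 0]])
print(model.predict_proba(candidates)[:, 1])

In this toy example, two candidates with identical experience are scored differently purely because of the proxy attribute, which is exactly the kind of disparate impact anti-discrimination law is concerned with.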

AI decision making, especially when HR departments use it to select applicants for hire, raises serious questions under anti-discrimination law. Certain systemic biases may be unknowingly built in. To make matters worse, researchers are finding that it is difficult for humans to understand why an AI made a particular decision.

“Often thousands of data points have been analyzed to evaluate candidates from social media sites, words in resumes, and other available data. Many systems operate as a black box, meaning vendors of algorithmic systems do not disclose how inputs lead to a decision,” said Jenny Yang, senior fellow at the Urban Institute.

Regulation in the Pipeline

Researchers are waving red flags at the use of AI in hiring and the discrimination that could result. Manish Raghavan, a doctoral student in computer science at Cornell, warned in a paper that corporations are protected by intellectual property laws. These laws allow companies to obfuscate how their algorithms function, which could shield them against claims of discrimination.

Activists are also taking notice. The Electronic Privacy Information Center, a public interest research organization, filed a petition this month with the Federal Trade Commission (FTC). The petition asked the FTC to investigate and consider regulations for the use of AI, biometric data, and facial recognition technology in the hiring process.

Takeaway

AI presents many new challenges for HR professionals. In the coming years, new regulations will likely be introduced to prevent systemic discrimination by AI against certain groups. Companies will likely adopt AI tools for the hiring process, but HR will need to vet those tools and ensure a fair hiring process for all applicants.

Melissa Marsh, SPHR, SHRM-SCP, is a human resources consultant and founder of HRinDemand, a human resources company in Reno, NV, offering expert guidance and easy-to-use tools to help small businesses with employment regulations, compliance, employee relations, and company growth.

Sources:

Scott, Robert C. “The Future of Work: Protecting Workers’ Civil Rights in the Digital Age.” Committee on Education & Labor, Subcommittee on Civil Rights & Human Services Hearing, 5 Feb. 2020, edlabor.house.gov/imo/media/doc/BonamiciOS020520201.pdf.

Simonite, Tom. “The Best Algorithms Still Struggle to Recognize Black Faces.” Wired, Conde Nast, 22 July 2019, www.wired.com/story/best-algorithms-struggle-recognize-black-faces-equally/.

Yang, Jenny. Testimony, Committee on Education and Labor, Subcommittee on Civil Rights and Human Services, 5 Feb. 2020, edlabor.house.gov/imo/media/doc/YangTestimony02052020.pdf.
