Legal Risks of Using Artificial Intelligence in Hiring

May 28, 2020 | By: Jean Kuei and Meaghan Mixon

New AI-powered tools are streamlining applicant screening and selection, but algorithms may perpetuate unlawful biases and lead to discrimination in hiring. Employers can take steps to reap the benefits while mitigating the risk.

Companies are increasingly relying on artificial intelligence to transform the ways in which they do business in the 21st century. Innovative AI technologies have streamlined procedures and reduced costs associated with processes that can easily lend themselves to automation. The shift toward AI will likely increase as the technology takes on more human-like qualities.

As employers begin to use new AI technologies encompassing predictive analytics and machine-learning algorithms to manage human resources functions that require an elevated level of judgment, they must ensure that these tools do not replicate and perpetuate existing inequities.

Specifically, manufacturers of AI technologies—and the organizations that use them—must not run afoul of landmark civil rights legislation, including the Equal Pay Act of 1963, Title VII of the Civil Rights Act of 1964, the Age Discrimination in Employment Act of 1967, Title I of the Americans with Disabilities Act of 1990, and the Genetic Information Nondiscrimination Act of 2008. Indeed, the Equal Employment Opportunity Commission (EEOC) has recently launched at least two investigations into claims that algorithms unlawfully discriminated against and screened out individuals during the recruitment process.

The Risks

Algorithms based on biased data or models. During the evaluation and screening process, algorithms rely on data drawn from previously successful candidates and current employees to refine their processes, and they seek correlations between data points regardless of whether those correlations are meaningful, job-related, or legally appropriate.

For example, consider a workforce primarily comprising young men. An algorithm may prioritize young, male applicants to reflect the employees in the organization’s current population. The algorithm’s attempts at mimicking the employer’s past selection habits may inadvertently perpetuate preexisting biases or disparities.
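As a concrete illustration, the Python sketch below (a minimal, hypothetical example: the data are synthetic and the feature names are invented, not drawn from any real system) trains a simple classifier on a hiring history skewed toward one group and shows that the model learns to score two otherwise-identical candidates differently based on group membership alone.

```python
# Illustrative sketch only: synthetic data and hypothetical features.
# Demonstrates how a model trained on a skewed hiring history can
# learn to favor the historically overrepresented group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
skill = rng.normal(0, 1, n)          # a genuinely job-related score
group = rng.integers(0, 2, n)        # 1 = historically favored group
# Historical "hired" labels reflect past bias: at the same skill
# level, the favored group was hired more often.
hired = (skill + 1.5 * group + rng.normal(0, 1, n)) > 1.0

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# Two applicants with identical skill but different group membership:
candidates = np.array([[0.5, 1], [0.5, 0]])
print(model.predict_proba(candidates)[:, 1])
# The first candidate scores higher solely because of group membership.
```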

Likewise, an algorithm might find that the model employee is named Michael and was born in May. Although the data may show a statistically significant correlation between those two data points and performance, such qualities are almost certainly not reliable predictors of future success in a particular job.

Further, removing protected categories such as race, gender, and age does not solve the problem. For instance, living closer to work may be correlated with retention, but geography and zip codes may be proxies for race given historical neighborhood segregation. Bias may also be perpetuated if an algorithm relies on an employee's socio-economic status, as certain protected groups may be over- or underrepresented in those categories.
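A simple audit can help surface such proxies before a model is trained. The sketch below (again synthetic and hypothetical; the region indicator is coded as a crude binary value for brevity) checks each candidate feature for correlation with a held-out protected attribute and flags likely proxies.

```python
# Illustrative sketch only: a first-pass proxy audit on synthetic data.
# Features strongly correlated with a protected attribute can
# reintroduce bias even when that attribute is excluded from the model.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
race = rng.integers(0, 2, n)  # protected attribute, held out of the model
# In this synthetic data, the region indicator matches race 90% of the
# time, mimicking historical neighborhood segregation.
zip_region = np.where(rng.random(n) < 0.9, race, 1 - race)
commute_minutes = rng.normal(30, 10, n)  # plausibly job-neutral

for name, values in {"zip_region": zip_region,
                     "commute_minutes": commute_minutes}.items():
    r = np.corrcoef(values, race)[0, 1]
    flag = "POSSIBLE PROXY" if abs(r) > 0.3 else "ok"
    print(f"{name}: corr with protected attribute = {r:+.2f} ({flag})")
```

A correlation screen like this is only a first pass; a proxy can also emerge from a combination of features that no single-variable check will catch.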

Inconsistent results. Algorithmic screening can be brittle: minor changes in the underlying data may lead to dramatically different results. Additionally, the algorithm's rapid, opaque processing of data often makes it difficult to understand how screening decisions are made and to conduct the manual reviews needed to refine the selection process and prevent bias-driven decisions.

Inaccurate AI methodologies. Where employers rely on video interviews or voice scans that measure a candidate’s facial expressions, tone of voice, or word choice, the AI may misinterpret certain regional and non-native accents or cultural expressions, potentially introducing bias into hiring decisions.

Mitigate the Risks

Despite the potential risks, AI-powered technologies can be beneficial so long as they are based on reliable data sets and models and are carefully monitored and evaluated. The following recommendations will help mitigate the risks.


Test against EEOC guidelines. The EEOC’s Uniform Guidelines on Employee Selection Procedures enumerate in detail the principles and criteria for assessing whether a test or screening process is discriminatory. The guidelines offer three key factors to evaluate the validity of a given test: criterion, content, and construct.

  • Criterion-related validity confirms that the selection procedure or test is predictive of or significantly correlated with job performance.
  • Content validity confirms that the selection procedure or test includes tasks representative of the position in question.
  • Construct validity confirms that the selection procedure or test identifies characteristics relevant to job performance.

The efficacy of any AI or algorithmic-based tool should, at a minimum, be tested against these guidelines before being implemented.
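One widely used first-pass check under the Uniform Guidelines is the “four-fifths rule”: a selection rate for any group that is less than four-fifths (80 percent) of the rate for the group with the highest selection rate is generally regarded as evidence of adverse impact. The Python sketch below is a minimal illustration of that rule; the group names and counts are hypothetical.

```python
# Minimal sketch of the Uniform Guidelines' "four-fifths rule."
# Hypothetical group names and counts; illustration only.
def four_fifths_check(outcomes):
    """outcomes: {group: (selected, applicants)} -> {group: (rate, ok)}"""
    rates = {g: sel / total for g, (sel, total) in outcomes.items()}
    top = max(rates.values())
    return {g: (rate, rate / top >= 0.8) for g, rate in rates.items()}

results = four_fifths_check({
    "group_a": (48, 100),  # 48% selection rate
    "group_b": (30, 100),  # 30% selection rate -> flagged (30/48 < 0.8)
})
for group, (rate, ok) in results.items():
    print(f"{group}: selection rate {rate:.0%}"
          f" ({'ok' if ok else 'POSSIBLE ADVERSE IMPACT'})")
```

The same check can be re-run on live selection data after deployment, which supports the continuous monitoring recommended below.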

Transparency, accountability, and testing. Employers should use robust and diverse data sets to build and test the algorithms they plan to use before deploying them. These data sets should emphasize job-related qualities that are causally related to previous applicants’ success in the position, while ignoring or deemphasizing those that are unrelated or would lend themselves to prohibited selections, like those based on age, gender, or race. Even after implementation, employers should continuously monitor and test their algorithms for bias or discriminatory patterns.

Seek legal counsel when implementing new AI. Because employers will ultimately be responsible for any discrimination in their hiring practices, they should engage legal counsel before implementing any AI in recruitment and hiring.

A Managed Approach

As the law continues to play catch-up with technology, employers often find themselves in uncharted territory when it comes to new AI-powered technologies, but this does not mean that they cannot reap the benefits associated with the lawful use of AI. When deployed well, AI can help streamline recruitment and hiring and reduce subjective decision-making.

Employers should take a managed approach to using AI: Explore new technologies while also taking the time to learn about and test them to ensure that they are based on accurate, reliable, and valid job-related data and models.

Jean Kuei

Jean Kuei is a partner at Pillsbury Winthrop Shaw Pittman LLP in Washington, DC.

Meaghan Mixon

Meaghan Mixon is an associate at Pillsbury Winthrop Shaw Pittman LLP in Washington, DC.