On May 18, 2023, the U.S. Equal Employment Opportunity Commission (EEOC) issued a new technical assistance document titled “Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964.”
This technical assistance is part of the EEOC’s 2021 agencywide initiative to ensure that the use of software such as artificial intelligence (AI), machine learning and other emerging technologies in hiring and other employment decisions complies with the federal civil rights laws enforced by the agency. The new guidance builds on the Uniform Guidelines on Employee Selection Procedures (UGESP) adopted by the EEOC in 1978, as well as guidance issued last year addressing the use of artificial intelligence in hiring under the Americans with Disabilities Act.
The technical assistance addresses the potential discriminatory impact of using algorithmic decision-making tools, defined as computer-based analysis of data that an employer relies on, in whole or in part, when making employment decisions. The guidance highlights the following examples of such software available to employers:
- resume scanners that prioritize applications using certain keywords;
- employee-monitoring software that rates employees on the basis of their keystrokes or other factors;
- virtual assistants or chatbots that ask job candidates about their qualifications and reject those who do not meet predefined requirements;
- video interviewing software that evaluates candidates based on their facial expressions and speech patterns; and
- testing software that provides “job fit” scores for applicants or employees regarding their personalities, aptitudes, cognitive skills or perceived “cultural fit” based on their performance on a game or on a more traditional test.
How can employers tell whether their algorithmic decision-making tools risk violating federal employment discrimination laws? According to the EEOC, a selection tool that produces a substantially lower selection rate for individuals sharing one or more protected characteristics may be indicative of discrimination. The technical guidance reminds employers that although AI systems have the appearance of objectivity, they are developed by humans and therefore are subject to the societal and personal biases that can create disparate outcomes in hiring.
The EEOC provides direction on how to evaluate the extent to which bias may permeate an employer’s automated process. The technical assistance states directly that the “four-fifths rule” can be applied to AI tools to help identify disparate impact. This test, described in detail in the UGESP, treats the selection rate for one group as “substantially” different from the selection rate for another group if the ratio of the two rates is less than four-fifths (80%). For example, suppose an employer’s hiring tool selects black applicants at a rate of 30% and white applicants at a rate of 60%. Because the ratio of those two rates (30/60, or 50%) is lower than four-fifths, the selection rate for black applicants is substantially different from the selection rate for white applicants and could be evidence of discrimination against black applicants.
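For illustration only, the arithmetic behind the four-fifths rule is simple enough to express in a few lines of code. The Python sketch below uses hypothetical applicant counts chosen to reproduce the 30% and 60% rates in the example above; it is not an EEOC tool, and it does nothing more than compute the two selection rates and their ratio.

```python
# Minimal sketch of the four-fifths rule arithmetic described above.
# Applicant counts are hypothetical, chosen to match the 30% vs. 60% example.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return selected / applicants

def four_fifths_ratio(rate_a: float, rate_b: float) -> float:
    """Ratio of the lower selection rate to the higher selection rate."""
    return min(rate_a, rate_b) / max(rate_a, rate_b)

black_rate = selection_rate(30, 100)   # 30 of 100 black applicants selected -> 0.30
white_rate = selection_rate(60, 100)   # 60 of 100 white applicants selected -> 0.60

ratio = four_fifths_ratio(black_rate, white_rate)  # 0.30 / 0.60 = 0.50
if ratio < 0.8:
    print(f"Ratio {ratio:.0%} is below 80%: possible adverse impact.")
else:
    print(f"Ratio {ratio:.0%} meets the four-fifths threshold.")
```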
The EEOC reiterates that the four-fifths rule is a good rule of thumb, but it quickly dashes employers’ hopes of calculating their way into compliance with a simple formula. In some situations, the four-fifths rule will not be a reasonable substitute for a test of statistical significance, for example where a large number of selections are made and the ratio may not reflect the tool’s actual impact on different protected groups. As with traditional selection processes, employers should subject AI tools to holistic review; compliance with any one test cannot disprove discriminatory outcomes. The EEOC recommends that employers conduct self-analyses and audits on an ongoing basis. It also makes clear that employers need not discard their existing AI tools, but they should make adjustments to remedy discriminatory selection rates. Because algorithms can be adjusted, failing to do so may expose an employer to liability.
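To see why the four-fifths rule is only a rule of thumb, consider the hypothetical sketch below. It applies a standard two-proportion z-test, one common test of statistical significance (not a method the EEOC prescribes), to a large applicant pool in which the ratio of selection rates clears 80% yet the difference between groups is still statistically significant. The counts are invented for illustration; real adverse-impact analyses should be designed with counsel and a qualified statistician.

```python
# Hypothetical sketch: with large applicant pools, a selection-rate ratio
# above 80% can still be statistically significant, which is one reason the
# four-fifths rule is not always a reasonable substitute for a significance test.
import math

def two_proportion_z_test(selected_a: int, n_a: int,
                          selected_b: int, n_b: int) -> tuple[float, float]:
    """Return the z statistic and two-sided p-value for the difference
    between two groups' selection rates (normal approximation)."""
    rate_a, rate_b = selected_a / n_a, selected_b / n_b
    pooled = (selected_a + selected_b) / (n_a + n_b)
    std_err = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (rate_a - rate_b) / std_err
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided tail probability
    return z, p_value

# 8,500 of 10,000 applicants selected in one group (85%) vs. 9,000 of 10,000
# in another (90%): the ratio is about 94%, comfortably above four-fifths,
# yet the five-point difference is highly statistically significant.
z, p = two_proportion_z_test(8500, 10_000, 9000, 10_000)
print(f"z = {z:.1f}, p-value = {p:.2g}")  # p-value far below the conventional 0.05
```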
Many employers may hope to sidestep these concerns by outsourcing AI hiring tools to third-party vendors. The technical assistance, however, states that employers may still be liable for their agents’ violations of federal employment discrimination laws. Employers therefore should take steps to determine whether vendors or developers are building and auditing their AI tools for discriminatory impact. The EEOC recommends asking vendors specifically whether they relied on the four-fifths rule, or on other court-approved standards such as statistical significance, when auditing their product.
Tips and Takeaways
The technical assistance urges employers to take a hands-on approach to auditing AI usage in their hiring processes. The following tips may aid employers in that task:
- Maintain human oversight of AI tools. Employers should ensure automated hiring processes are subject to consistent review, not just to confirm that these tools are providing accurate insights, but also to ensure that they are not reflecting the existing biases of the individuals who build and maintain them. Regular self-audits are crucial to preventing discriminatory hiring practices.
- Do not delegate compliance to AI vendors. Employers should perform due diligence around which AI tools they implement by asking vendors pointed questions about testing and audit practices with a focus on disparate impact. Employers also should review their commercial contracts with AI vendors to ensure that indemnities and other contractual allocation of risk are properly addressed.
- Continue organizational bias training. Both implicit and explicit bias training are essential to identifying potentially discriminatory practices and should form the foundation for building meaningful audit procedures for hiring practices, especially those involving automated decision-making tools.
For questions about how artificial intelligence presents both risks and opportunities for employers, contact the authors of this article.