Client Alert

EEOC Issues Guidance on Using AI in Compliance with the Americans with Disabilities Act

May 17, 2022

By Kenneth W. Gage, Kenneth M. Willner, & Dan Richards

Employer use of artificial intelligence continues to expand, as newer and more sophisticated tools enter the marketplace. Many offer great promise for more efficient and effective decision-making. Some, however, may present equal opportunity compliance risks. Because artificial intelligence tools often are used at scale, these risks can be significant.

On May 12, 2022, the Equal Employment Opportunity Commission (“EEOC”) issued a “technical assistance” document to help private employers comply with the Americans with Disabilities Act (“ADA”) when using software, algorithms, and artificial intelligence (collectively, “AI”) for hiring and other employment decisions. The Department of Justice issued similar guidance for state and local government employers. The EEOC’s technical assistance is part of its broader Artificial Intelligence and Algorithmic Fairness Initiative, which aims to ensure that employers’ use of AI complies with federal civil rights laws.

How the EEOC Says AI Can Violate the ADA

AI tools can violate the ADA, according to the EEOC, if (i) the employer fails to provide a reasonable accommodation necessary for an individual to be rated fairly and accurately by the tool, (ii) the tool screens out an individual with a disability even though the individual can do the job with a reasonable accommodation, or (iii) the tool violates the ADA’s restrictions on disability-related inquiries and medical examinations.

AI and Reasonable Accommodations

Employers can proactively tell individuals that an evaluation process uses AI and ask whether they will need reasonable accommodations. Even if the employer does not, the EEOC says, an individual who tells the employer that a medical condition makes it difficult to participate in the process has requested a reasonable accommodation. In that case, the EEOC says the employer must respond promptly, but may request supporting medical documentation if the medical condition is not obvious or already known. If the documentation shows that a disability might make the evaluation more difficult for the individual, then the employer must provide an “alternative testing format or a more accurate assessment,” unless doing so would involve “significant difficulty or expense,” i.e., an undue hardship, according to the EEOC. Employers must keep all medical information obtained in connection with the reasonable accommodation request confidential and store it separately from the applicant or personnel file.

The EEOC clarifies that the employer will be responsible for an outside vendor that develops or administers AI on the employer’s behalf “in many cases.” Where a software vendor administers and scores a pre-employment test on an employer’s behalf and fails to provide an accommodation in response to an individual’s assertion that a medical condition made the test difficult, the EEOC says, “the employer likely would be responsible even if it was unaware.”

AI and Screening Out

According to the EEOC, screening out occurs where (i) an AI tool prevents an individual from meeting, or lowers their performance on, job selection criteria, (ii) they lose the job as a result, and (iii) they could have performed the job’s essential functions with a reasonable accommodation. Consider a chatbot that rejects all applicants who tell the chatbot they have significant gaps in their employment history, even those whose disability caused the gap.

The EEOC also asserts that, when outside vendors and employers take steps to eliminate bias in AI tools to prevent adverse impact on the basis of race, sex, national origin, color, or religion, those steps may not resolve the disability “screen out” problem. Even an AI tool that has been “validated” (meaning it meets professional standards reflecting that the tool accurately measures or predicts a trait or characteristic important for the job) might screen out individuals who could perform well on the job with a reasonable accommodation. For example, a validated “gamified” assessment that uses video games to measure applicants’ memories could screen out a blind applicant with a good memory, the EEOC says. Or a personality test might rate someone with PTSD poorly because they have difficulty ignoring distractions, even though they could perform the job with a quiet workstation or noise-canceling headphones, according to the EEOC.

The EEOC suggests three inquiries for employers to make of outside vendors (or consider for their own AI tools):

  1. whether the vendor made the interface accessible to as many individuals with disabilities as possible;
  2. whether the vendor has alternative formats; and
  3. whether the vendor has determined that the tool does not disadvantage individuals with disabilities, for example by ensuring that the traits or characteristics it measures are not correlated with certain disabilities.

The EEOC also recommends that employers engage consulting experts. For example, an employer might consult a psychologist to ensure that a pre-employment test measuring personality traits does not screen out people with autism or with cognitive, intellectual, or mental-health-related disabilities who could do the job with reasonable accommodations.

The EEOC further explains that employers can reduce the chances of screening out individuals with disabilities by telling participants that reasonable accommodations are available to those with disabilities, providing clear instructions for requesting reasonable accommodations, and giving participants as much information as possible about the tool. Finally, the EEOC says employers should select AI tools that measure only those abilities and qualifications that are truly necessary for the job, and that measure them directly rather than by inference. Consider an AI tool used to select report writers, which the EEOC says should measure the ability to write reports, not applicants’ personalities. If it measures the latter, the tool might screen out an applicant with a disability who writes reports well but whose disability gives them a personality profile different from that of typically successful report writers.

AI and Disability-Related Inquiries and Medical Examinations

The EEOC posits that an AI tool might also violate the ADA by posing disability-related inquiries or conducting what qualifies as a medical examination before the employer makes a conditional offer of employment. The EEOC says this is so even if the applicant has no disability. The EEOC cautions employers against using AI tools that ask individuals questions likely to elicit information about disabilities or that seek information about physical or mental impairments or health.

The EEOC says, however, that not all AI tools asking for health-related information violate the ADA. For instance, a personality test asking whether an applicant is “described by friends as being ‘generally optimistic’” does not pose a disability-related inquiry even if that question could be related to some mental health diagnoses. Nevertheless, the EEOC says, if the AI tool goes the extra step of screening out an applicant with Major Depressive Disorder based on their response to that question, then it may violate the ADA.

The EEOC’s Promising Practices

Finally, the EEOC concludes by offering nine “promising practices” for employers to follow:

  1. Train staff to recognize and process requests for reasonable accommodations as quickly as possible;
  2. Develop or obtain alternative means of evaluating individuals in case the AI tool disadvantages those with disabilities;
  3. Require outside vendors to forward all requests for accommodations to the employer or agree to provide reasonable accommodations when required by the ADA;
  4. Use AI tools designed with as many different kinds of disabilities in mind as possible, and tools that have been user-tested;
  5. Inform participants that reasonable accommodations are available and explain how to request them;
  6. Describe the traits the AI tool assesses, the method it employs, and the variables or factors that might impact its ultimate rating;
  7. Ensure the AI tool only measures abilities or qualifications that are truly necessary for the job;
  8. Ensure that those abilities or qualifications are measured directly by the AI tool, rather than by way of characteristics or scores correlated with them; and
  9. Confirm the AI tool does not pose questions likely to elicit information about a disability or seek information about an individual’s physical or mental impairments or health (unless in relation to a request for a reasonable accommodation).

Conclusion

The technical assistance document is not law, and some of the positions it articulates are debatable. Still, the document reflects the EEOC’s view of the law, and it serves as a reminder that all employers should understand and manage the risks associated with using artificial intelligence tools for employee recruitment and selection.

For More Information

Kenneth W. Gage

Partner, Employment Law Department

Dan Richards

Associate, Employment Law Department
