New EEOC Guidance: The Use of Artificial Intelligence Can Discriminate Against Employees or Job Applicants with Disabilities

Posted: June 1, 2022 at 8:13 pm

As artificial intelligence works its way into nearly every corner of business and culture, government regulation is (perhaps too slowly) moving to build legal boundaries around its use.

On May 12, 2022, the Equal Employment Opportunity Commission issued a new, comprehensive technical assistance guidance entitled "The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees." The guidance covers a number of areas: it defines algorithms and artificial intelligence (AI); gives examples of how employers use AI; answers the question of employer liability for use of vendor AI tools; requires reasonable accommodation when deploying AI in this context; addresses the "screen out" problem of AI rejecting candidates who could otherwise qualify for the job with reasonable accommodation; limits disability-related and medical inquiries; promotes promising practices for employers, job applicants, and employees alike; and gives many specific examples of disability discrimination pitfalls in using AI tools.

Here are some primary takeaways from the new guidance:

Per the guidance: "A disability could have this [screen out] effect by, for example, reducing the accuracy of the assessment, creating special circumstances that have not been taken into account, or preventing the individual from participating in the assessment altogether."

Per the guidance: "An assessment includes disability-related inquiries if it asks job applicants or employees questions that are likely to elicit information about a disability or directly asks whether an applicant or employee is an individual with a disability. It qualifies as a medical examination if it seeks information about an individual's physical or mental impairments or health." An algorithmic decision-making tool that could be used to identify an applicant's medical conditions would violate these restrictions if it were administered prior to a conditional offer of employment.

Per the guidance: "[E]ven if a request for health-related information does not violate the ADA's restrictions on disability-related inquiries and medical examinations, it still might violate other parts of the ADA." For example, if a personality test asks questions about optimism, and if someone with Major Depressive Disorder (MDD) answers those questions negatively and loses an employment opportunity as a result, the test may screen out the applicant because of MDD.

There are a number of best practices employers can follow to manage the risk of using AI tools. The guidance calls them "Promising Practices." Primary points:

Per the guidance: "Examples of reasonable accommodations may include specialized equipment, alternative tests or testing formats, permission to work in a quiet setting, and exceptions to workplace policies."

With the increasing reliance on AI in the private employer sector, employers will have to expand their proactive risk management to control for the unintended consequences of this technology. The legal standards remain the same, but AI technology may push the envelope of compliance. In addition to making a good-faith compliance effort, employers should closely review other means of risk management, such as vendor contract terms and insurance coverage.

This article was prepared with the assistance of 2022 summer associate Ayah Housini.
