Machines are said to have artificial intelligence (AI) if they can interpret data, learn from it and use that knowledge to react and achieve specific goals.1 Among other things, this intelligence allows machines to complete mental tasks that typically only humans perform.
AI is all around us - we likely interact with it throughout the day without fully realizing it. Have you asked Siri to play your favorite music playlist as you settled in to read? Do you open your phone with Face ID? Have you researched a topic online lately and received an AI Overview of search results? Though AI has been around in one form or another for nearly 75 years, its influence in everyday life, particularly business operations and employment practices, is more prevalent than ever.
Over the last several years, absence management vendors and insurance carriers have started incorporating AI into their businesses to create efficiencies and support targeted absence management practices. Where many of these functions were previously considered back-office, AI is now undeniably more visible through chatbots in vendor portals, one-way text messages and workflow-based automated approvals.
While absence administration and claims processes are still primarily executed by humans, vendors are exploring how they can use AI to execute repetitive processing tasks as part of overall claims management. Their goal is to improve efficiency and automate certain tasks so claim examiners can spend their time on higher-value work, such as making claims decisions, thinking critically and holistically about claims, and interacting with claimants.
Common examples of how AI may be used by vendors in the claims process include the following:
Employers using AI to drive organizational efficiency and cost savings should consider the legal risks associated with these tools. In July 2023, the U.S. Senate introduced the No Robot Bosses Act, which aims to “protect and empower workers by preventing employers from relying exclusively on [AI] or bots to make employment decisions.” A companion bill, the Exploitative Workplace Surveillance and Technologies Task Force Act,3 would establish “an interagency taskforce on employer surveillance and workplace technologies” to execute this initiative.
Several state laws, such as Colorado’s Concerning Consumer Protections in Interactions with [AI] Systems Act and New York City’s Automated Employment Decision Tools law, may impact employers’ use of automated decision tools. Given the evolving AI regulatory environment and a complex web of federal, state and local laws, employers must stay informed about this intricate legal landscape.
Notably, employers cannot escape legal risk by using AI tools designed and/or administered by a third-party vendor. The Equal Employment Opportunity Commission (EEOC) has made it clear that “employers may be held responsible for the actions of their agents, which may include entities such as software vendors if the employer has given them authority to act on the employer’s behalf.”4
Employers are not shielded from liability simply because they played no role in creating or administering an AI tool. They could still be liable if the tool results in disparate impact or disparate treatment discrimination.5
In addition to its role in the claims process, AI is commonly leveraged in other aspects of employment, such as pre-employment screenings. Employers could be liable for a third-party vendor’s failure to provide a reasonable accommodation for a disabled applicant when administering and scoring a pre-employment test.
The EEOC states that “if an applicant were to tell the vendor that a medical condition made it difficult to take the test, which qualifies as a request for a reasonable accommodation and the vendor did not provide an accommodation required under the ADA, the employer likely would be responsible even if it was unaware that the applicant reported a problem to the vendor.”4
The EEOC suggests that employers that rely on vendors to develop or administer an algorithmic decision-making tool should ask the following questions to support the evaluation:
Before using AI tools, employers should consider taking the following actions:
As the use of AI expands in human resources and claims management functions, employers must keep a watchful eye on litigation and regulatory activity and how it may affect their businesses.
1 PBS. Crash Course: Artificial Intelligence. Public Broadcasting Service, Aug. 9, 2019. Retrieved from www.pbs.org/video/what-is-artificial-intelligence-1-hptal6/
2 IBM. What Is a Chatbot? Oct. 15, 2021. Retrieved from www.ibm.com/topics/chatbots
3 Congress.gov. S.2440, Exploitative Workplace Surveillance and Technologies Task Force Act of 2023. Retrieved from https://www.congress.gov/bill/118th-congress/senate-bill/2440
4 U.S. Equal Employment Opportunity Commission. Select Issues: Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964. May 18, 2023. Retrieved from www.eeoc.gov/laws/guidance/select-issues-assessing-adverse-impact-software-algorithms-and-artificial
5 U.S. Equal Employment Opportunity Commission. The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees. May 12, 2022. Retrieved from www.eeoc.gov/laws/guidance/americans-disabilities-act-and-use-software-algorithms-and-artificial-intelligence
6 American Bar Association, Business Law Today. Navigating the AI Employment Bias Maze: Legal Compliance Guidelines and Strategies. April 10, 2024. Retrieved from www.americanbar.org/groups/business_law/resources/businesslaw-today/2024-april/navigating-ai-employment-bias-maze/