
In days gone by, hiring new employees meant sifting through stacks of resumes and cover letters, conducting tedious, time-consuming interviews (initial and follow-up), and running traditional background and/or drug screens. The goal for distributors has remained the same: hiring the right candidate for an open position. Over the past decade, distributors addressed these inefficiencies by, for example, outsourcing parts of the process to third-party resources, especially for background checks.
Distributors then turned to social media, reviewing candidates’ profiles and posted content as part of the background review. Now the shift to artificial intelligence (AI) to streamline candidate searches and hiring decisions is becoming mainstream, but that efficiency, and the data checks behind it, create a new environment ripe for legal challenges.
Just because an algorithm or other AI technology is doing the heavy hiring lifting does not mean that employers can ignore laws at the federal, state and local levels aimed at curbing discrimination in hiring and employment practices, including, specifically, discrimination arising from the use of AI in evaluating new hires. The proliferation of AI tools, and employers’ reliance on the results such technology produces, have set off alarm bells among legislators and experts concerned that AI tools and the algorithmic decision-making that defines them can lead to discriminatory effects and outcomes for protected classes by improperly and illegally reinforcing biases and disparately excluding qualified candidates from hiring and other decisions.
Proliferation of State Legislation Governing Use of AI Hiring Technologies
As a result, a flurry of federal, state and municipal laws, regulations and ordinances has emerged, focused specifically on establishing anti-discrimination protections and requiring more transparency regarding an employer’s use of AI in hiring and employment decisions. For example, Illinois, Colorado and Utah have recently followed New York’s lead in imposing new requirements and limitations on employers that use AI in hiring new recruits.
These state-level efforts accelerated after the federal Equal Employment Opportunity Commission (EEOC) issued extensive guidance on the use of AI hiring technologies in May 2023. That guidance, much of which has been codified in state or municipal legislation, makes clear that employers are exposed to scrutiny for potential discrimination liability even when the discrimination is unintentional: “Even where an employer does not mean to discriminate, its use of a hiring technology may still lead to unlawful discrimination” that violates Title VII of the Civil Rights Act, other federal anti-discrimination laws such as the Americans with Disabilities Act (ADA) and the Age Discrimination in Employment Act of 1967 (ADEA), and equivalent state laws.
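The EEOC guidance also points employers to the long-standing “four-fifths rule” as a rough screen for adverse impact: if one group’s selection rate is less than 80% of the highest group’s rate, the tool warrants closer review. A minimal sketch of that arithmetic follows; all applicant counts are hypothetical, and the rule is a heuristic rather than a bright-line legal standard.

```python
# Minimal sketch of the "four-fifths rule" arithmetic discussed in the
# EEOC guidance. All applicant counts below are hypothetical.
advanced = {"group_a": 48, "group_b": 24}    # candidates the AI tool advanced
screened = {"group_a": 100, "group_b": 80}   # candidates the tool screened

rates = {g: advanced[g] / screened[g] for g in screened}
highest_rate = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / highest_rate
    status = "flag for adverse-impact review" if impact_ratio < 0.8 else "within rule of thumb"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {impact_ratio:.2f} -> {status}")
```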
How AI Can Lead to Unlawful Employment Discrimination
Multiple AI touchpoints during the hiring process can lead to prohibited discrimination, whether as direct disparate treatment or indirect disparate impact:
Bias in Training Data
AI algorithms learn from historical data, and if that data contains biases, the AI will replicate and perpetuate them. For instance, if an organization’s historical hiring data shows a preference for male candidates for leadership roles, an AI model trained on that data may favor men over equally qualified women. This can result in discriminatory outcomes, even if the AI system appears neutral.
A well-documented example is Amazon’s experimental AI recruiting tool, which systematically downgraded resumes containing words like “women’s,” as in “women’s chess club captain.” The bias stemmed from the training data: a decade of resumes dominated by male applicants, reflecting existing gender imbalances.
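To see the mechanism in miniature, the sketch below (all data synthetic and deliberately exaggerated) trains a simple classifier on historical hiring decisions that favored one group; the model then scores two otherwise identical candidates differently, with no explicit instruction to discriminate.

```python
# Toy illustration with synthetic, exaggerated data: a model trained on
# biased historical hiring decisions reproduces that bias on new candidates.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
experience = rng.normal(5, 2, n)        # years of experience
group = rng.integers(0, 2, n)           # 1 = historically favored group
# Historical "hired" labels reflect qualifications plus past bias.
hired = ((experience + 2.0 * group + rng.normal(0, 1, n)) > 6).astype(int)

model = LogisticRegression().fit(np.column_stack([experience, group]), hired)

# Two equally qualified candidates who differ only in group membership:
candidates = np.array([[5.0, 1], [5.0, 0]])
print(model.predict_proba(candidates)[:, 1])  # the favored-group candidate scores higher
```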
Algorithmic Discrimination
Even with unbiased training data, AI algorithms may independently develop patterns that result in discrimination. For instance, AI might correlate certain variables, such as a candidate’s zip code, with job performance. Because of residential segregation, however, zip codes can serve as a proxy for race, age, disability, socioeconomic status and/or ethnicity. As a result, an AI system might systematically disadvantage candidates from underrepresented or low-income communities based on these impermissible proxies, perpetuating structural inequalities.
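A toy sketch of this proxy effect follows (synthetic data, hypothetical zip-code indicator). The protected attribute is never given to the model; it is used only to generate the data and to audit the outcome. Because zip code correlates with group membership, the model disadvantages one group anyway.

```python
# Toy proxy illustration with synthetic data: the model never sees the
# protected attribute, but a correlated zip-code feature stands in for it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
group = rng.integers(0, 2, n)    # protected class; hidden from the model
# Residential segregation: group 1 mostly lives in zip area A.
zip_a = (rng.random(n) < np.where(group == 1, 0.8, 0.2)).astype(int)
skill = rng.normal(0, 1, n)
# Biased historical outcomes: group 1 was favored.
hired = ((skill + 1.5 * group + rng.normal(0, 1, n)) > 1).astype(int)

# The model is trained only on skill and zip code.
model = LogisticRegression().fit(np.column_stack([skill, zip_a]), hired)
print("zip-code coefficient:", model.coef_[0][1])  # positive: zip acts as a group proxy

# Audit selection rates by the (hidden) protected attribute:
predicted = model.predict(np.column_stack([skill, zip_a]))
for g in (0, 1):
    print(f"group {g}: selection rate {predicted[group == g].mean():.0%}")
```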
Another problem with algorithm-based decision-making tools is that they often “screen out” candidates with disabilities before those candidates can request, or the employer can provide, an ADA-required reasonable accommodation that would put them on an even footing in whatever assessments, tests or evaluations the employer is using.
Omission of Diverse Variables
AI hiring models often prioritize measurable qualities like education level, work experience or specific skill sets. While these factors seem neutral, their overemphasis can inadvertently exclude candidates from non-traditional backgrounds. For example, candidates who gained experience through unconventional means, such as freelancing or volunteering, may be undervalued compared to those with formal degrees or professional experience. This exclusionary bias disproportionately impacts individuals from marginalized groups who may lack access to traditional career paths.
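A minimal sketch of this exclusionary effect (the candidate records and screening rules are hypothetical): a filter keyed only to formal credentials silently drops a candidate whose equivalent experience came through freelancing and volunteering, while a broader rule surfaces that candidate for review.

```python
# Hypothetical screening rules: a credentials-only filter excludes
# non-traditional candidates that a broader rule would surface.
candidates = [
    {"name": "A", "has_degree": True,  "freelance_years": 0, "volunteer_years": 0},
    {"name": "B", "has_degree": False, "freelance_years": 6, "volunteer_years": 2},
]

narrow = [c for c in candidates if c["has_degree"]]
broader = [c for c in candidates
           if c["has_degree"] or c["freelance_years"] + c["volunteer_years"] >= 4]

print("narrow screen: ", [c["name"] for c in narrow])    # candidate B never reaches review
print("broader screen:", [c["name"] for c in broader])   # equivalent experience counts
```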
Facial Recognition and Voice Analysis
AI tools incorporating facial recognition or voice analysis for assessing candidate suitability have also raised discrimination concerns. Studies show that facial recognition software often exhibits racial, gender, age and disability biases, such as inaccurately assessing individuals with darker skin tones, women, or candidates it perceives as older or more expensive (in terms of benefit costs). Similarly, voice analysis tools may penalize candidates with accents or speech patterns that deviate from the majority group, disproportionately impacting non-native speakers or people from different ethnic backgrounds.
Lack of Transparency
AI-driven hiring systems are often opaque, making it difficult to identify when discrimination occurs. Distributors may not understand the decision-making processes of these tools, and candidates rarely have insight into why their applications were rejected. This lack of transparency can prevent affected individuals from challenging discriminatory practices, and it can breed suspicion that sends qualified but rejected candidates to legal counsel.
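Transparency is easier to deliver with simpler, auditable scoring. As one illustration, the sketch below (feature names and weights are hypothetical) records how each input contributed to a candidate’s score under a linear model, the kind of breakdown that can support the candidate-facing explanations some of the new laws contemplate.

```python
# Minimal sketch of a per-candidate explanation for a simple linear scoring
# model. Feature names and weights are hypothetical.
import numpy as np

feature_names = ["years_experience", "skills_match", "assessment_score"]
weights = np.array([0.4, 0.9, 0.6])      # hypothetical trained coefficients
candidate = np.array([3.0, 0.2, 0.5])    # one candidate's scaled inputs

contributions = weights * candidate
for name, value in sorted(zip(feature_names, contributions),
                          key=lambda pair: -abs(pair[1])):
    print(f"{name}: {value:+.2f}")
print(f"total score: {contributions.sum():+.2f}")
```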
What’s Next for Distributors That Use AI in Hiring
As is so often the case with new and rapidly evolving technology, the law is playing a fragmented, piecemeal game of catch-up on the use of AI in hiring and employment practices. That creates a host of compliance and operational challenges for distributors. Distributors may need to invest in regular audits, bias testing and legal reviews of AI systems to ensure compliance; document how AI systems are trained, deployed and monitored; or otherwise adapt their hiring processes to new legal requirements, such as providing candidates with explanations of AI-driven decisions. Contract issues may emerge as well, such as indemnification by third parties that provide services using AI systems. Training key, top-level HR staff may well be warranted, especially to the extent that distributors use third parties or themselves rely more heavily on AI-generated models to evaluate potential new-hire (or even promotional) candidates in the organization.
If you have questions or concerns about your company’s use of AI in its hiring and employment practices, or any other issues covered by this article, please contact me at 312-840-7004 or [email protected].
The information contained in this article is provided for informational purposes only and should not be construed as legal advice on any subject matter. The author expressly disclaims all liability with respect to actions taken or not taken based on any or all of the contents of this article.