The Connection Between Algorithmic Bias and Employment Discrimination
Algorithms are step-by-step procedures built on mathematics and logic. Computers use algorithms to make logical, math-based decisions, seemingly without the risk of human error or bias. However, computers make predictions based on the data and algorithms they are given, so when fed incorrect or incomplete data, they can make errors or biased decisions too.
In human resources and recruitment, hiring algorithms can be an efficient, quick and unprejudiced way to search for job candidates. However, just because a computer can reach a decision faster does not mean it has reached a fair one. When companies rely on AI to perform human tasks, employees (or potential employees) may experience discrimination without their knowledge.
In 2018, Amazon learned this the hard way with an AI recruiting tool it had built. The algorithm was designed to identify top talent for the company, but the project was scrapped after developers realized the AI was downgrading and penalizing resumes that included the word “women’s.” It even downranked resumes containing references to two specific women’s colleges.
The reason? Algorithmic bias. According to Reuters, this happened because “Amazon’s computer models were trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period. Most came from men, a reflection of male dominance across the tech industry.”
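To make the mechanics concrete, here is a minimal, hypothetical sketch of the same failure mode. This is not Amazon’s actual system; the toy resumes and hiring labels below are invented to show how a text classifier trained on historically skewed outcomes can learn to penalize a word like “women’s”:

```python
# Hypothetical illustration of biased training data producing a biased model.
# This is a toy sketch, NOT Amazon's actual recruiting system.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Invented "historical" resumes: the positive (hired) examples skew male
# because most past applicants were men, mirroring the pattern Reuters
# described.
resumes = [
    "captain of men's chess club, python developer",
    "men's soccer team, software engineer",
    "python developer, systems experience",
    "captain of women's chess club, python developer",
    "women's soccer team, software engineer",
]
hired = [1, 1, 1, 0, 0]  # biased historical outcomes, not true ability

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The model assigns a negative weight to the token "women" even though
# gender says nothing about job performance.
weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
print(f"learned weight for 'women': {weights['women']:.2f}")  # negative
```

Notice that the model is never told anyone’s gender. It simply learns that the word “women” correlates with past rejections, which is exactly the pattern Reuters described.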
Although Amazon said the tool was never used by its recruiters to evaluate candidates, Reuters also reported that the company “did not dispute that recruiters looked at the recommendations generated by the recruiting engine.”
How does algorithmic bias happen?
According to Jenny R. Yang of the Urban Institute, bias can enter AI systems in one (or all) of three ways:
- Biased data, as in the Amazon recruiting tool. The historical resumes it learned from came primarily from men, which taught it to downrank women.
- Biased variables, which can reflect developer blind spots. One example of this is using a zip code as a proxy for race (see the sketch after this list).
- Biased decisions, where humans may misinterpret or misuse an AI’s decisions, leading to biased or discriminatory decision-making.
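The zip-code example is worth seeing in miniature. Here is a small, hypothetical sketch (the zip codes and historical callback decisions are invented for illustration) of how a model that never sees race can still produce racially disparate results when zip code stands in for it:

```python
# Hypothetical illustration of a proxy variable in a hiring model.
# The zip codes and callback history below are invented for illustration.
from sklearn.linear_model import LogisticRegression

# Historical interview callbacks: race was never a column, but in this toy
# city, zip code 1 is predominantly one racial group and zip code 0 another,
# and past (biased) decisions favored applicants from zip code 0.
zip_codes = [[0], [0], [0], [0], [1], [1], [1], [1]]
callback  = [ 1,   1,   1,   0,   0,   0,   0,   1 ]

model = LogisticRegression().fit(zip_codes, callback)

# The model "knows" nothing about race, yet applicants from zip code 1
# inherit the disadvantage baked into the historical outcomes.
for z in (0, 1):
    p = model.predict_proba([[z]])[0][1]
    print(f"zip code {z}: callback probability {p:.2f}")
```

Because neighborhoods are often racially segregated, a zip code can carry racial information even when race is never collected. Removing the race column, in other words, does not remove the bias.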
Often, the employer does not even own the proprietary system making these decisions; the vendor of the algorithm software may not allow the employer access to the code at all.
“Faster is not always better”
The Center for Democracy and Technology recently released a report discussing employers’ responsibilities when using algorithm-based hiring tools. They note that employers can face liability if they fail to make popular algorithm-based hiring tests accessible to job applicants with disabilities, including:
- Recorded video interviews
- Personality tests
- Gamified aptitude tests
Employers must provide all applicants with reasonable accommodations. The Center also specifies that employers must use only selection criteria that are relevant and necessary to the essential duties of the job.
If you have experienced employment discrimination, the attorneys at McNicholas & McNicholas would like to hear from you. We protect the rights of clients in Los Angeles and across the state. To schedule your free case evaluation with an experienced attorney, call 310-474-1582, or reach out to us through our contact page to tell us your story.
Please note that this blog is not to be construed as legal advice. Because every case is fact-specific, you should consult directly with an attorney to obtain legal advice specific to your situation.
With more than 25 years’ experience as a trial lawyer, Partner Patrick McNicholas exclusively represents victims in personal injury, product liability, sexual assault and other consumer-oriented matters, such as civil rights, aviation disasters and class actions. Learn more about his professional background here.