AI Bias Is the Newest Hurdle for Black Job Applicants and Their Families

Hiring bias isn't a new challenge for job seekers of color, but AI is, and it turns out its discrimination is automated, too.

By Kimanzi Constable
Updated on April 29, 2023
Fact checked by Karen Cilli

Unemployment is at its lowest in 54 years, but technological advances in hiring still threaten Black applicants' ability to compete in the job market. For Black families, who have faced a history of bias and discrimination when applying for jobs, narrowing the wealth gap and chipping away at systemic inequality is closely tied to being competitive in the workforce. Hiring discrimination gets in the way of that. Applicants with "Black-sounding" names are still less likely to get interviews, according to data reported by Bloomberg.

Bias in hiring is an issue that needs to be addressed, and corporations throughout the U.S. have tried different approaches. Products like ChatGPT and self-driving cars are examples of the kinds of artificial intelligence that have taken hold in recent years, and AI has likewise become a prominent part of the hiring process. But even though AI recruitment tools were created to eliminate gender and racial bias, research from the University of Cambridge shows that they may instead perpetuate both. These tools pick up and mimic the effects of systemic inequality, widening the career and wealth gap for people of color. The question then becomes: when will Black and Brown job seekers have an equal opportunity if they're eliminated before they even start?

AI Bias Mimics a History of Inequality

"Bias in AI is not only possible, but it is one of the main challenges we have today. The key to solving the problem? Diversity," says Alberto Betella, co-founder at RSS.com. Betella holds a Ph.D.
in Emotion AI and is the former CTO at Telefonica's Alpha Moonshot Factory, part of a Fortune Global 500 technology company.

"The main reason behind an AI bias is that, like humans, AI needs to be trained. AI needs to be taught what to do with examples—these examples are called 'training sets.' The problem is that humans are the ones who select these training sets, so their bias will unconsciously be reflected in the selection process," says Betella.

AI bias mimics a history of prejudice toward people of color. Because AI is trained on this historical bias, it keeps strong candidates of color from even having an opportunity to interview for jobs. Betella notes many examples of how bias in AI is a programmed behavior.

"One of the most popular examples of AI bias was the Google Photos racist blunder, where a Black couple was mistakenly labeled by the AI as being 'gorillas.' Many examples of similar incidents can be found," says Betella.

Two professors at the University of Cambridge's Centre for Gender Studies studied several companies that offer AI-powered recruitment tools. These tools claim to eliminate bias by hiding candidates' names, genders, and other identifiers. The researchers argue that the tools may instead promote prejudice in hiring because they reproduce cultural assumptions about the "ideal candidate," which has historically been a white or European male. The AI tools operate on past company data and choose the candidates most similar to current employees, transferring the effects of systemic inequality to the technology itself.

An Ethical Alternative to Traditional AI

"Bias in AI gave rise to a set of moral principles and practices often referred to as Ethical AI. One of the best practices that Ethical AI is promoting is diversity as the key to avoiding AI bias," says Betella.
"Following this principle, a team working on cutting-edge AI should be as diverse as possible in terms of socio-demographic factors, including (but not limited to) age, gender, ethnicity, education, and sexual orientation. The lack of diversity will inevitably lead to bias. If the engineering team working on an AI algorithm has similar socio-demographic traits, they will create bias without being aware," says Betella.

Betella notes that a diverse team is key to creating more inclusive training sets, allowing the AI to learn the task at hand while drastically reducing, and possibly removing, AI bias. Diversity and inclusion are the most critical factors in developing next-generation AI tools. Companies should apply Ethical AI principles and practices when building and managing AI recruitment tools. Doing so can eliminate some bias and give candidates of color a fighting chance.

Eliminating Bias in Hiring Affects Black Families

In 2017, Amazon said it would stop using AI recruiting tools to review resumes after finding that the AI strongly discriminated against women. The computer models had been trained on resumes submitted to Amazon over the previous ten years. AI recruitment tools do what they're programmed to do; it's the underlying hiring bias that needs to be eliminated.

In a world full of highly qualified candidates of color, what we're looking for is an opportunity to showcase our skills and talents. The University of Cambridge research shows that change is still needed. The systems that are supposed to eliminate bias and racism may instead further it, creating more systemic inequality for the African American community. The hope is that this research brings a realization that we still have a long way to go in eliminating bias from the hiring process.

Sources

Parents uses only high-quality sources, including peer-reviewed studies, to support the facts within our articles.
Read our editorial process to learn more about how we fact-check and keep our content accurate, reliable, and trustworthy.

Does AI Debias Recruitment? Race, Gender, and AI's "Eradication of Difference". Philosophy & Technology. 2022.