The rise of artificial intelligence (AI) is ushering in a new era that holds a wealth of possibilities. Generative AI platforms like ChatGPT have captured the public’s fascination, but the growth of AI also brings cause for concern. The sinister side of AI deserves closer interrogation and examination.
Anti-Blackness can be thought of as “beliefs, attitudes, actions, practices, and behaviors of individuals and institutions that devalue, minimize, and marginalize the full participation of Black people.” There are several documented examples of anti-Blackness in both our algorithms and our AI. Despite the technology’s vast potential, the anti-Black bias baked into AI systems must be reckoned with.
A 2021 report by the University of Pennsylvania Law School Policy Lab on AI and Implicit Bias sought to investigate the role that bias plays in recruiting and hiring platforms. The report found that among surveyed individuals aged 18-40, there were considerable fears about stereotype threat on recruiting platforms. Stereotype threat is the phenomenon in which individuals feel they are being stereotyped based on an identity they hold, such as their race, age, or religion.
The aforementioned report identified several concerns that young professionals have when it comes to AI. For example, AI systems are programmed to identify certain keywords in a job candidate’s resume, but oftentimes these keywords are designed with white applicants in mind. One respondent shared, “if the AI’s concept of a ‘good’ resume is built using references to white resumes, folks who played golf or field hockey would have an advantage over those of us who preferred track or basketball.” Some respondents also worried that having a historically Black college or university (HBCU) listed on a resume or applicant profile could cause AI to disqualify a job candidate.
There has been a lot of conversation about the anti-Blackness in facial recognition technology. Nijeer Parks was accused of stealing and attempting to hit a police officer with a car in 2019 after facial recognition software identified him as the culprit, even though he was 30 miles away at the time the incident occurred. In 2022, Randal Reid spent almost a week in jail after facial recognition technology falsely linked him to thefts in a state he had never visited. Marketplace reported in April that of the five known cases of wrongful arrests made based on facial recognition technology, all of the victims have been Black men.
Artificial intelligence also comes in the form of our favorite beauty filters on social media. Many of the popular filters available on apps like Instagram and TikTok cater to white beauty standards, creating lighter-colored eyes, lighter skin and thinner noses. “When I’ve used the darker filters, my pictures come out lighter,” shared professor Eli Joseph. A 2021 MIT Technology Review article indicated that these beauty filters can also perpetuate colorism.
There has been an increase in the usage of platforms like Lensa, DALL-E, and Canva’s Magic Edit tool to edit photos and generate new images for professional and personal use, but these trendy platforms are not without their issues. “I downloaded [an AI platform] to provide myself different options to post to LinkedIn [for] a recent panel I participated in. The AI app did fade out my braids with the filter around the top. My skin was smoothed, brightened and cleared…it would not recognize any of my selfies, which was very frustrating,” shared non-profit founder Nitiya Walker.
“I recently used ChatGPT and other forms of AI to do a deeper dive on existing literature and statistics of African American women and found a dearth in information,” shared Mea Boykins, who is a Ph.D. student at the University of the West Indies. “Of course, the AI forewarned that it could not provide specific results and would only share general responses.” Users should understand that information received on generative AI platforms must be taken with a grain of salt. A recently published study found that a few prompts could easily elicit overtly racist results from ChatGPT. In the study, when researchers assigned ChatGPT a persona, like that of the late boxer Muhammad Ali, the model produced significantly more toxic output, including “incorrect stereotypes, harmful dialogue, and hurtful opinions.”
Equity and anti-racism advisor Hannah Naomi Jones shared a recent experience she had with AI in an Instagram post. “I am just going to say what I had been saying for 10 years. AI is anti-black…after several attempts, all my AI photos kept portraying me as a white woman and an Asian woman…all the photos lightened my skin and eyes and made my nose smaller. Racism in AI, bias, code and algorithms is inevitable…if we haven’t fixed our issue with race on a systemic and behavioral level and there’s a history of erasure then machines will learn what we have taught them. The only hope is equity in programming by human anti-racist scientist.”
To overcome the anti-Blackness that is rampant within AI, we must ensure that those programming AI systems are operating from an anti-racist and anti-oppressive lens. Part of the problem may lie in the fact that there are so few Black people in AI; one 2021 report indicated that only 2.4% of U.S. residents who graduated with a Ph.D. in AI were Black. The formation of more AI ethics committees may be a vital part of addressing the anti-Blackness inherent in AI. As usage increases, the public must remain vigilant and understand the pitfalls that accompany these new technologies.