AI could be the enemy of social progress

As artificial intelligence rises, we must be conscious of systemic human biases.

By EMI GUZMAN

As artificial intelligence becomes more developed, it holds the potential to become a powerful tool for enhancing daily life. But with great power comes great responsibility: AI developers need to start considering the negative implications their software can have for social progress.

The term “AI Revolution” has been coined to describe the rise of AI technology, which has garnered comparisons to the Industrial Revolution.

“There will be a big impact on jobs and that impact could be as big as the Industrial Revolution was,” said Lord Patrick Vallance, the UK minister of state for science, to the House of Commons’ Science, Innovation and Technology Committee last year.

The societal implications of AI are concerning because the proliferation of AI is the capitalist’s wet dream. Writing in The New Yorker, Gideon Lewis-Kraus cautions that Silicon Valley tech companies are consolidating power “on a scale and at a pace that is both unprecedented in human history.”

AI companies rely on underpaid gig workers like data labelers, delivery drivers and content moderators. AI researchers Adrienne Williams, Milagros Miceli and Timnit Gebru wrote in an essay published in Noema Magazine, “unlike the ‘AI researchers’ paid six-figure salaries in Silicon Valley corporations, these exploited workers are often recruited out of impoverished populations and paid as little as $1.46/hour after tax.”

In this era of radical technological growth, we must beware of the over-commodification of AI funneling ever more money into the hands of ever fewer members of society.

Government use of AI is also a cause for concern. Across the United States, AI software is being used in courtrooms to inform criminal sentencing. Correctional Offender Management Profiling for Alternative Sanctions, or COMPAS, is a software tool that predicts recidivism, the likelihood that a person will commit another crime.

A ProPublica study led by Julia Angwin found that COMPAS is remarkably unreliable in forecasting violent crime: only 20% of the people predicted to commit violent crimes actually went on to do so.

In an interview with NPR, internet scholar and UCLA professor Safiya Noble explained how the information fed into AI creates inherent biases.

“What is used to determine these kinds of predictive AIs are things like histories of arrests in a certain zip code,” Noble said. “So if you live in a zip code that has been overpoliced historically, you are going to have overarresting. And we know that the overpolicing and the overarresting happens in Black and Latino communities.”

Data from Angwin’s study revealed that Black defendants were twice as likely as white defendants to be misclassified as a higher risk of violent recidivism, while white recidivists were misclassified as low risk 63.2% more often than Black recidivists.
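
To make those disparity figures concrete, here is a minimal sketch, in Python and with entirely hypothetical counts rather than the study’s data, of how such group-level error rates are typically computed:

# Hypothetical confusion-matrix counts per group; these numbers are
# illustrative placeholders, NOT data from Angwin's study.
# "high"/"low" = risk label assigned; "re"/"no" = whether the person reoffended.
counts = {
    "Group A": {"high_re": 20, "high_no": 80, "low_re": 30, "low_no": 170},
    "Group B": {"high_re": 20, "high_no": 40, "low_re": 60, "low_no": 180},
}

for group, c in counts.items():
    # False positive rate: non-reoffenders wrongly labeled high risk.
    fpr = c["high_no"] / (c["high_no"] + c["low_no"])
    # False negative rate: reoffenders wrongly labeled low risk.
    fnr = c["low_re"] / (c["low_re"] + c["high_re"])
    # Precision: the share of high-risk labels that turned out correct;
    # the "only 20% went on to do so" figure is this quantity.
    precision = c["high_re"] / (c["high_re"] + c["high_no"])
    print(f"{group}: FPR {fpr:.1%}, FNR {fnr:.1%}, precision {precision:.1%}")

# Claims like "twice as likely to be misclassified as higher risk" compare
# the two groups' FPRs; "misclassified as low risk X% more often" compares FNRs.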

If courts base sentencing decisions on inaccurate, racially biased, AI-generated recidivism predictions, that is institutionalized racism. The use of these kinds of computational tools in institutional decisions creates an urgent need for regulation.

On an interpersonal level, AI products are becoming a normal part of everyday life.

One might use an AI tool to generate images for a presentation or try an AI filter that transforms ordinary photos into artistic portraits. While AI cat memes and trending AI Disney filters may seem harmless, it is important to consider that any image produced or modified by AI can reinforce problematic and noninclusive social narratives.

Popular AI-powered TikTok filters have been criticized for “whitewashing” facial features, removing same-gender partners from photos and altering users’ bodies to look thinner.

AI-powered filters are changing how we look, and even who our partners appear to be, to conform to noninclusive narratives. Without intervention, AI has the power to teach impressionable audiences problematic ideas about beauty, sexuality and more.

The Washington Post revealed that Stable Diffusion XL, an AI image generator, also falls back on familiar tropes. Prompted to create photos of a person playing soccer, it generated images primarily of darker-skinned male athletes; prompted to create photos of a person cleaning, it generated images only of women.

Inspired by The Post, I decided to test the DeepAI Image Generator myself. Without further specification, the prompt “Asian person” generated images only of people with East Asian features. Similarly, the prompt “couple” produced images of only heterosexual couples.
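
Readers can run a rough version of this kind of prompt audit themselves. The sketch below is a non-authoritative illustration: it assumes the open-source diffusers library and the publicly released Stable Diffusion XL base model (the system The Post examined), the prompts and sample size are placeholders, and judging the outputs for skew is left to manual review.

import torch
from diffusers import StableDiffusionXLPipeline

# Load the public SDXL base weights (requires a GPU with enough memory).
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Neutral prompts like those described above; generate a batch per prompt
# and review the saved images for demographic skew.
prompts = ["a photo of a person playing soccer", "a photo of a person cleaning"]
for prompt in prompts:
    for i in range(8):  # small sample; a real audit would use far more images
        image = pipe(prompt, num_inference_steps=30).images[0]
        image.save(f"{prompt.replace(' ', '_')}_{i}.png")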

When AI generators cater to the appearance of dominant groups or fail to give us diverse, inclusive images, they teach stereotypes that are harmful to members of every community. 

AI technology is flawed because humankind is flawed. As AI inevitably penetrates all dimensions of modern life, we must remain cognizant of its relationship to social progress. AI developers, policymakers and community members must all advocate for an ethical future with AI.
