USC student Aida Davani tackles hate speech through AI


Doctoral student Aida Davani recently presented her work “Hatred is in the Eye of the Annotator: Hate Speech Classifiers Learn Human-like Social Stereotypes” at the Cognitive Science Society conference in August, where she and her co-authors shared their study on how AI models reflect the social biases of the people who create them. (Photo courtesy of Aida Davani)

When it comes to tackling complex issues of social justice and online hate speech, Aida Davani knew it would take a novel approach. With a background in computer science and continuing her education in the field as a doctoral student, Davani understood that if she wanted to make an impact, it was time to start asking tougher research questions and investigating how human biases can influence artificial intelligence systems that identify hate speech.

“There are usually some main questions, and there are different research groups that try to enhance the answer … [and] enhance the performance of a model,” Davani said. “But I was more generally interested in asking new questions and looking into different concepts that haven’t been explored.”

In her most recent work, “Hatred is in the Eye of the Annotator: Hate Speech Classifiers Learn Human-like Social Stereotypes,” which she presented at the Cognitive Science Society conference in August, Davani and her co-authors found that AI models trained to identify hate speech reflect the biases of the people who label their training data, a process called annotation.

“[We] figured out there are some kinds of social biases and human biases that lead to this kind of imbalanced data set and bias models,” Davani said. “There are a lot of disagreements annotating hate, and [we] want to know if it’s because different people have different views or different social biases about those groups.”

Discovering this phenomenon required a uniquely interdisciplinary approach. Leigh Yeh, a 2020 graduate of the master’s in computer science program who previously co-authored a similar paper with Davani, explained how the project demanded a variety of skill sets and perspectives — namely, psychology.

“We all work in social psychology, and we all work in social science and we’re all from a variety of backgrounds,” Yeh said. “I did not start off in computer science. My background is in cognitive science. But, I remembered I want to help my community. I want to learn about how people interact.”

Davani echoed this, explaining that she herself had to make adjustments in the way she took on these issues.

“We have different backgrounds, we have different questions that we want to answer,” she said. “So our knowledge is like different pieces of the puzzle — we have to trust each other … When it comes to, for example, social phenomena, it’s not my area of expertise, and I don’t like to go ahead and put my ideals which are just based on my observation but not based on some solid background.”

The issue of hate speech and AI has become more prevalent in recent years. In 2018, 53% of Americans reported facing online hate speech, with 37% reporting severe attacks such as stalking and sexual harassment, according to USA Today. Currently, there is no universal method of regulating hate speech on the internet.

This has led many computer scientists to devote more research to the problem. Morteza Dehghani, an associate professor of psychology and computer science and Davani’s mentor for the past four years, believes that her work is a key part of expanding upon this scientific literature.

“She’s bridging between the two fields in order to both bring in abilities that computer science models and analyses at large scale can tell us about human behavior, but then also complemented from top down theories that, for the past couple of decades, people have worked on and developed on how biases can lead to prejudice into these models,” Dehghani said. “She’s essentially found a very good sweet spot.”

While the public often misunderstands AI and its more technical aspects, Yeh believes this research can make a real, positive impact on people’s lives.

“There’s a lot of scary things being said about AI, and I would love the public to know that there is a lot of good that can come from AI,” Yeh said. “It just takes motivated, smart and caring and empathetic people to get in this field and really utilize these tools for the greater good.”

For now, Davani encourages people interested in the field to challenge themselves to be more creative and holistic with their approaches to their work and research.

“Maybe just spend more time and ask questions about the data that you have, or the main practices that are usual in your field,” she said. “Maybe there is something that is not being understood because there isn’t a collaboration or there’s other points of view to it. It’s very important to be able to ask questions before trying the common practice.”