USC researchers look to clean up social media

Researchers are analyzing harmful online content to inform future moderation.

By QUINTEN SEGHERS
Luca Luceri has co-authored two papers under the USC Information Sciences Institute and CAll for Regulation support In Social MediA. (Luca Luceri)

A team of researchers at USC’s Information Sciences Institute (ISI), collaborating with the CAll for Regulation support In Social MediA (CARISMA) project, is analyzing harmful content — misinformation, hate speech, spam, bots, trolls and more — on social media platforms to create a body of work that will inform social media companies, regulators and fellow academics about how content moderation can be improved.

CARISMA staff hope to use this body of knowledge to create a content moderation simulator. This simulator will allow researchers to play around with different techniques and methods for content moderation, ranging from temporarily and permanently suspending users to flagging, demoting and removing content.

Luca Luceri, a co-author of two research papers under the ISI and CARISMA, is also working on the simulator. 

“We are building this big simulator with … several thousands of agents that try to replicate humans and humans’ interactions,” Luceri said. “We want to see what happens if we limit the interaction of good legitimate users with these bad actors.” 
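The article does not describe how CARISMA’s simulator is actually built. As a rough illustration of the kind of agent-based experiment Luceri describes — limiting how often legitimate users come into contact with bad actors — a minimal Python sketch, with hypothetical agent classes and parameters that are not drawn from the real system, might look like this:

```python
import random

# Hypothetical sketch of an agent-based content moderation experiment.
# None of these names or numbers come from CARISMA's simulator; they only
# illustrate limiting contact between legitimate users and bad actors.

class Agent:
    def __init__(self, agent_id, is_bad_actor):
        self.agent_id = agent_id
        self.is_bad_actor = is_bad_actor
        self.harmful_exposures = 0  # times this agent saw harmful content

def run_simulation(n_agents=5000, bad_actor_share=0.05,
                   interaction_limit=1.0, steps=20_000, seed=42):
    """Simulate random pairwise interactions; `interaction_limit` is the
    probability that a legitimate user is allowed to interact with a bad actor."""
    rng = random.Random(seed)
    agents = [Agent(i, rng.random() < bad_actor_share) for i in range(n_agents)]
    for _ in range(steps):
        a, b = rng.sample(agents, 2)
        if a.is_bad_actor != b.is_bad_actor:
            # Intervention: probabilistically block cross-group interactions.
            if rng.random() > interaction_limit:
                continue
            victim = b if a.is_bad_actor else a
            victim.harmful_exposures += 1
    return sum(agent.harmful_exposures for agent in agents)

# Compare total exposure to harmful content with and without the intervention.
print("no limit:", run_simulation(interaction_limit=1.0))
print("strict limit:", run_simulation(interaction_limit=0.2))
```

In a sketch like this, researchers could swap in other interventions — temporary suspensions, content demotion or removal — and compare how much harmful exposure each one prevents.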

Lucia Perfetti, a freshman majoring in cinema and media studies, estimates she sees harmful content on social media roughly twice a week, and she believes that hate speech has had a resurgence. 

“I’ll see lots of hate speech against specifically Jewish communities right now with the conflicts going on abroad,” Perfetti said. “At the [LGBTQ+ Student Center], we were at a meeting and we were looking through a Pride Month post and there were some hate speech comments and we reported them to administration.”

Together, the ISI and CARISMA have published several papers, one of which analyzed how X, formerly known as Twitter, responded to major geopolitical events. The study found that X’s content moderation team is more proactive in suspending recently created accounts than older ones, and that suspended accounts engage and post more than other users.

In a second joint study, researchers analyzed how moderated content — content that a social media company eventually takes down for violating community guidelines — spreads across different platforms. According to the study, harmful content from YouTube gets more engagement than non-moderated content from any other mainstream platform.

Luceri recalled being surprised by some of his findings. 

“We didn’t expect to find that moderated videos were getting so much traction and were shared far more than content from other social media platforms,” Luceri said. “These videos got traction quite soon in their lifespan, so that they could reach a wide audience before moderation.”

Angela Wang, a senior majoring in film and television production, said she most often sees hateful comments directed towards celebrities on Instagram. 

“Usually I don’t really comment on stuff like that because I feel like, ‘Why bother?’, because it’s other people’s dumb comments,” Wang said. “But if something really pisses me off I’ll report it … [Controversial content] gets the views, they grab people’s attention, and people are gonna want to comment on it because they’re like, ‘This is so wrong.’”

Luceri’s team also found that during the 2020 United States presidential election, people who supported former President Trump were more likely to share YouTube videos on X that were eventually taken down than those who supported other candidates.  

Perfetti said she noticed a lot of pushback against Trump and his followers’ statements during the 2020 presidential election.

“[Trump’s] followers had a larger presence and thus there was probably a [higher] likelihood of their posts getting taken down,” Perfetti said. “There’s a lot of pressure on the company, which I think is great, like [the] public having a say in what companies regulate. But that can also be really damaging if companies get more biased against one side or the other.”

The type of research Luceri does is made possible largely by tools known as application programming interfaces, or APIs, which researchers use to collect and compile the necessary data, such as millions of tweets. However, the limited information available through YouTube’s and X’s APIs prevented Luceri’s team from expanding their research.
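The article does not show the team’s actual data pipeline, but the general pattern of pulling posts through an API is simple to sketch. The Python example below uses a placeholder endpoint, token and response shape — not the real X or YouTube APIs — purely to illustrate how researchers page through results and compile them into one dataset:

```python
import requests

# Illustrative sketch only: the endpoint, parameters and response fields here
# are hypothetical placeholders, not the APIs the researchers actually used.

API_URL = "https://api.example.com/v1/posts/search"  # placeholder endpoint
API_TOKEN = "YOUR_TOKEN_HERE"                        # placeholder credential

def collect_posts(query, max_pages=10):
    """Page through an API's search results and compile the posts into one list."""
    headers = {"Authorization": f"Bearer {API_TOKEN}"}
    posts, cursor = [], None
    for _ in range(max_pages):
        params = {"q": query, "limit": 100}
        if cursor:
            params["cursor"] = cursor
        resp = requests.get(API_URL, headers=headers, params=params, timeout=30)
        resp.raise_for_status()
        payload = resp.json()
        posts.extend(payload.get("data", []))
        cursor = payload.get("next_cursor")
        if not cursor:  # no more pages to fetch
            break
    return posts

# Example: compile posts mentioning a topic of interest.
# posts = collect_posts("election")
```

What a study can conclude depends entirely on which fields the platform exposes through calls like these; if the API omits, say, the timestamp of a suspension, that gap carries straight into the research.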

Sometimes, a social media platform’s API only tells researchers half the story. 

“When an account [is] suspended on [X], we know only whether it was suspended or not, but we couldn’t get any information on when the account was suspended,” Luceri said. “So we can speculate and try to go back in time and see what account was this user, but we have no proof, no signal or evidence from the API.”

Luceri said that he’s also mindful of the balance at play between one’s freedom of speech and the need for content moderation. 

“If we moderate a hateful comment or a hateful post, this goes against the freedom of speech,” Luceri said. “But on the other side, we need also to take into account that these spaces require some moderation. So if we allow everybody [to speak] in a certain way — violent, toxic or hateful … We needed a way to somehow polish these spaces.”

One of CARISMA’s objectives is to learn which interventions are most effective at mitigating harms like misinformation and hate speech. 

“As a researcher, I think I’m driven by this mission to make these spaces healthier and safer,” Luceri said.
