USC adjusts to generative AI in academia

(Illustration: Jonathan Park | Daily Trojan / Photo: Ahmed Hindawi | Unsplash)

Recent breakthroughs in artificial intelligence have wrought unforeseen consequences for USC faculty. Students across the country have already begun using ChatGPT, an artificial intelligence language model, to complete their assignments, forcing professors to adapt their classes.

In an update to the University’s academic guidelines Tuesday, the Academic Senate Committee on Information Services advised professors to either “embrace and enhance” their curriculum with the software, or “discourage and detect” students using it in lieu of submitting original work. 

“Many of the proven teaching and assessment techniques that worked in a pre-generative AI world still work in a world where any student … can create walls of academic text,” the CIS guidelines read, offering examples of assignments incorporating ChatGPT and discussing the strengths and flaws of generated-text detectors available online.

Mark Marino, a professor of writing at the Dornsife College of Letters, Arts and Sciences, is one of the professors leading the charge to incorporate ChatGPT into the curriculum. Marino uses ChatGPT to help students recognize what the language model lacks, and builds writing prompts around the limits of artificial intelligence with the intent of developing his students’ critical thinking.

“It worked well, because [while] they were impressed at first with the surface-level abilities of ChatGPT, it helped them to realize what they bring to the table,” Marino said. “If I can help students to get ChatGPT to produce something that is interesting and really well written, at the end of the day, I’ve actually taught them quite a bit about writing.”

Marino said he was motivated to implement artificial intelligence in the classroom because he foresaw the ubiquity of the tool, and thought it would be an “interesting and effective” way to extend the learning that students were working on within the classroom rather than using the tool as a crutch. He said he felt trying to prohibit the tool completely would introduce unnecessary complications and would inhibit the learning process.

“As soon as any teacher gets sucked into that cat-and-mouse game … at a university level, I feel like they’re aiming in the wrong direction. It’s not winnable,” Marino said. “GPT-4 [a new OpenAI product set to release this year] comes out before the end of the year, and all the ways of catching students are going to be invalid.”

Fabrizio Marsano, a freshman majoring in computer science (games), said he believes that artificial intelligence is, at the moment, clearly identifiable because language models are still in the early stages of development.

“Maybe later on, as it gets more advanced, I think they should certainly implement stricter anti-AI plagiarism rules, because it definitely will get more realistic and it definitely [will] be important to notice as time goes by,” Marsano said.

The use of artificial intelligence in academic spaces has risen dramatically as advanced language models like GPT-3 have become more accessible. According to a study done in November 2022 by Intelligent, an organization that helps students develop study methods, 76% of college students in the United States said ChatGPT was “very” or “somewhat” popular. The same study found that three in four college students believe using it is cheating, yet many use the tool regardless.

Yebon Lee, a freshman majoring in neuroscience, said she believes that the use of artificial intelligence in academic spaces will not help students adequately prepare for their careers and progress intellectually because of their dependence on such tools. 

“I know a lot of students who use AI for essays and math and later on they’re complaining about how they don’t know anything, and they blame their professors,” Lee said. “In reality, it’s just them not using their time and effort and instead relying on AI.”

If professors decide to discourage students from using artificial intelligence, they must explicitly state the restriction in the course syllabus, according to the guidelines. The document also offers suggested solutions for professors, such as asking more nuanced questions in assignments and requiring students to complete assignments during class.

Michelle Lee, an exploratory freshman, said two of her professors are already cracking down on the use of AI in their classrooms. 

“Whenever it’s talked about in class, it’s never a good reason,” Lee said. “[They say] don’t use ChatGPT [and say] we will catch you somehow.”

Marsano’s professors, on the other hand, have not yet discussed the issue. 

“Most of my classes now are either not related to technology and if they are, AI is not the focus,” Marsano said.

OpenAI, the developer behind ChatGPT, recently released an AI classifier meant to determine whether a body of text was written by artificial intelligence. The company advises users to take the classifier’s judgments with a grain of salt, noting that it has labeled human-written texts as AI-generated with full confidence and remains unreliable across many kinds of text.

At the moment, the decision to embrace or reject artificial intelligence tools in classes ultimately rests in the hands of each professor. While some attempt to prohibit the tools, others across the University are making real strides in implementing them into the curriculum, without the fear of having to one day “[make] dinner for our robot overlords,” Marino said.