Vibe check on AI in college: The battle over B-quality work

Ramp up thoughtful integration of new AI systems, listen to students and help keep your wise GE professor sane.

By VICTORIA FRANK
(Audrey Schreck / Daily Trojan)

I started my field research into this topic unsure about the impact of artificial intelligence on our campuses. I found that my confusion was quite universal. Professors are confounded by how to discern the influx of potential plagiarism, students are unsure about the longevity of their current studies and nobody is completely confident that a full-on AI-and-society immersion is a great idea, other than some extreme techno-determinists like Marc Andreessen (I was not aware of his manifesto before publicly self-identifying as a techno-optimist).

Universities have historically been the predominant setting for tackling gray areas. As we enter a new existential crisis — that of artificial intelligence — the question arises: How can we live and work with this new reality in a way that promotes good ethics and positive learning?



Dean Willow Bay of the Annenberg School of Communication and Journalism published a LinkedIn article titled “We’re taking our students to the frontlines of the AI revolution” in August 2023. In the piece, Bay announces a new course at USC called “Artificial Intelligence and the Future of Creative Work,” which, at Annenberg, was the “first course focusing exclusively on the subject.”

One student in the course, Rory Burke, a senior majoring in journalism, was working for the Viterbi School of Engineering as a video producer when she started hearing talk about AI.

“That’s what piqued my interest, and then they announced the class and it felt like kismet that I should take it,” Burke said. “Especially as I’m about to graduate, I feel like integrating such an important tool into my journalism now would be good to get ahead of the curve.”

The students of the class worked with the professor to shape and reshape the syllabus as the semester progressed. 

“He is so accomplished in his career and just knows his stuff, so I went in with a full trust of him, and he was like, ‘Some of you might know more than me.’ He built upon our input,” Burke recalled.

When asked about professors not allowing AI in their syllabi, Burke explained, “If I’m using it responsibly and can prove that, I feel like that should be allowed. If it’s gonna help make our jobs a little easier, then why not?”

“Why not” was also a key question of mine, and I found an answer on the fourth floor of Taper Hall. There sits Thomas Gustafson, a professor of English teaching large seminar classes like Reading the Heart: Emotional Intelligence and the Humanities. 

Gustafson invited me inside his eclectic office, full of book towers and DVD shelves, and told me that during the fall semester, he endured many sleepless nights wrestling with the imposition of generative AI in his classroom. 

“I don’t see any winning solution,” Gustafson said. “AI is going to be the future. People are going to be generating essays. Maybe they can improve detection systems, but it’s like 2,000 years of humanistic education — teaching people how to read, write, think critically — is being undermined.”

In his seminar classes of over 100 students, Gustafson spent hours meeting with students one-on-one when AI plagiarism systems flagged their work. He approached students with empathy, rather than immediate punitive measures.

“I had these clever students who are brilliant in engineering,” he said, “and they’re all studying AI generation. They said, ‘Oh, we put your syllabus through ChatGPT, and it shows up as 7% plagiarized.’”

Thomas Gustafson did not commit an intentional act of plagiarism — rather, the software detected the cross-resourcing of previous syllabi published by Gustafson himself. 

There are no firm stances across administration boards that reflect public opinion. There are no fail-safe detection services that understand human nuance. We have not yet even reached a consensus among students about why university is or isn’t important, and whether or not artificial intelligence contradicts our supposed intention upon enrollment.

One sentiment I did hear repeatedly on all sides of the argument was that an AI-generated paper is only good for about a B grade. As Burke put it, AI is good at making “basic, 5th-grade sentences.”

Training AI further and integrating it more deeply does not guarantee it will eventually outsmart us. What might happen, and is already happening, is a stunting of intellectual inquiry and enduring hard work. Luckily, as I will always restate, how we use AI is up to us. My proposal for a path forward? Let us remind ourselves why we learn and lead with that.

Victoria Frank is a junior writing about the inevitable AI future with a focus on ethics and well-being. Her column, “Natural Intelligence,” runs every other Friday.

© University of Southern California/Daily Trojan. All rights reserved.