Generative AI should stay outside the classroom

Honey, your AI slop is getting cold.

By LEILANI YBARRA
Generative AI is a disservice to a student's education. (Tara Su / Daily Trojan)

Generative artificial intelligence is a drug we’ve been microdosing since its inception, one on which we have now become terribly reliant.

Take a peek at fellow students’ computer screens in your 9 a.m. lectures: the inevitable ChatGPT tab is open, spewing out summaries of last night’s readings. AI has even found its way onto the posters littering the traffic light poles on Trousdale Parkway, advertising yoga teacher training programs and workshops on how to use Adobe’s generative AI, Firefly.

At USC, the use of AI in classrooms is snowballing because the University has yet to establish a universal policy on its use in academic work. We’ve even seen professors succumb to the wonders of AI, incorporating conversations with chatbots into their lectures. Who’s to say whether the information fed to us is accurate data or the mere hallucinations of a flawed model?


In June, a study published in the Journal of College Student Development revealed that roughly three-quarters of college students admitted to using ChatGPT, demonstrating how comfortable students have grown with plugging assignments into their chatbots or using AI to quickly generate slideshow presentations. In the same study, one in 10 participants confessed to using AI to cheat, thereby willfully compromising their academic integrity.

While it is becoming increasingly difficult to police the use of AI, especially with the inconsistency of AI detection tools and their propensity to generate false positives, the presence of generative AI in education is generating more harm than good. 

AI usage is a huge disservice to students’ academic growth, especially at institutions they’re likely paying thousands of dollars to attend, and it is fostering a dystopian culture in higher education. We prioritize degrees that signal well to employers, fixated on the idea that they should make us more marketable employees rather than informed, well-rounded members of society.

This raises the question: Why do students feel the need to keep their AI bots on standby in the first place? The rampant use of AI unveils the cracks in education as an institution. After all, cheating through coursework is nothing new: Before large language models, there was SparkNotes. Before SparkNotes, there were strategically placed notes inside water bottle labels.

As students, we’re tasked with balancing rigorous academics while expanding our career prospects and maintaining a rich social life. It comes as no surprise, then, that some may succumb to the convenience of generative AI to circumvent tedious assignments. In a world shaped by the coronavirus pandemic, higher education is predicated on students trying to gain a competitive advantage over their peers.

USC is simply a microcosm of this larger phenomenon. Many students no longer see AI as a shortcut but rather as a means of survival in an incredibly cutthroat environment, believing they are somehow outmaneuvering the system by reallocating their time to more relevant pursuits, such as applying for internships or clubs.

The rampant use of ChatGPT and similar LLMs has desensitized us to the reality of over-relying on AI and forgoing the critical thinking these “tedious” assignments were designed to require in the first place. It’s a Tartarean nightmare where all thinking is outsourced and you become nothing more than a human assistant to a chatbot, just as disposable as the content it generates.

Bypassing the initial state of discomfort when working through complex homework problems, which is where actual learning occurs, deteriorates your ability to think critically and impedes your sense of self-fulfillment. It’s a matter of cognitive debt whose long-term effects we have yet to see: a path of least resistance that leads you nowhere.

With students submitting essays written by ChatGPT and some professors even using AI to give feedback on assignments, we’ve eliminated the middleman entirely, letting AI “slop” — low-quality content generated with AI — pile on top of itself. This AI “slop” is no longer exclusive to obscure Facebook posts that can only fool grandparents: It’s now permeating the culture of higher education as we know it.

At times, it may seem futile to unravel the intricacies of calculus or try to understand exactly what Kierkegaard was saying about existentialism. However, it’s not simply about the content itself but rather your ability to problem-solve and get your hands dirty as you digest these curricula. 

This is not to undermine the potential positive uses of AI: We’ve seen it enhance accessibility by serving as a tool for people with disabilities and even expedite medical screening processes. But when OpenAI released its mission statement in 2015, embracing an almost utopian approach to making AI accessible to everyone, I’m certain its engineers didn’t anticipate their chatbots writing essays on Hamlet in the coming years. I’m not here to wax indignant from a moral high ground or fearmonger about AI robot takeovers; I’m here to deter you from relying on these generative AI models to do your thinking for you.

AI isn’t going anywhere, but it is ultimately your choice whether to defy the entire point of pursuing higher education. I encourage you to sit in that state of discomfort, work through that writer’s block and collaborate with professors and peers to do what you initially came here to do: learn and grow.
