Daily Trojan Magazine

Reimagining learning for the world of AI

It’s past time for USC to figure out not if, but how, AI should be part of the classroom.

By DOR PERETZ
(Alexa Esqueda / Daily Trojan)

Three years since the explosion of generative artificial intelligence with ChatGPT and other large language models that use data and machine learning to generate human-like text, AI has firmly rooted itself in lecture halls, libraries and university dorm rooms across the nation. The reasons why students are drawn to using AI are obvious. 

Many students are busy juggling classes, club involvement and jobs. With long days full of these commitments, after getting home at 10 p.m., 11 p.m. or sometimes even 1 a.m., it makes sense why some students aren’t necessarily looking forward to finishing a 50-page reading for their 8 a.m. class the next morning. 

So, if AI can summarize the reading, it might be worth it to skip it and get an extra hour of sleep without having to waste another night at Leavey Library. 

Much of the debate around students using AI has centered around curtailing academic dishonesty, but the overarching conversation has lacked consideration for what universities and professors can do to promote legitimate intellectual curiosity. This isn’t an effective approach for such a nuanced issue. 

If the only motivator for students to limit their use of AI is threats of punishment, students will continue to use it as long as they think they’re smart enough not to get caught or are using it in ways that don’t amount to plagiarism. Accordingly, we must ask ourselves how we can create learning environments where students and professors alike can benefit from AI as a tool, while making sure it doesn’t impede students’ learning. 

In the meantime, AI models are getting bigger, smarter and more accessible by the day.

With OpenAI’s early August release of GPT-5 — which is proclaimed to have “Ph.D.-level” capabilities and fewer issues with hallucinations and instruction-following — and Google Gemini gaining popularity lately due to Nano Banana, its new image generation model, it’s clear that the field of AI has been progressing rapidly in recent years. 

Given these developments, it’s no wonder that many students have opted to use AI to help them study and complete assignments. Last finals season, OpenAI even offered college students in the United States and Canada two free months of ChatGPT Plus, enticing those looking to ace their tests and essays. 

Of the five students interviewed for this piece, all said they noticed that other students are using AI to manage their schoolwork.

“Most of my friends or classmates, they use AI more as a resource. Let’s say, if they’re trying to do research for an essay that they’re going to write, they don’t use AI to write their essay, but they use AI to find sources that they can use to write their essay,” said Francesca Kubica, a junior majoring in AI for business.

AI can give those who choose to use it an edge over their peers, which may be another reason why students are drawn to it. 

“College is quite a competitive space, and there’s a massive competitive advantage you get from using it. It feels like there’s supply: Everyone gets to use it for free. And there’s the demand: When you use it, you get better grades,” said Mattice Ureel, a sophomore majoring in chemical engineering. 

Many assignments are nonetheless worth devoting our time and attention to; without doing so, we lose both the knowledge and the critical thinking skills we are paying upward of $90,000 a year to gain. 

“Once you start prompting [AI] and then having it almost do the project for you, even if it’s not verbatim, if you’re not using your creativity or critical thinking to actually conceptualize and understand an assignment, and you’re using AI to fill that space instead,” said Matt Glover, a sophomore majoring in economics, “that’s where issues arise, because then the student isn’t learning.” 

It is one thing to use AI to supplement the information you have while still forming your own understanding and analysis. It is another to use AI to prevent yourself from having to do any thinking of your own. In the first case, convenience exists alongside intellectual development, but in the second, convenience overpowers curiosity to a level that is concerningly in line with anti-intellectual sentiments. 

One TikTok user has claimed they “read” 100 books per week by using AI to summarize them into blurbs. Another user described overhearing classroom peers discussing their plans to avoid reading by AI summarizing the books for their world literature class — yes, a class dedicated precisely to reading and analyzing books. 

Many on social media have pointed out that AI book summaries don’t encapsulate the experience you gain from actually reading the books. In fact, skimming a summary isn’t reading a book at all.

This is the exact sort of disengagement from critical thought that dystopian novels like “Fahrenheit 451” — wherein book burnings occurred after tech-created book summaries caused people to stop engaging with the real texts — have warned is indicative of the downfall of society. 

Jack Edwards, an author and book influencer, said it best in his video about his recent reads, which he thinks AI could never adequately summarize.

“It is impossible to capture the ethos or the mood or the atmosphere of the novel with a simple plot summary,” Edwards said. 

These two attitudes signify the crossroads we are at: On one hand, using AI for any and every academic need is the easy way out because it saves time and might help one’s academic performance; on the other, relying on it might impede learning and development, leaving students unprepared for professional life.

“If a student says, ‘Okay, write me an essay about this’ … they haven’t learned, and they may go on to be in a job where they need to convince someone of something. Well, they can’t convince someone of something because they didn’t [learn anything]. It’s a real skill,” said Khalil Iskarous, a professor of linguistics at the Dornsife College of Letters, Arts and Sciences. 

It’s not just that overusing AI might pose challenges in employment; replacing your original thinking with AI could create tangible cognitive deficits. 

A study published in June by researchers from the Massachusetts Institute of Technology, Wellesley College and the Massachusetts College of Art and Design found that over a period of four months, participants who used large language models like ChatGPT for writing essays showed weaker neural connectivity compared to those who didn’t; those who used AI had fewer interactions across different brain networks, signifying lower creative processing. 

Because of how recent the development of large language models has been, this area of research is still early and has the potential to change with time. However, initial findings like this are worth keeping in mind when deciding what responsible AI use should look like. 

While working toward the ideal solution, it’s crucial that the USC administration consider both professors’ and students’ perspectives, as they are the ones most affected by these policies. Iskarous, also an AI researcher, said most of the focus has been incorrectly placed on the question of banning or allowing AI.

“The right debate should be: what are the ways in which to avoid the worst aspects of it and what are the ways in which we can actually use it to sharpen skills that very few people get to sharpen?” Iskarous said. 

Ryan Nene, a junior majoring in computer engineering and computer science and an AI ethics lead at Shift SC — a USC club focused on ethical technology — stressed the importance of guiding AI usage toward reducing risks and maximizing benefits. 

“Technology ultimately is meant to help people improve their lives. That’s kind of the leading thought when it comes to any sort of tech ethics,” Nene said. “When technology isn’t doing that or directly harming people, why is it doing that? How is it doing that? And what can we do to kind of mitigate those effects?”

In the Undergraduate Student Government, students are working to bring different viewpoints on AI to the table to hopefully drive policy changes that are in students’ best interest. USG President Mikaela Bautista and Vice President Emma Fallon initially ran on a platform of getting ChatGPT Plus for all students, and the organization is now starting to develop the project.

Fallon stressed USG’s efforts to support the undergraduate student body’s wants and needs in an interview with the Daily Trojan.  

“USG is in a position where we have a lot of access to admin and meet with admin a lot, and I think we really want to leverage that position to make sure that student voices are being heard and bring all different perspectives,” Fallon said.

In the inaugural episode of Trojan Talks — a podcast dedicated to conversations about USC’s future — on Wednesday, interim President Beong-Soo Kim discussed what responsible AI use looks like at USC, focusing on opportunities to further integrate AI. 

“I don’t want USC to be behind the curve or underprepared for the changes that [AI] is going to create, and I also think there are huge opportunities for us to lead, not just adapting or adopting AI but thinking about … how do we use it in an ethical, responsible way,” Kim said. 

Additionally, Kim discussed AI’s current and future impacts in three major areas: research, operations and teaching. For the teaching sector, Kim announced that the dean of the Marshall School of Business will be leading a strategy committee on AI in the classroom. 

At USC, the major barriers around AI so far have been a lack of standardization, limited transparency about expectations and imbalances in people’s experience with AI. 

Moreover, some students may pay for different models’ subscriptions and be experts on using AI ethically, but others might have less access to these resources due to technological or financial limitations and be unsure about how to use them responsibly.

Ureel, who is also an AI ethics lead at Shift SC, pointed out how starkly different the approaches to AI can be between classes. 

“You have some extreme examples, like in my history class, where they’re like, ‘Okay, no computer use at all.’ But then you have other teachers who seem more and more friendly to the idea. So I think there’s a kind of mixed balance throughout the faculty,” he said.

USC-wide adherence to the same AI policy might not be feasible, especially because AI might affect each department differently. Still, it may help to create departmental AI policies so that, at the very least, there is consistency within students’ major-specific courses and less confusion stemming from professor-by-professor differences. 

Additionally, a compulsory AI ethics curriculum, whether in a similar vein to USC’s “Consent and Healthy Relationships” education modules or embedded into our GE requirements, could also prove useful. 

Perhaps most importantly, though, professors must foster students’ excitement about learning and thinking critically. Kubica, the executive director of the Marshall Artificial Intelligence Association, said students should be given more guidance on what options are available for using AI in education.

“Make it exciting: Rather than say, ‘Hey, this is what you can’t do with it,’” Kubica said, “Say, ‘Hey, this is what you can do with it.’ And then have disclaimers like what you can’t write, and then outline, what does ethical mean to them? Because every professor is going to have a different definition of what ethical means.”

One way of doing this could be creating a chatbot that students can interact with, as some professors have already done. Kubica said one of her Marshall professors implemented a customized AI tool that could answer questions about their syllabus. Nene said a friend at USC had access to a similar resource with information about course content in the friend’s master’s-level class.

Still, stimulating students’ desire to learn doesn’t have to involve AI. Glover described how one of his professors encouraged him and his peers to write about the topics they found most interesting for their essay prompts, which he felt increased his creativity and engagement with the course. 

“Giving students that freedom to further their education on their own terms and furthering their education [by] prompting their creative minds already can serve as really beneficial,” Glover said. 

In the near future, with both more AI guidelines and sensible integration of AI into the classroom, we could find the right balance between mitigating AI’s negative effects and harnessing its positive effects to help students learn on an even deeper level. 

We’re navigating how to be the best students we can be with access to powerful tools no generation has had before. But we can still be just as ambitious and just as committed to our learning as students have always been. As students figure out how to remain curious while benefiting from AI’s convenience, it is crucial that USC guides professors and students toward reducing AI’s risk and simultaneously expanding students’ excitement for learning.

© University of Southern California/Daily Trojan. All rights reserved.