AI: INSIDE AND OUT

Can artificial intelligence write your academic paper? It depends

A closer look at course learning surrounding AI, ChatGPT and generative machines in the classroom.

By SCARLETT LOVALLO & KARISSA YAN
[Illustration: Two clip-art syllabi, one prohibiting AI and the other accepting it. Lyndzi Ramos / Daily Trojan]

At first mention, artificial intelligence seemed to be a professor’s academic nightmare.

With ChatGPT generating college students’ essay drafts, Midjourney designing inventive visual artwork and machine learning effectively debugging code, computer systems have raised concerns about job replacement, copyright and dependence. And while concerns about the ethical use of artificial intelligence are far from nonexistent, some undergraduate curricula are reimagining learning with generative AI.


General university guidelines give discretion to professors in determining course policies for AI. Students may find syllabi prohibiting use of computer systems between mentions of Turnitin and Academic Affairs, or in contrast, assignments incorporating artificial intelligence into academic work and independent projects.

Preparing students for careers integrating AI into neuroscience, biotechnology and pharmacology, instructors at the Davis School of Gerontology are incorporating machine learning into syllabi. In the upper-division course “Physiology of Aging,” John Walsh, professor of gerontology, encourages students to develop skills in artificial intelligence programming. Students create a Wix website with algorithm-generated data for a specialization of cancer, including prevention and risk factors, advanced diagnostics, treatments and clinical trials.

“My goal is [for students to] be competitive in the professional world that is completely embracing AI,” Walsh said. “People are getting really incredibly lucrative careers based on knowing how to communicate with AI and [using] the right search terms.”

At the Viterbi School of Engineering, artificial intelligence is arguably redefining computer diagnostics, advancing early diagnosis of Alzheimer’s disease. Researchers are training AI to detect blood markers of the neurodegenerative condition in cross-disciplinary research. Such innovation at the intersection of biomedical research and computer science prompts the introduction of AI training into undergraduate education.

But artificial intelligence is still (machine) learning: USC Libraries warns students of current model limitations, citing that generated information may be false, outdated or biased. For students, using this sometimes unreliable computer-generated information in assignments could lead professors to flag the work as plagiarized or entirely fabricated. Such reminders reflect that artificial intelligence remains remarkable, yet imperfect.

Recognizing computer system flaws, Walsh requires his students to fact-check algorithm-generated content. After generating their websites, undergraduates will reference credible sources, confirming the accuracy of data. Such policies ensure students correct misinformation and acknowledge “algorithmic bias,” or implicit bias from data training the computer systems. When left unaddressed, automated bias can perpetuate disparities in representation, exacerbating existing structural inequities.

While artificial intelligence may be to blame for computer-generated errors, students should be aware that they are ultimately responsible for work they create or endorse.

“Even though it’s AI generated … you’re still presenting it as yourself,” Walsh said. “You are now the advocate for … that tool, so you have to justify it.”

While most undergraduate schools have yet to standardize expectations for AI usage, professors are advised to clearly state their individual course policies.

Mike Ananny, an associate professor at the Annenberg School for Communication and Journalism, takes a reflective approach to the question of AI use in his syllabus and, more generally, to the learning environment he wants to foster with his students.

“Generative AI can be an opening to other ways of learning and other ways of critical thinking, [and this critical engagement], to me, is the whole point of a learning environment. It’s a matter of how generative AI can fuel that,” Ananny said.

Ananny said one of the biggest mistakes in the rhetoric surrounding AI lies in the personification of, and agency ascribed to, what is essentially a mathematical algorithm. Much of the fear and hype surrounding generative AI concerns its potential to replace human labor, but as students prepare for a workforce saturated with fast-paced technological innovation, he hopes his classroom can equip them with an important “metaskill”: an understanding of the greater politics of labor organization and technological distribution.

“I am most deeply interested in the politics of technology,” Ananny said. “Why are some systems made and other systems not made? What kind of image of people or humanity or success or public life — those big, huge humanistic concepts — end up being baked into these systems?”

In the School of Cinematic Arts’ course “AI and Creativity,” Professor Holly Willis has already seen increased awareness around the ethics of AI among her students. By using generative AI tools such as ChatGPT and Midjourney for language, still-image and motion generation, students have also taken on a reflective practice, critiquing the biases and limitations in the datasets these tools are trained on — datasets sourced from existing databases that may contain race, class, gender and geographic skews.

In this period of flux, professors face the challenge of designing course policies that balance concerns about academic integrity with opportunities to use AI reflectively and creatively in the classroom.

“The knee jerk reaction that I find most troubling is the idea that students just want to cheat, and they’ll be using these tools to get through their classes quickly and easily without doing the actual work themselves,” Willis said. “I kind of push back against that and recognize that students are often very serious about why they’re here and what they want to do as they’re learning.”

At the Marshall School of Business, relevant guidelines mirror the sentiment that fundamentals are essential. In foundational and lower-division courses, students are not allowed to consult AI for work involving analysis, critical thinking or innovation. Such curricula ensure students practice these skills independently before moving on to in-depth analysis and application.

After students develop the fundamental skills in lower-division courses, they can apply AI to enhance their craft in upper-division courses. Ananny said AI tools can be integrated into the “journalistic workflow” to provide different starting points for projects that would originally take longer, more tedious preparatory steps. Likewise, in Willis’ upper-division course, generative AI has offered students different mediums for visual storytelling. 

“The myth is that it is super easy to generate something, [but] to create a body of work that shares an aesthetic and that feels continuous really takes skill and hard work and a kind of visual imagination that I think is exciting,” Willis said. “There still needs to be a kind of sensibility and attentiveness to storytelling — all the things that go into traditional forms of art making.”

Regardless of the tools students use, both Ananny and Willis emphasized the importance of the creator as the driving force in the relationship between creator, creation and the subsequent commercialization of the craft.

The integration of new technologies in USC classrooms continues to change in tandem with the broader societal impacts of AI as the University aims to foster future leaders in creative and technological fields. Across all the schools at USC, carving out space in the syllabus to combine the frontier of technological innovation with a perspective of humanistic inquiry aims to prepare students for ethical advancements in AI.

“This is a moment that is about the politics of technology,” Ananny said. “And if we see it as an opportunity to better understand the politics of our technologies, then this can be a really productive, exciting, positive moment.”
