USC’s embrace of AI brings peril and promise
Artificial intelligence’s rise in higher education brings forward concerns of ethics.

Every day, artificial intelligence becomes a more visible part of university life: emails drafted by AI, readings summarized in seconds, assignments structured with AI-generated outlines.
And USC is diving in headfirst with new programs like the bachelor’s in AI for business, the University’s inaugural AI Summit, interim President Beong-Soo Kim’s AI Strategy Committee and the announcement that ChatGPT Edu will be available to all students, staff and faculty beginning next year.
While some see universities racing to embrace AI as a bid to keep pace with industry, others worry campuses are too eager to brand themselves as AI-forward without reckoning with what that means for learning, creativity and labor.
USC must pursue innovation without losing sight of the costs — striking a balance between embracing new technology and protecting the integrity of students’ academic and creative work.
USC’s integration of AI hasn’t come without turbulence. Like many universities, USC met ChatGPT’s initial release in 2023 with recommendations — issued by the Academic Senate’s Committee on Information Services in February 2023 — that professors either “embrace and enhance” AI usage or “discourage and detect” it, an increasingly futile distinction.
Student use of AI is already ubiquitous; pretending it isn't amounts to willful ignorance. Seemingly in recognition of this, the University has adopted a deliberately broad approach.
Marshall School of Business Dean Geoffrey Garrett, who chairs the President’s AI Strategy Committee, described the University’s philosophy as not everyone must build AI systems, but everyone will need to use them responsibly.
The University’s recent pivot introduces a new layer of concern: what USC’s partnership with OpenAI means for data privacy and student autonomy.
Platforms like ChatGPT may retain user prompts, questions and conversation histories, unless institutions negotiate strong privacy protections.
An October study by the Stanford Human-Centered Artificial Intelligence Institute, which surveyed six major AI developers, found that some tools store user chat data indefinitely and use it to improve model performance unless users or institutions opt out.
The United States Federal Trade Commission launched an inquiry into several AI chatbot companies in September to evaluate their data collection and privacy practices, specifically for children and teens, underscoring that the retention and use of conversational data is an emerging national issue. Higher education is not exempt from these risks.
As USC expands access, transparency must be non-negotiable. The University — not OpenAI — must disclose what ChatGPT collects, how long data is stored and whether student interactions will train future models.
These demands would strengthen the Instructor Guidelines issued by the Academic Senate in 2023, which require faculty to communicate expectations around AI use. Without updated, explicit policies governing data retention and student consent, the integration risks undermining both academic integrity and student and faculty trust.
From the School of Cinematic Arts to the Thornton School of Music to the Roski School of Art and Design, students who spoke with the Daily Trojan expressed concerns that AI-generated content blurs lines of originality, raises ethical questions about training data and threatens to devalue early-career creative workers.
Recent academic research shows generative AI may undermine the diversity of creative output: A 2025 meta-analysis published by Cornell University found that humans collaborating with AI produce more output, but the breadth and originality of their “ideas” shrink significantly.
Meanwhile, a 2025 study tracking artists’ job outcomes discovered that exposure to AI tools correlates with a slowdown in employment growth in creative fields — a red flag for job security. Some of this automation has already materialized: At least six AI or AI-assisted artists debuted on various Billboard charts this year.
These developments cast serious doubt on early optimism about AI’s role in democratizing art, suggesting that human creativity could be marginalized. For students and artists, this moment demands a principled, ethical framework that protects creative labor, promotes diversity, and ensures education remains a space for genuine intellectual and creative growth.
Students who study AI closely see enormous upsides, hoping it will offload the trivial and time-consuming tasks. Pia Atal, a sophomore majoring in AI for business, sees AI as a force for empowerment rather than displacement.
“Instead of taking away jobs, I think it’ll just increase productivity in the workplace,” Atal said. “What a team of 10 used to be able to do in a week, they might be able to do in a day.”
But Alia Chand, a junior majoring in art as well as communication, said these shifts can shrink opportunities for emerging artists. Chand, whose art concentrates on photography and relies on digital editing, described how commercial art opportunities — often an early-career lifeline — are shrinking.
“A lot of fine-art artists early on lean into commercial work to … support themselves,” Chand said. “That’s really difficult with AI, because a lot of those tasks are now automated and don’t actually require individual designers anymore. It just takes away another source of revenue and income.”
As AI enters the classroom, the question is not whether students will use it, but how it will shape intellectual development.
The University isn’t blindly introducing new AI partnerships without approaching the field holistically. Marshall recently partnered with the Knight Foundation to invest $4 million into ethical and human-centered AI research, highlighting the University’s awareness that AI carries social and academic risks.
These initiatives show that USC is attempting to build a research-informed approach, even as its practical implementation still lacks uniform guardrails.
USC has published a collection of guidelines and advisories — from the Academic Integrity Office, policies by school compiled by USC Libraries and resources offered by Information Technology Services — but the net result remains a patchwork. There is no universal policy covering AI use in all courses.
Dan Levy, co-author of the book “Teaching Effectively with ChatGPT,” wrote that “no learning occurs unless the brain is actively engaged in making meaning.” A completed assignment is not the educational outcome; the thinking required to produce it is.
This “shortcut,” which the Massachusetts Institute of Technology has identified as damaging to critical thinking, reaches far beyond mere convenience. It marks a concerning descent toward the wholesale loss of critical thinking in the one place where it is most essential: the university. MIT’s Media Lab warns that overreliance on AI for mentally demanding tasks may cause “cognitive atrophy,” reducing brain power over time.
Daya Asokan, a sophomore harp performance major, sees this risk daily.
“How are you learning things if something is learning it for you?” Asokan said. “It’s not helping you.”
Chand echoed this from a communication perspective: When AI-generated media increasingly resembles reality, students must work harder — not less — to think critically.
“It’s scary how you really can’t tell what’s reality anymore,” Chand said.
AI can be beneficial when used innovatively and intentionally, but using it as a substitute for the intellectual labor that college is designed to develop risks a shift toward anti-intellectualism. When students rely on AI to generate interpretations, solve problem sets or condense arguments they have not grappled with themselves, the incentive to think deeply diminishes.
This is the essence of anti-intellectualism: the erosion of curiosity, discipline and the expectation that learning requires effort. Higher education becomes less about engaging ideas and more about efficiently producing work.
In that sense, generative AI doesn’t create anti-intellectualism, but accelerates a trend already present in academia, one where speed is rewarded, surface knowledge is sufficient and reflection feels expendable.
Without intentional guardrails, higher education risks outsourcing the very cognitive labor it exists to cultivate.
The University is right to prepare students for an AI-driven world — ignoring the technology would be negligent. However, the true power of AI lies not in its availability but in how thoughtfully it is implemented in the classroom.
Right now, USC — like most universities — is experiencing growing pains. Professors scramble to embed AI detectors into their grading procedures even as the administration rolls AI tools into academic programming, a spectacle resembling a fight between two robots trying to determine which one is real.
Higher education has always paved the way for the future, and it has the opportunity to take a leading role again, providing key structure and guidance: transparency, not just access; literacy, not just efficiency; and creativity, not just convenience.
AI will shape the future of higher education. The question — still unanswered — is whether universities will shape AI with equal care.
Disclaimer: Alia Chand formerly served as a photo staffer at the Daily Trojan in Spring 2025. Chand is no longer affiliated with this paper.
The Daily Trojan Editorial Board is a group of diverse editors and staffers from the print Opinion section. The views of the Editorial Board do not reflect the Daily Trojan staff as a whole.
