Students are at the forefront of the AI ethics dilemma
Artificial intelligence has now broken security barriers, proving the urgency of intentionality moving forward.

As universities begin to embrace artificial intelligence, they owe it to their students to equip them with guidance for responsible and ethical use. As intermediaries between a product and its users, institutions have a responsibility to the consumer — in this case, students, faculty and staff — to disclose the risks associated with emerging technologies.
USC’s rollout of ChatGPT-5, Perplexity AI, Zoom AI Companion and similar tools like NotebookLM displayed the University’s optimistic stance toward AI in higher education. While researchers have achieved many feats using these tools, there has been minimal discussion of the risks we expose ourselves to by relying on them.
Such silence about the moral dilemma we face at the hands of AI speaks volumes amid the extreme cases making headlines today.
In an interview with the Daily Trojan, Elisa Warford, associate professor of technical communication practice at the Viterbi School of Engineering, said that her main concern in the realm of education is students’ relinquishment of critical thinking skills.
“It concerns me that they did just hand this over without much guidance, especially at a time when faculty … [are] trying to keep up and adjust our teaching and our assignments and there’s a whole range of faculty views,” Warford said.
A major concern is that students, and people at large, are doubting their own skills and crediting AI more than they should, deferring to a seemingly omnipotent entity. This phenomenon has been termed “automation bias.”
“People still need to develop a domain expertise so that even if you’re just going to oversee the AI, you still have [the] knowledge and judgment,” Warford said. “There’s a worry that the human would just be like, ‘Yeah, it’s fine.’ … We know that [Large Language Models] hallucinate and that they make mistakes.”
Key figures at tech companies like Anthropic and OpenAI have openly expressed concerns about the fallibility of AI tools — especially in situations where even a minor error could mean the difference between life and death.
Secretary of Defense Pete Hegseth was so eager to get the government’s hands on AI for military purposes that he went as far as giving Anthropic’s CEO a deadline to open its technology up to unrestricted government use. In doing so, the military disregarded developers’ warnings about the error-prone state of AI tools, which renders them ill-suited for high-risk assignments such as weapons deployment.
AI ethics deserves equal consideration from all institutions that employ these tools and from the individuals who comprise those institutions.
When AI companies themselves warn of the hallucinations and imperfections of these tools, we should take their word for it. Even top researchers struggle to understand exactly how AI works, meaning the general public is not even close to comprehending it.
We must remind ourselves that it’s precisely because AI systems are opaque and not understood by the average user that they entice us. This, however, does not mean that AI tools can be relied on to a greater extent than we rely on our own skills.
The clash between Anthropic and the Pentagon is a prime example of a tendency to quickly accept the risks of imperfection in AI because we have the privilege of being shielded from the consequences.
Students are now at a unique advantage, however, in that we are learning the ropes of AI and exploring our careers simultaneously. If we choose to, we can shape our careers to align with this changing landscape.
But with that, we must remind ourselves that AI has been and always will be a mere tool to help us, not an independent entity that can function without human oversight.
Yuval Noah Harari, historian and bestselling author of “Sapiens,” said that AI doesn’t have a self-correcting mechanism in the way that humans do. Constrained by the information that already exists, AI can’t correct its own mistakes without human intelligence to identify the flaw.
So, at this crossroads of attitudes toward AI, what’s critical to remember is that artificial and human intelligence will always be inextricable. Our own ethics with regard to AI use are the most important consideration in paving the path forward.
