What does a future with AI look like?

September 3, 2023
Filmed April 19, 2023

The 2022-23 academic year proved to be a pivotal period for artificial intelligence. 

In particular, ChatGPT — a chatbot rendition of OpenAI’s flagship Generative Pre-trained Transformer — has sent shockwaves through industry and academia since its release in late November. The large language model’s ability to produce almost anything — from entire essays to low-level code to instructions on making a radioactive dirty bomb — in mere seconds has raised concerns around the ethics and ramifications of using the tool, even as it becomes integrated at breakneck pace across social media platforms, businesses and search engines. 

USC made public its investments in the technology in March, announcing a new $10 million Center for Generative AI and Society. The University’s Academic Senate also released instructor guidelines that same month on the use of generative AI for academic work, suggesting two ways to approach tools like ChatGPT: “embrace and enhance” or “discourage and detect.” The decision to take either approach is ultimately at the discretion of each professor — and the University is, as of March, seeing a mix of both.

To better understand the sentiments of the USC student body surrounding AI, the Daily Trojan is debuting rounDTable, a series of open dialogues about issues concerning the University community and greater society. 

Three students participated in this inaugural episode, filmed April 19:

Leo Zhuang, a senior majoring in computer engineering and computer science, and a project lead at the USC Center for AI in Society’s Student Branch;

Max Wong, a senior majoring in business administration and assistant producer/project manager at DGene, an AI content creation company; 

and Cassius Palacio, a sophomore majoring in architecture and incoming intern for NBCUniversal’s Operations and Technology team. 

Director’s note: Information is accurate as of the time of filming. Zhuang and Wong have since graduated; Palacio is now a junior.

Their conversation, moderated by Spring 2023 opinion editor Helen Nguyen, has been edited for length and clarity. Participants’ opinions are their own and do not represent the views of the Daily Trojan.

— JONATHAN PARK, Summer/Fall 2023 digital managing editor & rounDTable director

There are 10 questions total. Can you identify how many are ChatGPT-generated?

Do you think you could recognize ChatGPT-generated text?

Leo Zhuang: For the most part, no — but if it was something more specific or focused in a certain domain, maybe.

Max Wong: In the majority of cases, no. Unless I was really looking out for it. But on a day-to-day basis, if I wasn’t prepped, probably not.

Cassius Palacio: I’m kind of dense when it comes to recognizing the difference between real text and ChatGPT. It all feels like it comes together.

Helen Nguyen: Do you think any of your classmates have used it?

All said yes.

MW: Absolutely, because they’ve come to me to check: “Hey, does this sound human?”

Raise your hand if you think that artificial intelligence will eventually surpass human intelligence.

Wong and Palacio raised their hands.

CP: I use things like Midjourney [an independent AI art generator] and ChatGPT simultaneously. It’s insane to let it come up with the ideas I thought I had, but, instead, it just goes even further than that. It just makes me feel a bit incompetent, but I still use it. If anything, it’s more like an innovative way of thinking, because I’m still building on the past.

MW: In many instances, it already has surpassed human intelligence. If you look at things like precision and accuracy, there’s no competition whatsoever. AI beats humans in every situation. I think that we’re actually on the defensive, and we look for things that humans could be better at than AI — more complex, creative tasks.

HN: Leo, you’re the only one of the three to not raise your hand.

LZ: AI is generally going to be trained on something that humans generate in the first place. So, in a lot of contexts, it can really only do as much as it is provided. There are examples of where it excels, but there are also examples where it does very poorly. For example, ChatGPT is very bad at math.

Do you think universities should implement the use of AI detectors to evaluate academic writing?

LZ: A very important part of academics is that students produce their own material for evaluation. If you just use a large language model, or something like ChatGPT, to do it, I don’t see that any differently than plagiarism.

MW: I couldn’t agree more. Another solution I’ve heard tossed around would be just going back to old-school timed writing; whatever needs to be done so that students are actually learning and are getting their money’s worth.

CP: I do think that ChatGPT can be used as a resource. There have been moments where my friends have used it just to understand what their topic sentence is, or to find out, “What am I missing from my paragraph? What can make it stronger?” But I do not support a full-fledged paper based on ChatGPT, because even when students do that, they get a really crappy paper. I wouldn’t really hold it against them wholly, but I think [using AI to write a whole paper is] dumb and redundant.

Do you think that AI should be regulated more to prevent misuse and to protect individuals’ privacy?

LZ: At the end of the day, [AI] should be viewed as a tool; and then, whether it’s used for good or bad, it should be regulated. With regards to privacy: I think there have to be some advancements and limitations that need to happen because, inherently, AI needs to have access to [personal] data in order for it to operate, depending on what you want it to do. There’s a fine line: Yes, you should be able to use the data, but, at the same time, you need to make sure you don’t leak anyone’s information.

MW: I completely agree: AI is a tool. And bad actors can use it, but I think the emphasis should really be on those bad actors — because they will just find other ways. People focus a lot on the actual technology versus the intent behind it. They’d be doing this regardless; they’re just getting it done faster and more effectively because they’re using these tools.

CP: Yes; I say this because I think of AI in ethical ways. When you feed the AI information on criminal records, most of the time, it’s going to be like, Black is bad, white is good. [We] need to regulate this to make sure that this doesn’t happen again.

Raise your hand if you think the development of AI will reduce job opportunities.

Zhuang and Wong raised their hands.

LZ: The idea of an AI takeover is somewhat exaggerated. However, things like cashiers or secretaries, which are very mundane, will disappear. On top of that, if you take a field like engineering, for example — more specifically, software engineering — programming is the monotonous part of that task. You have to do the engineering, but then to actually build your product, you have to program. And what AI can really do, especially those code generators, is just take away that back end of the process.

MW: I work at an AI company that works with [visual effects]; I’m watching the focus shift from the roles of VFX artists to how [the artists] can support the AI. So there’s opportunity for people in non-technical roles who can work with this technology to not just keep their jobs, but to benefit and to become more skilled. But, by that same token, the people who don’t adapt are going to get left behind, unless there is some regulation or greater access to being able to work with this kind of stuff.

CP: I don’t have full faith that every company out there is going to rely on AI to do their low-level tasks. Like the other participants were mentioning, there would be a small erasure, but there still needs to be somebody there to manage it. If you have AI running the show, you just can’t leave it there to fend for itself because, sometimes, it’s wrong.

LZ: Something that is often overlooked is who has access to AI. In general, if you want to use AI, you’re going to either have to build it in-house or use third-party software from some big company, where all the super powerful models are. If these large companies put it behind a paywall, then you’re going to probably see it adopted a lot less, generally. But if they make it a lot more open access, which is similar to what they have now, you will probably see it replace a lot more [jobs].

MW: ChatGPT is ultimately a good thing in terms of increasing accessibility and comfort. So many people are intimidated by artificial intelligence, and there are so many misconceptions. Any opportunity for the average person to just be able to play around with it, to understand how it works, is moving us in the right direction.

Do you think that AI will fundamentally change the way we interact with the world around us? If so, how?

LZ: Yes, but not as much as we think. A long time ago, you would have to write a letter to someone, then you could call someone on the phone, then there’s FaceTime. Technology could maybe one day advance where you might be able to just [use a] hologram. So AI will definitely change the way we can interact, but I don’t think it’ll fundamentally change it to where we don’t recognize it anymore.

MW: I guess that would depend on whether you’d include AI as part of the world or something that can take you away from it. If you don’t include AI, [there’s a good chance that] it’ll reduce the amount of time that we are spending with the world. But, by that same token, it might just fundamentally make us a little savvy, or it might give us more skills. I will say this: The amount of time we’re going to be spending talking to bots will increase. And I think it will, over time, become more fulfilling, more organic and more informational.

CP: Depends on the user. I’ve started hearing things about revenge porn, and just how messed up that is; if your face is out there already, they already got you. But, as a regular human in a law-abiding society, I think it would change the world a little bit — in the sense [that] you are a bit savvier — but you don’t really have the mental capacities that you did have before. If I plug in the right prompts, if I know what to say, then I’m not really critically thinking; I’m just thinking about what its response could be or how its response could enhance my answers.

There are some concerns that, based on the data and algorithms provided, AI systems will perpetuate existing biases and discrimination. Do you agree and do you think this can be mitigated?

LZ: That is very true: AI is trained on data, and we as humans have to provide that data. If you provide biased data, you’re going to get biased results. To alleviate that, you have to focus on the people aspect of it: Who’s training these models? Who’s providing the set data?

MW: It’s changed a lot in the last decade or two, but so much data and so much of the base of engineers come from very specific demographics and [geographic] areas. And I think that there’s such an emphasis on the output of this entire pipeline, but the focus needs to spread throughout.

CP: This can be mitigated; it just has to be done on a federal level, [rather] than having these big companies, like the FAANG companies [Facebook, Apple, Amazon, Netflix and Google], doing this. I know it’s not always smart to go the federal way to deal with certain things, but when it comes to resources or toolkits like AI, I feel like you need to be impartial. 

MW: I don’t understand how the feds could necessarily crack down on this; I mean, are they going to put handcuffs around the algorithm? [Laughter.]

I’m for regulation where it can be applied. But — maybe I’m too hopeful — I feel like this issue is one that is going to fix itself, because I just think the [public relations] scandals that come out of this are embarrassing enough for these corporations that they’ve put themselves into a position like, “Okay, we need to improve.” It’s up to us as citizens, and users of their products, to continue to press them to do better.

CP: That’s true, and I’m just going to double back a bit. I don’t mean, like, “I’m the government, and I’m going to make sure everything’s XYZ.” I just meant more laws in place to be like, is this data what it says it is? And yeah, PR is huge, especially when it comes to these big corporations. I mean, granted, I am a loyal Apple user, but, if my phone doesn’t work, I’m going to chuck it in the trash [and] get a new one from a different company.

LZ: It is a little difficult, sometimes, to make AI systems themselves unbiased. For example, if you wanted to do a Twitter sentiment analysis — depending on who [wrote the tweet] and when a tweet is made, the meaning of the tweet could mean very different things. And, if you think about building some kind of classifier to basically say, “Is this something that should be flagged or not?” — that water becomes very murky.

MW: This is one of my favorite parts of the discussion. Ultimately, this is something that is good for the whole world to talk about; these issues would exist, regardless of artificial intelligence, people would still have biases, people would still treat people differently, based on their color, creed, et cetera. And having something that is inherently not human, that you can’t put handcuffs around, to force this into the light and say, “Hey, you need to look at what you’re putting in, because this is what’s coming out” — it’s a really healthy thing for us.

Raise your hand if you think artificial intelligence is a tool that is equally accessible to all.

No one raised their hand.

MW: We’re doing better, but we’re not there yet. I still think that there’s a minority of people who are going to get much more out of AI than everyone else. Everyone can get something out of AI, but everyone could get a lot more.

CP: We have this issue called the great digital divide, and with that comes these issues of equity. To be on the receiving end of the digital divide is to simply be lost in the dark: At least from what I’ve observed, everyone needs a phone to navigate this country or to even navigate your surroundings, to learn, to be educated. And, if you are missing out on that, then how can you accelerate to other places where you need that [information]?

LZ: I have two things to add. One is: Even if everyone had the ability to connect to some kind of AI platform, the knowledge gap to use that AI is also pretty large. ChatGPT kind of brings that barrier of entry down, but I think it’s still very apparent. 

ChatGPT is mainly a language model; if you wanted to use AI for something else — let’s say you have a school project or your own small business — and you want some kind of custom AI software, a lot of big companies provide APIs [application programming interfaces], or abilities to use their software to create your own stuff. However, those resources can be expensive, as well as difficult to use.

The second thing: ChatGPT is free, but the shareholders are going to eventually want more money. And once you start introducing money into the equation — which I think is inevitable, because it is corporations that make these services — it’s going to become a lot less equitable.

CP: ChatGPT already [has] a subscription service as well; would you pay for it, even if the free model’s removed and there’s no other AI service?

MW: I’d ask you how much. [Laughter.] But yeah, it depends; let me see GPT-4 and I’ll get back to you.

LZ: I think that also depends on the atmosphere. Take streaming services, for example. These days, if you want to watch a movie, you have to have a subscription to some streaming service; besides 123movies [a Vietnam-based site hosting pirated films], there’s no other way you’re going to get it. It’s similar with AI: If industries tend towards using that as a tool, I think people, including myself, would be very compelled to get it — or might even have to get it — in order to stay competitive or relevant.

Do you think that ChatGPT-generated responses have merit as genuine or quality academic writing, equal to those written by human academics?

MW: As genuine? Absolutely not. But, as quality? Yeah, with refinement; give me 10 minutes and I’ll get it better and better and better, and I can almost guarantee that if I didn’t tell you, five out of 10 times, it’ll look pretty similar. But I don’t think it ever will be [genuine], because if you just treat it as another person [rather than] as AI, it’s just plagiarism. It’s ghostwriting, best case. If it doesn’t come from you, it’s not your writing.

CP: It is ghostwriting at best. I don’t think any student here active on USC’s beautiful campus is going to be like, “I used ChatGPT to write my paper” — at least not publicly. But, in instances where that has happened, they’ve always said, “Oh, no, this isn’t the final cut. I’m just making edits here and there; if this doesn’t make sense, I’m going to follow up with my own reasoning and words.” It’s more like plug and play. So, I would say it doesn’t have its own standing for academic merit, but it does hold some sort of merit when it comes to general structure and composition.

LZ: I think the conversation [about] AI and whether it’s genuine writing or not becomes a little too philosophical for my taste. There are always these questions about whether something is good or bad. When comics came out, they said it’ll rot your brains; you go further back, ancient philosophers said books will rot your brains. Many years down the line, I could somewhat see that it could be genuine, because it does, to some extent, rely on your input to produce content.

Do you think AI will disrupt social structures and institutions or improve them?

MW: Academia is an institution that we’re watching AI disrupt: Since Dec. 10 or whatever [Note: ChatGPT was announced Nov. 30], we’ve watched it [be] absolutely disrupted. Whether it has improved [academia] is yet to be seen. We’re living through that.

There’s some inequality in how much it’s affecting industry: For example, a whole variety of tech has been disrupted. When GPT-3 came out, all these startups popped up that were just integrations and APIs. Then [GPT-]3.5 got announced, and [GPT-]4 got announced, and now they’re all getting wiped out.

CP: In this country, we’re already divided; I don’t even know if AI being introduced to that table is the right way to go because everyone’s going to be stuck in their own bubble. USC is a bubble itself. And I do fear for it because, whether it’s wrong or not, someone’s going to have full faith in this toolkit. The computer says it’s right, so it has to be right. 

LZ: A very scary application that I thought of is deepfakes; these days, they’re getting very real. Most of the exposure [I get] is just me seeing [Barack] Obama and [Donald] Trump playing Minecraft on TikTok. But, the thing is, [in] that kind of content — the voices are so shockingly fluid and real. I could see, in maybe even five years, that people who have malicious intent could create fake content — the president or some political leader overseas saying things that they don’t say. And that goes back to, who do you trust? If it’s that real, who do you trust?

MW: I completely agree. I’d say that trust in information and how it’s received, and also how it’s talked about and interacted with, is an institution that has absolutely been damaged and could [get] so much worse. I work with deepfakes on a very high-end level; those tools are even scarier, and what they could be used for. Ninety-six percent of deepfakes are used without people’s consent. [Note: This statistic appears to refer to simulated pornographic content of women celebrities, according to a 2019 study.] Five years from now, that technology is going to be a lot more fluid.

Not just that: This is done, usually, by individuals. If you could put together a team and funding, [it] exponentially expands the amount of damage that you could cause. So, that has to be tracked. We have suffered [in terms of] credibility with anything in regards to AI because of this whole issue of trust.

CP: AI could get so ugly, so bad, because of things like this, and it is up to us to try and put it in check. But the way AI moves — it’s so fast already that I fear we’re just simply behind in the race.

In all, 10 questions were asked. How many do you think were written by ChatGPT?

Note: ChatGPT-generated responses may have been minimally altered, such as being rephrased as a “Raise your hand if …” question.







There were two; can you guess which ones?

1. Do you think you could recognize ChatGPT-generated text?

2. Raise your hand if you think that artificial intelligence will eventually surpass human intelligence.

3. Do you think universities should implement the use of AI detectors to evaluate academic writing?

4. Do you think that AI should be regulated more to prevent misuse and to protect individuals’ privacy?

5. Raise your hand if you think the development of AI will reduce job opportunities.

6. Do you think that AI will fundamentally change the way we interact with the world around us? If so, how?

7. There are some concerns that, based on the data and algorithms provided, AI systems will perpetuate existing biases and discrimination. Do you agree and do you think this can be mitigated?

8. Raise your hand if you think artificial intelligence is a tool that is equally accessible to all.

9. Do you think that ChatGPT-generated responses have merit as genuine or quality academic writing, equal to those written by human academics?

10. Do you think AI will disrupt social structures and institutions or improve them?

Directed by: Jonathan Park
Filming, equipment: Matthew Karatsu, Patrick Warren
Audio mixing: Matthew Karatsu
Moderator: Helen Nguyen
Editing: Quincy Bowie
Script: Jonathan Park, Helen Nguyen
Photos: Tomoki Chien
Thank you to:
Leo Zhuang
Max Wong
Cassius Palacio
Gordon Stables, director of the School of Journalism, for his advice and guidance
© University of Southern California/Daily Trojan. All rights reserved.