NATURAL INTELLIGENCE

What the heck are the AI ethicists up to?

If you’re interested in having a career that is ethically meaningful, technologically inclined, potentially lucrative and intellectually involved, this article is for you!

By VICTORIA FRANK

As it turns out, interviewing professional artificial intelligence ethicists sucked me into hours-long, meandering philosophical conversations. At first, I was slightly perplexed: I have a column to write, and interviewing sources is usually fairly straightforward.

Upon second thought, I found my predicament comical. What else could I have expected from people whose careers hinge on their talent for deep inquiry?




Morgan Sutherland grew up on the island of Nantucket, Massachusetts. At a young age, he felt stifled by the tight confines of his small-town environment and sought out something bigger and more meaningful, which led him to a blend of philosophy, psychology, art, technology and history. Mixed together, the outcome was obvious: the rich, unharvested field of artificial intelligence.

After a decade of networking, freelancing, consulting and gaining interdisciplinary experience, Sutherland landed a six-month research project with OpenAI, working in direct collaboration with the company’s leadership. His team rode a wave of interest in fine-tuning AI models for sentence completion and in reconciling value conflicts in ways that promote a net positive impact on humanity.

Another thing I quickly learned: the title “AI ethicist” does not cleanly encapsulate the profession I’ve been so keen on uncovering. 

Rather, there are three general terms to know — AI ethics, AI safety and AI alignment. Each subfield attracts its own people and its own ideological bent. For example, AI ethics workers tend to have a humanities background and focus more on issues of diversity, accessibility and socioeconomic politics, whereas AI safety concerns things like the structural and psychosocial risks of humans interacting with this technology at large. AI alignment, roughly, is the effort to ensure that AI systems pursue goals consistent with human values and intentions.

Sutherland, with an ever-observing eye, laid out the anthropological landscape of these disparate groups. “These are fields, but they’re also scenes. There’s AI safety people and an AI safety scene, and they all party together, they all know each other, and they all read the same stuff.” 

I’m tempted to paint a fantasy of intellectual elites in the northwest corner of the United States meeting in skyrise gardens for their weekly book groups, arguing over drinks at a house party about which tech startup is most likely to accidentally start the apocalypse. 

After all, OpenAI’s mission statement is “to ensure that artificial general intelligence—AI systems that are generally smarter than humans—benefits all of humanity.” 

At this point, the technology we’re experimenting with is desperately close to science fiction. 

The biggest companies in our current global economy are puppeteering godlike creations. There is an undeniable demand for level-headed, well-read, morally in-touch people to advise these companies in the right direction — though “right” itself is an incredibly subjective idea. 

While the position does involve heavy doses of personal creativity, the workplace realities persist. When working under a company, even as an ethicist, you are still a part of an organization that requires awareness of its brand positioning and company culture.

“When you join a corporation, there’s other people and incentives that you’re participating in. If you want the company to publish something very critical about itself, it’s a bit idealistic to think that they’re gonna do that without asking any questions,” Sutherland explained.


In practice, this means finding allied colleagues to back up your research and working well with company-specific editors.

Another layer of complexity: ethical conundrums already cut across competing schools of thought, and different companies prioritize different approaches on top of that. Before committing to a position, I recommend researching the ethos of your potential employer. Two ethicists can have starkly contrasting jobs depending on where they work.

On a lighter note, amid a shapeshifting job market, I’ve learned that it’s possible to make an entire career out of debating prevalent issues like the nature of evil, what makes good art, realism versus idealism, money and power, corporate responsibility, psychedelic countercultures and more.

I’m excited about this slice of the future. It gets me out of bed and moves my fingers to write, which is not something to take for granted. With the future so up in the air, I foresee a wide-scale refocusing of what we deem valuable. 

Sutherland acknowledged that the industry is moving fast, but he cautioned against interpreting the intimidating pace as disempowering. “What I saw inside is that one brilliant person could change the course of history right now. So it’s possible to do something. Whether it will happen or not is up to us.”

Victoria Frank is a junior writing about the inevitable AI future with a focus on ethics and well-being. Her column, “Natural Intelligence,” runs every other Friday.

© University of Southern California/Daily Trojan. All rights reserved.