From The Soundboard: AI is poised to curate the soundtracks to our lives


Effren Villanueva/Daily Trojan

For many people, the first step to exercise is finding a good playlist. Exercisers turn to Spotify in search of high-adrenaline tracks to help them on the treadmill or at the barbells. Or, on date night, the host sets the mood by searching for “romance” playlists to play in the background while they prepare dinner. Sleep enthusiasts rave about the benefits of soundscape apps that play back terrestrial noises to lull users to sleep.

Mood- and environment-based playlisting is becoming ubiquitous. Listening to music is arguably becoming more of a background activity than one given undivided attention (such as film or video games), and people want sounds that supplement their daily lives. Instead of trying to find specific albums that conjure a particular ambiance, people spend time looking for playlists that align with the environment they are in or the activity they are engaging in.

Why do we need highly curated playlists, though? What if an entire platform did this in real time, without the need for human interaction? Adaptive music — music that “adapts” to a person’s setting — is on the horizon for the music industry and will have monumental implications for the way music is created and consumed. The premise of adaptive music is fairly simple: A user indicates to the software in some way (or the AI is smart enough to recognize on its own) that they are engaged in some activity or have transitioned from one activity to another, and music is created in real time to fit whatever they are doing.

For example, suppose someone is taking a quick 30-minute jog around the block and they open a “workout” application on their smartwatch or smartphone. The AI would use this information to immediately craft a soundscape that complements the energy of their run. After the workout, the music would change instantaneously to help the runner relax. Or, from greetings and cheese board appetizers to red wine and goodbyes, the music would adapt to a party host’s needs without the host ever having to interact with the software.
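At its core, that loop is just a mapping from “what the listener is doing” to “what the music should feel like.” Here is a minimal sketch of the idea in Python; the activity names, presets and function are hypothetical stand-ins, not any real product’s software:

```python
# A hypothetical adaptive-music loop: an activity signal comes in and the
# generator's settings shift to match. Names and presets are invented.

from dataclasses import dataclass

@dataclass
class MusicParams:
    tempo_bpm: int   # how fast the music feels
    energy: float    # 0.0 = ambient, 1.0 = high-adrenaline
    palette: str     # rough instrumentation choice

# Assumed activity-to-mood mapping; a real system would learn or refine this.
ACTIVITY_PRESETS = {
    "running":  MusicParams(tempo_bpm=170, energy=0.9, palette="driving synths"),
    "cooldown": MusicParams(tempo_bpm=80,  energy=0.3, palette="soft pads"),
    "dinner":   MusicParams(tempo_bpm=95,  energy=0.4, palette="warm acoustic"),
}

def on_activity_change(activity: str) -> MusicParams:
    """Called whenever the watch or phone reports a new activity."""
    # A real engine would crossfade toward the new settings rather than
    # cutting hard, so the listener never notices the seam.
    return ACTIVITY_PRESETS.get(activity, ACTIVITY_PRESETS["dinner"])

# The runner starts their jog, then finishes it:
print(on_activity_change("running"))   # high-tempo, high-energy settings
print(on_activity_change("cooldown"))  # the music relaxes with them
```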

Interestingly, the technology for something like this already exists and is already in use. Warner Music Group, one of today’s three major record labels, just signed a deal with AI start-up Endel, whose algorithm creates albums based on mood and environment and suggests them to users based on what they are doing. Similarly, start-up Weav Music takes existing songs and alters their BPMs based on the activity the user is engaged in.
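The tempo-matching half of that is surprisingly approachable arithmetic. The sketch below shows one way a service could stretch a song toward a runner’s step cadence; it is a guess at the general technique, not Weav’s actual code, and the clamp values are made up:

```python
# A rough guess at the tempo-matching arithmetic behind a service like Weav
# (not Weav's actual code): stretch a song so its beat lines up with the
# runner's step cadence, clamped so the track still sounds natural.

def playback_rate(song_bpm: float, cadence_spm: float,
                  min_rate: float = 0.75, max_rate: float = 1.25) -> float:
    """Return a time-stretch factor mapping the song's tempo to the runner's
    steps per minute, limited to a range that avoids obvious distortion."""
    rate = cadence_spm / song_bpm
    return max(min_rate, min(max_rate, rate))

# A 128 BPM track against two different running cadences:
print(round(playback_rate(128, 160), 2))  # 1.25 -> sped up, hits the clamp
print(round(playback_rate(128, 120), 2))  # 0.94 -> slowed down slightly
```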

This has far-reaching ramifications for the music and tech industries. Since major labels are beginning to sign contracts with adaptive music platforms, it is conceivable that they may one day own copyrights for AI neural nets instead of albums and singles created by humans. That would revitalize the recorded music business, keep labels from going under and open up myriad job opportunities for musicians. The software would definitely need human input in the early stages for things like basic music theory and mood detection; the neural net would need to understand chord progressions, cadence, melody and atmosphere. This provides ample opportunity for practicing musicians, music professors, musicologists and music engineers.
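To make that concrete, here is a toy example of the kind of music-theory knowledge a musician might hand such a system: a few common chord moves in a major key, written as a transition table a generator can walk. It is purely illustrative and not drawn from any label’s or start-up’s actual model.

```python
# A toy slice of hand-encoded music theory: common chord moves in a major
# key, written as a transition table a simple generator can walk.

import random

# Roman-numeral chords mapped to chords that commonly follow them.
TRANSITIONS = {
    "I":  ["IV", "V", "vi"],
    "ii": ["V"],
    "IV": ["I", "ii", "V"],
    "V":  ["I", "vi"],
    "vi": ["ii", "IV"],
}

def progression(length=8, start="I", seed=None):
    """Walk the table to produce a plausible chord progression."""
    rng = random.Random(seed)
    chords = [start]
    while len(chords) < length:
        chords.append(rng.choice(TRANSITIONS[chords[-1]]))
    return chords

# Prints a plausible eight-chord progression (deterministic for a fixed seed).
print(progression(seed=7))
```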

For tech giants like Apple, Google and Amazon, this frontier presents a major avenue for innovation. All three companies have their own music streaming services, as well as their own home speakers. Take Apple, for instance. The company’s music streaming service, Apple Music, is fully integrated with its home speaker, the HomePod. Imagine someone walking into their living room with their iPad, lying down on their couch, opening the Apple Books app and then saying, “Hey, Siri, play my Music IQ radio station.” The AI, in turn, would recognize that the user is in their living room, lying down on their couch with a book, and create music accordingly.
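Under the hood, that scenario boils down to collapsing a handful of context signals into a scene, then letting the scene pick the musical settings. The sketch below is entirely hypothetical (none of it is Apple’s actual API, and the signal names are invented), but it shows how little it takes to get from context to a musical decision:

```python
# A hypothetical sketch of the context-to-music step in that scenario.
# Nothing here is Apple's actual API; the signal names are invented
# stand-ins for whatever the devices really report.

def infer_scene(room: str, posture: str, foreground_app: str) -> str:
    """Collapse a few context signals into a single scene label."""
    if foreground_app == "Books" and posture == "lying down":
        return "reading"
    if room == "kitchen":
        return "cooking"
    return "default"

# Each scene picks the musical settings a generator (or radio station) uses.
SCENE_SETTINGS = {
    "reading": {"tempo_bpm": 60,  "dynamics": "quiet",  "vocals": False},
    "cooking": {"tempo_bpm": 105, "dynamics": "medium", "vocals": True},
    "default": {"tempo_bpm": 90,  "dynamics": "medium", "vocals": True},
}

# Lying on the couch with the Books app open resolves to calm, wordless music.
scene = infer_scene(room="living room", posture="lying down", foreground_app="Books")
print(scene, SCENE_SETTINGS[scene])
```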

AI is scary to many people, but it has a plethora of applications that can augment our everyday lives. Think about how often we find ourselves endlessly searching for music to fit a particular activity or mood. Why not use an AI that creates the exact sounds you are looking for, and does it all instantly? Adaptive music is already here, and it is only a matter of time before we are all engrossed in it.

Willard Givens is a sophomore writing about the music industry. His column, “From The Soundboard,” runs every other Monday.