
AI and Me and You and The Beatles

18 May 2020

On May 7th, I received an email from my friend Ryan Walsh asking me if I had any thoughts on this article, for a piece he was writing for Stereogum.

The good news was that I definitely had thoughts; the bad news was that I had ten-jillion of them, which, of course, is just too many thoughts for a single email. I wrote back to him asking if, in an effort to focus my response, he could get a bit more specific, which prompted him to send back this pointed two-parter:

Well, as someone who earns his living making the precise kind of music that A.I. threatens to replace first, I wanted to know a) your impression of the newest batch of music from OpenAI (beatles song in particular would be enough) b) if you think the creation and sale of AI music will affect your living.

I sent him back a focused-for-me response, which resulted in my brief appearance in the published article.

I’m posting my full response here because, a few hours after the idea popped into my mind to do it, I received an email from Ryan that, among other things, said, “And your entire explanation was fantastic, you should publish it in full on your blog if that catches yr fancy.”

It wasn’t until today that my fancy was definitively and unambiguously caught.

Note: The text here differs from the email I sent to Ryan in the way that one tends to change text around when it’s going to be displayed to the entire world instead of merely being mined for a quote or two by a friend.

AND WE’RE OFF!

a) your impression of the newest batch of music from OpenAI (beatles song in particular would be enough)

Right, so, I’m at the intersection of writing, like, “arty” personal music, being a professional composer for media, and being a Beatles super-fan.

It’s clear that, throughout history, the artists pushing the medium of music forward did so, in part, by attempting to wield the technologies of their time. John Cage incorporated radios into his pieces, Wendy Carlos was an early adopter of analog synthesizers, hip-hop pioneers like Grandmaster Flash pushed turntables and samplers to their absolute limit in service of defining a new sound, Daphne Oram and Delia Derbyshire, over at the BBC, compiled a veritable encyclopedia of sounds that would define an entire generation, Steve Reich’s “Aha!” moment emerged from a tape-loop experiment, and poor Les Paul did his best to realize his intensely forward-looking ideas by bending the primitive tools of his time as far as they’d go.

The Beatles and George Martin, of course, were exceptionally visible experimenters—to the point that a track like “Revolution 9,” to some people, might not sound too far off from a piece of music generated by a contemporary AI.

With regard to the track that you sent over, does it “sound like” The Beatles? No, it doesn’t sound like The Beatles. Does it contain attendant properties or variables that somehow nebulously evoke The Beatles? Absolutely.

Here’s a story: In a former life, I was an audio transcriptionist, and at one point, over a weeks-long period, I had to transcribe a meeting of executives from [what I’ll call] “a popular, American fast food chain.” The dialogue was mostly unremarkable, but I did learn the interesting fact that this company has thirteen unique indicators that serve to make it immediately identifiable to any consumer. Like, if you walk into one of their restaurants in Santa Fe, Paris, Dubai, Lagos, or Osaka—regardless of who you are and despite all of the culture-specific changes and designs—you’ll know that you’re in one of their stores because of these thirteen things.

Now, as a composer for media, one of my jobs is to be able to listen to a piece of music that a client likes, determine what elements make the piece “the piece,” and then weave those elements into a distinct, new track. With regard to The Beatles specifically, anyone who’s listening closely enough can easily come up with a bunch of elements/decisions that make the Beatles’ music “Beatles music” and not, say, the music of Minor Threat. This isn’t really a bold claim or a flex of any kind—it’s just the fact of the matter. Like, I don’t really know how to write in the style of Shakespeare, but if I were hired to do my best, I’d start by reading about whatever “iambic pentameter” is, and I’d try to stay away from phrases like “solar panel,” “Costco’s televisions,” and “GitHub repository.” I dive a bit deeper, and more organically, into this sort of thinking in this Beatles-analysis-y blog post.

All of that said, I’m no Dominic Pedler.

On the Shakespeare tip, consider this foreshadowing: Whatever I’m taking into account when attempting to write in his style, it’s clear that there are trained specialists who would do it forty-billion times better than me.

I should also note that some clients request music that I don’t feel comfortable attempting to approximate or emulate due to a number of factors including, but not limited to, (1) not rolling extra-deep with a given micro-genre (some very, very specialized electronic music might fall into this category) or (2) not having a touchpoint with the culture that produces the music in question and, therefore, not having a solid theory-of-mind with regard to how it works. These are things that contemporary AIs, I’m guessing, don’t tend to bump up against.

Part of the problem in attempting to think some of these things through—for me, at least—is that, for a group like The Beatles, AI is a work in progress. If you look at AIs that have been asked to tackle, say, the music of J.S. Bach, it’s a bit of a different story. I’m not a computer scientist*, of course, but it’s clear that if we want to get close to emulating the chorale works of Bach, we could simply pick an organ sound, input the rules of tonal harmony, feed the AI a bunch of Bach’s chorale harmonizations, and be off to the races. The differences between the extremes of Bach’s chorale writing, in both harmony and timbre, simply aren’t as vast as the differences between, say, the crescendo/build-up-y part of “A Day in the Life,” “Ob-La-Di, Ob-La-Da,” “Flying,” and “The Inner Light.”
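
For a rough sense of what “feed the AI a bunch of chorales” can look like at its absolute simplest, here’s a toy sketch in Python using the music21 library (which ships with the Bach chorale corpus). The Markov-chain approach is my own illustrative stand-in, not a description of how any actual system, OpenAI’s or otherwise, works:

```python
# Toy sketch: a first-order Markov chain over chord progressions, trained
# on music21's built-in Bach chorale corpus. Illustrative only.
import random
from collections import defaultdict

from music21 import corpus  # pip install music21


def train_chord_markov(max_chorales=20):
    """Count chord-to-chord transitions across a handful of chorales."""
    transitions = defaultdict(list)
    for i, chorale in enumerate(corpus.chorales.Iterator()):
        if i >= max_chorales:
            break
        # Collapse the four voices into a single stream of chords.
        chords = chorale.chordify().recurse().getElementsByClass('Chord')
        names = [c.pitchedCommonName for c in chords]
        for prev, nxt in zip(names, names[1:]):
            transitions[prev].append(nxt)
    return transitions


def generate(transitions, length=8):
    """Random-walk the transition table to 'compose' a chord sequence."""
    state = random.choice(list(transitions))
    out = [state]
    for _ in range(length - 1):
        state = random.choice(transitions.get(state) or list(transitions))
        out.append(state)
    return out


if __name__ == '__main__':
    table = train_chord_markov()
    print(' -> '.join(generate(table)))
```

Even this kindergarten-level model spits out progressions that are recognizably “chorale-shaped,” which is sort of the point: the style is constrained enough that crude statistics get you surprisingly far.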

It should be noted that this analysis of mine is a hair disingenuous because I’m comparing a small slice of Bach’s music to the entire Beatles canon. On the other hand, arguably, while the music of the Beatles can be sliced and diced into all kinds of sensibly-organized data sets, I’m not entirely certain that a reasonable analog to Bach’s chorale writing exists. Dividing it up by instrument doesn’t get you very far, given how much the band’s sounds changed. Dividing it up by period or album doesn’t really do it either—the 1965 album Help!, alone, still gives you “You’ve Got to Hide Your Love Away,” “I Need You,” and “Dizzy Miss Lizzy” to reconcile with each other. Again, the problem isn’t that there aren’t harmonic, melodic, timbral, and textural themes that run through all of this music. The problem comes when asking an AI in the year 2020 to find them.

Also, while it’s not an AI, it’s worth noting that we’ve been attempting to use computers to approximate the sound of The Beatles for a while now with products like EastWest’s Fab Four.

All in all, while I don’t think that we should blindly “go all in” the minute a shiny, new thing presents itself, I do think that we should embrace the technology of our time and insert ourselves into the dialogue it’s attempting to have with the things we already know.

To that point, I invite you to check out the late Matt Marks’s orchestral arrangement of “Revolution 9.”

In conclusion, I hereby state my official impression of The Beatles AI thing to be: “Whoa! Cool!”

b) if you think the creation and sale of AI music will affect your living.

Note: The following isn’t a complaint—I’m mostly a happy-go-lucky person who’s just honored to be here. It is, however, what I [currently] feel is true (Sub-note: Joel Roston’s feelings and beliefs are subject to change at any moment without prior warning or post-change update):

I think the main threat to my livelihood is a structural/human problem, which, while perhaps intensified by technology, isn’t caused by it.

Essentially, as far as I can tell, the way that the livelihood of someone like me is being affected by our changing world is through a lack of specialization. In places like the ad and podcast worlds (not so much the documentary film world†, but it’s happening a bit there too), the people who are tasked with finding and placing music have no real specialized understanding of music. This, in itself, isn’t a bad thing; as we know, everyone is a unique, special flower with their own interests, desires, and artistic inclinations, which is part of the reason why many of the sections of our world that are wonderful are so wonderful.

How this plays out for a media composer, though, is complex.

Many non-musicians who aren’t used to listening to certain types of music, or to listening to music in certain ways, tend to converge on music that isn’t difficult for them to understand or talk about. This isn’t specific to music, of course (for instance, I imagine that I’m this way about everything in the world that’s not music, and I feel bad for the graphic designers, cooks, and other specialists who may have to fulfill personal requests from me). Still, this is why our society is currently bottlenecking into a sound universe made up almost entirely of airy, amorphous, swirling, impressionistic, overtly-unstructured music. It’s easy to understand, it’s easy to talk about, and it “works” under most media content.

If you look at the podcast scene, for instance, while there are for sure shows and production companies using music to great effect, the fact of the matter is that most shows (out of, last I checked, some nine hundred thousand) are one- or two-person operations put together by people who are, more often than not, incredible journalists, researchers, thinkers, narrative-weavers, and storytellers, but who’ve never, in their entire lives, had to think about how to present their ideas in an audio format. Hiring a composer, creating a musical brand for their show, and thematically providing continuity from cue to cue/segment to segment isn’t what’s on these people’s minds (or in their budgets).

With regard to AI specifically—as with stock music libraries before it—I think it’s simply another way technology will make it easier for non-musicians to find and place music.

Lastly—and I say this as a person who leads workshops for producers and media professionals on how to listen to and talk about music—I’d guess that, at least in the near term, producers would be more likely to use something like a stock music library than to attempt to create a new track armed with some sort of AI-powered creative interface. It’s simply less overwhelming for non-musicians to say “Yes” or “No” to a pre-existing piece of music than to have to dicker with parameters in an attempt to fully realize a piece of music based on subjective linguistic signifiers, tempo/instrument sliders, and/or whatever else. It’s the same reason that I’d prefer to taste a little cheddar and a little Swiss to decide which one I want on my sandwich, rather than have to move a slider around in a way that affects the pasteurization process in an effort to make my perfect cheese. I barely even understand the pasteurization process.

The fact that some companies are using AI to generate pieces of music in an effort to build their own stock libraries doesn’t really disrupt the model too much from my perspective—these are just more music libraries among the thousands that already exist. Now, if someone develops an AI that allows a producer to simply input a twenty-five-minute podcast episode, click a single button, and, through sentiment and other analyses, receive a link to a perfectly scored and sound-designed piece, I might be a little worried. Even then, though, it would have to be perfectly scored for any given individual producer, which, on the one hand, seems like a tall order and, on the other hand, seems like it’d just result in more of the minimal, easy-to-understand/talk-about music I mentioned above.
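
To make the shape of that hypothetical concrete, here’s a deliberately crude Python sketch. The cue filenames, mood buckets, and word lists are all invented for illustration; a real product would use an actual sentiment model rather than keyword counting, and, again, no such product exists as far as I know:

```python
# Toy version of the hypothetical "one-button scoring" product: crudely
# estimate a transcript's sentiment, then pick a cue from a made-up stock
# library. Every name here is invented for illustration.

# Hypothetical mini stock library, keyed by mood.
LIBRARY = {
    "upbeat": "cue_042_bright_pizzicato.wav",
    "neutral": "cue_101_airy_pads.wav",
    "somber": "cue_077_slow_piano.wav",
}

POSITIVE = {"great", "love", "win", "happy", "hope"}
NEGATIVE = {"loss", "sad", "fear", "fail", "grim"}


def naive_sentiment(transcript: str) -> float:
    """Lexicon-based sentiment: (positive hits - negative hits) / word count."""
    words = transcript.lower().split()
    hits = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return hits / max(len(words), 1)


def pick_cue(transcript: str) -> str:
    """Map the sentiment score to one of three moods and return its cue."""
    score = naive_sentiment(transcript)
    mood = "upbeat" if score > 0.01 else "somber" if score < -0.01 else "neutral"
    return LIBRARY[mood]


print(pick_cue("We love this story and hope it ends in a win"))
# -> cue_042_bright_pizzicato.wav
```

Note that even in this cartoon version, the output converges on a small set of safe, pre-bucketed moods, which is exactly the easy-to-talk-about music problem described above.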

In conclusion, I hereby state my official impression of AI and how it relates to my career to be: “It’s probably aiding in making me irrelevant, but, also, it’s helping a lot of people make stuff they otherwise might not have made AND it’s super cool.”

Joel

*Though I did ask two deeply accomplished classical musicians, who have advanced degrees in computer science and electrical engineering from MIT and work in computer-music-related fields, to look over this paragraph, and they gave me their blessing(s).

†I feel like the reason the documentary film world is currently somewhat outside of this set-up is primarily that the tools and processes of filmmaking are much more inaccessible and daunting to laypeople than the tools and processes of audio production, so there’s still a sort of academic feeling to it. For the most part, the people making these films are “classically” trained in the techniques of their craft and more likely to view music and the role of a composer in a somewhat traditional way. Put another way, if you were to walk in on a professional documentary being made, it would most likely look like whatever “people making a film” looks like in your mind. If you were to walk in on a professional podcast being made, it would most likely look like a person sitting in their closet under a bunch of blankets with a Zoom recorder and/or a laptop. Again, nothing about anything I’m saying here should be taken as a value judgement. Yes, I’m a professional composer, but it’s not like I’m typing this from my million-dollar, world-class recording studio.
