Jenny Wei is a composer from Dallas, Texas. At Stanford University, where she is an undergraduate, Wei is under the tutelage of Dr François Rose. She has previously studied composition with Anny Hong and Yiming Wu. Wei has received awards from the National Young Composers Challenge (2021), Luna Composition Lab Fellows Program (2021), MTNA National Composition Competition (2022), and The American Prize Competition (2022). She aims to illustrate stories and inspire joy through her music.
Jenny Wei: Ebb and Flow (2022) for piano and violin (excerpt).
Luna Lab Reading Session by Decoda in April 2022:
Our interview on 4 January 2024 focused on how new technologies can be used to compose, perform, and enhance classical music. We also discussed whether technology would limit innovation or revitalize it, and the possibilities for cross-pollination between classical music and 'synthetic' or 'electronic' music.
Listen — Jenny Wei: The Whims of Water (2023), for string quartet
(Excerpt from computer audio generated with NotePerformer):
Jeffrey Neil: What are the various kinds of computer-assisted composition software or hardware, and what do they do?
Jenny Wei: A lot of the standard technology that's being used in composition right now is notation software. Some of these might be familiar, like MuseScore, which is really popular. I use one called Sibelius. I know some people like to write their music by hand and then transcribe it, but I write directly on the platform. It's super nice because you input all your notes, and it's clean. You can also use the playback feature, so you can listen to your composition as you write, which is really great, especially when writing for a larger ensemble, when you can't necessarily play each instrument or hear how it sounds played by live musicians. I would say that notation software and Digital Audio Workstations, or DAWs, are probably the most common pieces of software that people use in music.
Some examples of DAWs are Logic or Ableton. A lot of producers use DAWs to write their music. The difference between DAWs and notation software is that you're not inputting each note on a music score; you're taking a library of sound sets and putting them together to make your tracks. For pop music, that's really popular. I know some classical composers use DAWs to generate a sound file of their pieces, which you can do with notation software too, but the audio is pretty robotic. With DAWs, you're able to use sound sets from actual orchestras that sound realistic. So, that's how they generate more realistic-sounding versions of their pieces.
Then, on the generative music side, I don't think there are any tools that are particularly mainstream or as popular as the software I mentioned. I know companies such as OpenAI have been working on generative music tools. Similar to OpenAI's DALL-E or ChatGPT, they've been working on a music one called Jukebox. There's a website where you can listen to some sample AI-generated tracks.
Another category is algorithmic music composition tools. Those are where you use programming languages. The one that pops into my mind is called ChucK, but essentially it is using code to generate music, and that one's more similar to the generated music that we think of when we think of AI music.
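To give a flavour of what 'using code to generate music' can mean, here is an illustrative sketch in Python rather than ChucK (ChucK is a real-time audio language; this toy, with its invented scale, random walk, and function names, only demonstrates the core idea that an algorithm, not a human, picks the notes):

```python
import random

# A C major scale as MIDI note numbers, C4 (middle C) up to C5.
C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]

def random_walk_melody(length, seed=0):
    """Compose a melody by stepping randomly up or down through a scale."""
    rng = random.Random(seed)
    idx = 0
    melody = []
    for _ in range(length):
        melody.append(C_MAJOR[idx])
        # Move one scale degree up or down, clamped to the scale's range.
        idx = max(0, min(len(C_MAJOR) - 1, idx + rng.choice([-1, 1])))
    return melody

def midi_to_freq(note):
    """Equal-temperament frequency: A4 (MIDI 69) = 440 Hz."""
    return 440.0 * 2 ** ((note - 69) / 12)

print(random_walk_melody(8))          # eight MIDI pitches from the walk
print(round(midi_to_freq(69)))        # 440
```

In a language like ChucK, the same kind of rule would drive an oscillator and be heard in real time; the algorithmic idea is identical.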
JN: For AI to generate music, would you input a musical clip, the musical notation, or code?
JW: I'm not super familiar with it, but I think with the one being released by OpenAI, you give it a prompt, like 'I want a pop score in the theme of Christmas music', and it generates music based on that. You input a prompt that's maybe a sentence or two, and then it analyzes that and produces something similar to what you requested. If you don't like it, you can always fine-tune it by requesting something more specific. It has knowledge of your previous requests, so it modifies them and generates something that's more similar to what you want.
Many of these models are still undergoing a lot of development. The versions that have been released are beta versions where they show you some samples, but it's not anything like a full-fledged tool, unlike ChatGPT. Because they are still undergoing development, the samples that they give you are pretty limited.
JN: How can AI be used in the composition of classical music?
JW: The generative aspect of AI can be used to develop sound samples, which isn't particularly something that you use in classical music. That's more in the realm of pop music or electronic music. I've seen videos where people try to use AI to generate a four-part melody or something like that. And so I think you could probably use AI-generated melodies and incorporate those into your music. I do feel like generative AI has more application in more modern types of music than classical, though - things like producing beats or electronic music.
JN: What is the difference between AI and generative AI?
JW: Generative AI is a subset of AI. I'm not too sure about the exact technical definition, but it refers to AI that is basically trained on a lot of existing data, and then you use it to generate new materials. So similar to how ChatGPT is trained on a lot of resources off the internet or books. Now when you ask it a question, it'll generate a text for you. In the same vein, generative AI will do so in the context of music it's trained on: maybe a bunch of scores that people have written or sound samples, and then when you give it a prompt it will generate a new piece of music that's based on what it's been trained off of.
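The 'trained on existing material, generates similar new material' idea can be sketched in miniature. This is an illustrative toy, not how OpenAI's models actually work: a first-order Markov chain that learns which note follows which in a short melody, then composes by sampling those learned transitions:

```python
import random
from collections import defaultdict

def train(melody):
    """Record which note follows each note in the training melody."""
    transitions = defaultdict(list)
    for a, b in zip(melody, melody[1:]):
        transitions[a].append(b)
    return transitions

def generate(transitions, start, length, seed=0):
    """Compose a new melody by sampling the learned transitions."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        choices = transitions.get(out[-1])
        if not choices:
            break  # dead end: the training data never continued this note
        out.append(rng.choice(choices))
    return out

corpus = ['C', 'D', 'E', 'C', 'D', 'G', 'E', 'C']
model = train(corpus)
print(generate(model, 'C', 8))
```

The output is new, but every step in it was seen in the training melody, which is exactly the point Wei makes: a generative model's material always resembles what it was trained on.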
JN: How can AI be used to perform classical music, if at all?
JW: I think AI has a lot of potential in adding a visual aspect to performance. There's actually something that the Stanford Music Department is working on. So we have something called CCRMA [pronounced karma], which is the Center for Computer Research in Music and Acoustics. They run the Laptop Orchestra, which is an orchestra that's made out of these electronic instruments, essentially laptops and then some amplifiers. I was looking at one of the concerts and they have a visual component where the music is being played from the laptop and the amplifiers. They have this visual background that goes with the music. So, I'd imagine that AI can be used to create some sort of visual representation of sound. That's one aspect.
JN: What is the visual background that you're talking about?
JW: The piece I'm thinking about was ... the theme was a desert, and so they had this huge screen behind all the performers. It was this moving animation of the desert. And then as a beat was played a little visual would pop up on the screen, like a tree blowing, or something like that. You could also add an essence of randomness into a performance where you have your orchestra performing a piece, but you could have some sort of AI model generating a beat or new melodic material as the orchestra's performing. I don't know if anyone's done that yet, but I imagine that's something that people might try to consider with AI and music performance.
JN: So, it's like a spontaneous embellishment.
JW: Yeah. I think that would probably be what it would look like ...
Listen — Jenny Wei: Flight Through the Night, 2021, for large ensemble
(excerpt, generated using NotePerformer):
JN: Are compositions written with the help of composition software meant to be performed electronically or by instrumentalists or both?
JW: Definitely both. For example, a lot of the music that I write with notation software is meant for instrumentalists. I think most - if not all - modern composers nowadays use notation software, just because it gives you a really nice platform to notate everything, and then you can automatically generate printed scores to give to actual performers. DAWs, as I mentioned, are more commonly used for electronic music. I know a large subset of it is EDM or beats. Those are all fully electronic, and so those kinds of music experiment more with sound sets that aren't necessarily reproducible on an acoustic instrument - synthesizers, for example. Those might be pretty common in music made with DAWs.
JN: Does computer assisted composition open up the possibility for more people to make their musical ideas a reality?
JW: Yeah, I would say so, for sure, with traditional software. I know a lot of people who are just getting into composition, and they use free software, like MuseScore. It's a free, open-source platform for music scoring. A lot of people who are first getting into composition like to just notate on there because it's super easy: you just have to input the notes that you want. Then, a piece of hardware that comes to mind is a MIDI cable, which you can plug into a keyboard and then into your laptop, and even if you know nothing about music notation, it transcribes the notes you're playing on the keyboard into music software. That's something that I think has increased accessibility in composition, because it allows people who don't know how to notate to still be able to write music. Same thing with DAWs, because you're not necessarily working with precise musical notation; you can just drag and drop sound files, and so I think that also broadens the number of people who are able to use the platform to make music. Then with the generative aspect of music technology, I'd imagine that people who don't really have any sort of music experience can still create the music that they want. For example, if game developers need some music for their game, with a generative model they could just input: 'I want spooky music' or 'adventurous music', and then take the music from that and incorporate it into whatever they need.
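As a rough idea of what travels over that MIDI cable: the byte stream below is made up for illustration, but the three-byte note-on message (status byte, pitch, velocity) is the actual MIDI 1.0 convention, and decoding messages like these is all the transcription software has to do before placing notes on a staff (note names here follow the common convention where middle C, MIDI 60, is C4):

```python
NOTE_NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']

def note_name(pitch):
    """Convert a MIDI pitch number to a name, e.g. 60 -> 'C4'."""
    return f"{NOTE_NAMES[pitch % 12]}{pitch // 12 - 1}"

def parse_note_ons(data):
    """Pull the audible note-on messages out of a raw MIDI byte stream."""
    notes = []
    for i in range(0, len(data) - 2, 3):
        status, pitch, velocity = data[i], data[i + 1], data[i + 2]
        # 0x90-0x9F is note-on; velocity 0 is treated as a note-off.
        if status & 0xF0 == 0x90 and velocity > 0:
            notes.append(note_name(pitch))
    return notes

# A C major chord (middle C, E4, G4) played on channel 1:
stream = bytes([0x90, 60, 100, 0x90, 64, 100, 0x90, 67, 100])
print(parse_note_ons(stream))  # ['C4', 'E4', 'G4']
```

Real MIDI streams interleave note-offs, timing, and controller data, but the principle is the same: the keyboard sends pitches as numbers, and software turns those numbers into notation.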
JN: What is your composition process? What inspires you? How do you start? What steps do you take?
JW: When I'm starting a new piece, I like to brainstorm things I might want to write about. For example, with the piece that I'm working on right now, something that I thought was really cool was a planetarium I went to back in the summer, and so I wanted to write a piece with the theme of an adventure through the universe. I start with a theme in mind, and then once I have a theme, I usually start thinking about the form of my piece, how I want to structure it, and I consider some constraints that I might have. For this piece, I was constrained to write something between five and six minutes, which limits how I can structure the form. I work on that, and then once I have an idea of how I want different parts of my piece to fit together, I start writing sketches for different parts. I take my sketches to my professor, and we talk about things that could be improved, whether it's technique or style, and then I go back and continue working on sketches. Once I feel like I have enough sketches to have a general idea of how I want to write the piece, I start writing it. Usually I like to go from beginning to end, inserting material from my sketches. I think pretty non-linearly at first, but then when I have the pieces in mind, it's a linear process.
JN: When you were inspired by the planetarium, were there any particular instruments that came to mind?
JW: I was constrained by the ensemble that's performing the piece, specifically four wind instruments. But it really depends on the theme of my piece, and so for example, if I want something that's a little bit more grand I'll use more instruments - with a variety of them: strings, winds, then percussion. If I want something that's a little less dense, I might just pick a piano and a violin. So, I think it just varies depending on the theme and the scale of what I'm trying to write.
Jenny Wei: Pandora's Box (2021) for string quartet (excerpt)
First performance on 20 November 2021, by Chordos at Klaipėda Concert Hall:
JN: Do you see classical music as becoming more derivative in the future as a result of technology?
JW: I don't think so. On the contrary, I feel like your options are actually expanding. I know a lot of people are concerned that if generative music becomes a thing, it's gonna take over the jobs of composers and all of that. But there's something that generative music lacks, and that, I would say, is novelty. Because it's being trained on existing music, the music that it generates is still going to have some similarities to pre-existing songs that people have written. And so I think that aspect is something that AI and music technology can't really replicate, versus a real-life human composer, who is always able to generate new material that's different from things that people have written in the past. And then another aspect is that even with generative music, there are so many ways that you can incorporate it into the music that you write - for example, adding electronic sounds to an existing composition. Because you have more options, it just allows for more variety as a whole instead of narrowing the options and the competition.
JN: Will we see more recycling of older music stitched together from different compositions? Or more musical innovation?
JW: I think more innovation, just based on trends in music throughout the years. A lot of the pieces that you hear nowadays are completely different from what people wrote a hundred or a hundred and fifty years ago, and I think that's a trend that's likely to continue: people are always going to be coming up with new styles or new techniques. And so I don't think that music is going to be limited in that sense.
JN: Right, I guess my thought was just that if there's a mentality that develops of depending on aids to musical production, maybe the next stage isn't wholesale imitation, but that people do end up taking on a more imitative mindset that makes it harder to be innovative. And I realize that's more of a philosophical and speculative question.
JW: That's a good question. A lot of music that's been written for commercial purposes, like film music or video game music, may have more of a repetitive aspect to it, because a lot of times there's not that much creativity involved in the music written for those products. I'd imagine with generative music, it's going to be very easy for a game developer, or for someone who's making a small film, to query a generative music model and have it spit something out. In that aspect, I think there's going to be wholesale recycling of material. But in terms of the direction that classical music is taking, I feel like it's not gonna be constrained by new music technologies. Just looking at trends in music throughout the eras, something that's really prominent nowadays is atonality, and then extended techniques, which definitely weren't used at all a hundred and fifty or two hundred years ago. And so I think as composers continue to write music and experiment with new techniques, the options and possibilities are just going to expand. In that aspect, I don't think it's going to be old material stitched together. I think there's still going to be a lot of novelty.
JN: I gave you a video of the German disc jockey Martin Stimming, who was approached by someone from the Brandenburg State Orchestra (Frankfurt an der Oder) when he was spinning at Watergate in Berlin.
Listen — Stimming - November Morning (excerpt)
℗ Diynamic Music:
JN: This composer asked him if he could transform the track 'November Morning' into a symphonic orchestral piece, and he did.
Listen — Stimming - November Morning (Brandenburgisches Staatsorchester Version, excerpt)
℗ Diynamic Music:
JN: It's incredible to see how this dance track was turned into an orchestral piece, and even more fascinating to hear about Stimming's emotional journey. He was someone who had no experience with classical music, and suddenly his work has become something else and is being played by classically-trained musicians. I came across that video years ago, right when it came out, and I found it so compelling. This is obviously a very exceptional type of situation. But, why not? This kind of cross-pollination works on the psychological - or social - division between high culture and low culture, which maybe starts to break down with more accessibility, and maybe also with the ability of ordinary people to compose with computer software. So, perhaps the classical music world becomes less of a rarefied community. What do you think?
JW: Yeah, I think it's super cool - something that I would like to see more of, too. At least talking to people when I was growing up, classical music was seen as boring, or inaccessible, or something you could only enjoy if you understood it. And so I think these examples of cross-pollination, where you're taking a more electronic or pop piece of music and adapting it into something that an orchestra can play, increase the audience that listens to orchestral music. I think it's really good at making orchestral music seem more digestible, or less elitist or exclusive. When I saw the video that you sent, it actually reminded me of some K-pop companies that are doing orchestral renditions of songs that their artists have produced.
JN: What do you think of taking canonical compositions of classical music and turning them into electronica, dance, or ambient music? I think you saw a bit more of that in the sixties and with disco in the seventies that combined synthesized and orchestral music.
Listen — Walter Murphy: A Fifth Of Beethoven (opening)
℗ 1976 Barry Gibb, The Estate of Robin Gibb and Yvonne Gibb:
Listen — Herb Alpert: Rise (excerpt)
℗ 1979 A&M Records:
JW: Similar to my answer to the previous question, I think it's a good thing for the music industry as a whole, because you're broadening awareness of the classical music scene, and I know a lot of pop songs sample snippets of classical music. I think that's a really cool way to bring classical elements into modern music, which I personally like. Some artists have also been experimenting with orchestral music in their songs. There's an artist called Cody Fry, and he's very much a pop artist, but a lot of his music is orchestral-sounding. So I think taking sounds or instruments that you would associate with classical or older music and incorporating them into pop music is something that's super engaging and also novel.
One example immediately comes to mind: the Maroon 5 song 'Memories', which samples Pachelbel's Canon. The Canon's chord progression serves as the foundation for the melody of the song, which I felt was pretty interesting.
Listen — Maroon 5: Memories (excerpt)
℗ 2019 Interscope Records (222 Records):
JN: Do you have any advice for composers who are not tech savvy, but who want to incorporate new technology into their inspiration or composition processes? A simple and approachable plan for the beginner.
JW: Yeah, I think listening to music in this genre - music that incorporates electronic elements or something more technological - is a good place to start. After that, DAWs are pretty beginner-friendly, to an extent. It's just a piece of software, and you can download it. Some you have to pay for, but some are free. And it's super straightforward: I believe you just drag in sound files, and there's a lot you can experiment with. So I think that's a good foundation - listening to more electronic music, experimenting with DAWs. Some aspects of music technology, like the algorithmic tools that require you to pick up a programming language, are less accessible and less beginner-friendly. But with those, if you have any sort of programming experience, there are tutorials online that you can follow and play around with.
JN: What about ChatGPT? Is that something that one could use?
JW: Not for music generation specifically, but if you wanted to query ChatGPT for resources, that would be a good option. You could ask something like: 'What are some resources I might use to learn about music technology?' and it would give you a list of links or tutorials.
Copyright © 7 January 2024