Author(s): Casey Yates
Mentor(s): Jesse Guessford, Music
https://www.youtube.com/watch?v=WY8D53E_anA
Abstract

As a composer, I want to understand the music that I love. What is it that gives a song its particular sound or character? What do all the songs that I like have in common? And what makes one composer’s style sound different from another composer’s style?
So during the fall 2021 semester, I investigated the field of computational musicology and how it might be used to analyze video game music. I wanted to know if computational musicology tools could be used to identify the musical features that give a soundtrack its characteristic sound. In short, it’s data science applied to music: the data set is a collection of songs, and we look for patterns by analyzing it.
To do this, I used a Python library called music21, developed by Michael Cuthbert’s lab at MIT. It seemed a promising fit for my research because it has been used to analyze Bach chorales in the same way that I wanted to analyze video game music. But one difference between Bach chorales and video game music is that sheet music is available for Bach chorales. music21 operates on symbolic musical data, like notation, not on audio files or recordings. So before I could use it to analyze game music, I first had to transcribe the music to create the symbolic data: I listened to the music and copied all the notes for each instrument into my notation software.
To help with the transcription, I borrowed a technique used by George Mason University’s own Dr. Megan Lavengood: a piece of software called Audio Overload by Richard Bannister. It can isolate the individual channels from the video game program data and separate the notes, which makes them easier to hear. I also used Capo to create a spectrogram and help me visualize the different pitches sounding at the same time.
But this process was really slow, so I looked for a way to extract symbolic data from the game program code directly. Fortunately, an open-source project called “vgmtrans” solves this problem by identifying the instructions sent to the game hardware’s sound processor and translating them into MIDI, a standard format for encoding musical notes. I could then import that MIDI data into my notation software. However, the resulting notation requires human review and editing before it makes good musical sense.
I converted my transcriptions into MusicXML format for import into music21. I then tried to perform harmonic analysis using chord reduction, mirroring the technique used for the Bach chorales. But this didn’t work well: the Bach chorales are rhythmically simple because they are written for singing, while the music I was analyzing was much more complicated and didn’t reduce in a useful way. This means I will need to transform the music itself to support different analytical techniques. Further research is required here, but music21 does make it possible to take musical streams and derive new streams from them for analysis.
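For reference, the chord-reduction technique used on the Bach chorales looks roughly like this in music21: `chordify()` collapses the separate voices into vertical chords, which can then be labeled with Roman numerals. The chorale here is a stand-in from music21’s corpus; my transcriptions would be parsed from MusicXML instead.

```python
from music21 import corpus, roman

chorale = corpus.parse('bach/bwv66.6')
key = chorale.analyze('key')

# Collapse all four voices into a single chordal stream
# ("chord reduction").
reduction = chorale.chordify()

# Label each resulting chord with a Roman numeral in the global key.
labels = []
for c in reduction.recurse().getElementsByClass('Chord'):
    labels.append(roman.romanNumeralFromChord(c, key).figure)
print(labels[:8])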
Although I wasn’t able to find the unique musical character of one of my favorite soundtracks, I believe that my research demonstrates that the approach itself is viable. Computational musicology is appealing because of its repeatability. The analysis steps are made transparent and unambiguous through Python code, and other researchers can modify or build upon the code to develop new insights.
The final product of this research will be a repository of my transcriptions, my code, and documentation hosted on GitHub. My hope is that by providing a working example, video game researchers can apply computational musicology techniques to their own investigations, and maybe contribute back and show me some of the things that I missed in my own investigation.
Thank you.
One reply on “Computational Musicology and Video Game Music”
So cool. I look forward to further results. What a great idea to see if you can tell how pieces are the same, or different. Thank you for sharing.