Musician Maker
In the world of electronic music, there are many options for performance. Video games like Guitar Hero and Rock Band let you get up on stage for a concert as long as you can press the right buttons at the right time; you cannot, however, choose when you play or what note you play. Wii Music lets you “jam” with other musicians or computer accompaniment, and you can even improvise rhythms, but you are still limited to a pre-programmed melody. Recent developments in audio software give singers the ability to correct questionable tuning instead of having to record all over again. One of the oldest forms of electronic music is the synthesizer, which lets you play any instrument you want, provided you have learned how to play the keyboard.
Musician Maker is an attempt to bring together the best of all of these options while eliminating the aspects that make them less accessible. Instead of spending years learning an instrument, the musician can pick one up and play a song without hitting a wrong note. The user needs no instrumental or theoretical training and can play either alone or with friends. It is possible to play a completely new melody and rhythm; the only guideline provided is a chord progression. The ultimate goal is to be able to play music with instruments whose movements are natural and almost instinctive. In this way, many of the minor details, such as playing an instrument with good tone, are removed, and the user is left to simply play by feel.
At the outset, several important decisions needed to be made: choosing a musical interface, a programming language, and a hardware-software interface. The natural choice for our musical interface was MIDI. MIDI is a simple and common way to work with music on computers; its conventions are solidly established, and it can be used with a wide variety of hardware and software. It provides great flexibility while remaining simple to implement.
After deciding to use MIDI, we began looking at programming languages. Graphical languages looked promising and are used in many similar applications, but in the end none were reliable or intuitive enough to learn on our own. We turned to text-based programming languages and began investigating Python. Python is a relatively young language known for syntax that is much more intuitive than that of comparable languages such as C or Java. During our research, we found an extensive collection of Python modules called Pygame. One of these modules is designed specifically for MIDI, and this capability became the deciding factor that led us to choose Python.
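As a small illustration of that capability, a sketch like the following is all it takes to play a single note with pygame.midi (the device selection and note values here are only for illustration):

```python
import time
import pygame.midi

pygame.midi.init()
midi_out = pygame.midi.Output(pygame.midi.get_default_output_id())

midi_out.note_on(60, 100)    # middle C at a moderate velocity
time.sleep(0.5)              # let the note sound for half a second
midi_out.note_off(60, 100)

midi_out.close()
pygame.midi.quit()
```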
We then needed a way to get input from sensors. The simplest approach was to find a device that would convert analog sensor signals into a form our computer could use. We soon found two devices that convert signals between zero and five volts into MIDI messages. Initially, we used the I-CubeX MicroDig, chosen for its price and the availability of compatible sensors. We soon found that the MicroDig was not sending data at nearly a fast enough rate, which caused a great deal of latency in our program. The physical setup was also not very robust: the connectors were three-wire PCB pitch connectors, and the MicroDig itself was so small and light that it was easy to accidentally pull a cable and drag a sensor onto the floor. Finally, the MicroDig required our software to send it specific system messages for each action we wanted to perform, which added a lot of extra code to our program. Given these problems, we switched to the Eowave Eobody. Its advantages include a much more rapid stream of data, 1/4” TRS connectors, a more substantial box, and no additional required system messages. We could also make simple wiring modifications to the MicroDig sensors so they plug directly into the Eobody. The switch drastically reduced the amount of code in our program, made the box and connections much more stable and secure, and eliminated the latency we had been fighting.
One nice thing about both of these systems is that they allow us to use our own sensors. As long as the signal is between zero and five volts, we can wire up a TRS jack and use a connector cable to hook the sensor up to the Eobody. This makes the circuitry much simpler and also makes it very easy to build self-contained instruments that hook up to the Eobody as easily as an electric guitar hooks up to an amp.
We wanted some way for a user to define the chord structure of the song being played. Because this information is not explicitly included in audio files such as .mp3 files, we needed another way to specify how to filter the notes being played. We decided to have the user define a song by writing a text file that lays out elements such as the key, tempo, and chord progression. The software reads this text file line by line; each line starts with a command that indicates what information the line contains. Before any chords are given, the user specifies the key and tempo of the song as well as the default number of beats for each chord. Once these pieces of information have been established, the user constructs the chord structure using simple music theory notation. For instance, the tonic chord is represented by “i” or “I,” while the dominant is expressed as “v” or “V.” Options are included for diminished and augmented triads as well as almost any kind of seventh chord imaginable.
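As an illustration, a short song file might look something like the following; the keywords shown here are placeholders rather than the program's exact command words:

```
key      F
tempo    100
beats    4
chord    I
chord    IV
chord    V
chord    I
```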
For each chord the program reads, it adds that chord to a list that defines the allowed notes throughout the song. Notes are represented by their position in the chromatic scale from C to B (0 to 11), so a C major chord is represented by the list [0, 4, 7]. The program reads a chord, converts it into a list as described above, and then transposes it into the key of the piece using the same chromatic numbering. Upon reading a “I” in a song whose default chord duration is 400 and whose key is F, the program would add the list [[5, 9, 0], 400] to a larger list that looks like this:
[[[5, 9, 0], 400], [[10, 2, 5], 400], [[0, 4, 7], 400]]
After adding the final “I,” the above progression would become a simple “I-IV-V-I.” The list above is what the program reads when it plays the accompaniment and constrains the notes available to the instruments. To play the accompaniment, the program reads through the aforementioned list. It plays each chord in open spacing for the given duration. It then determines which notes fit into both the chord and the range of each instrument and creates a list of allowed notes for each instrument for that specific chord. While the chord is being played, each instrument can then play any note that fits into the chord and is also within its own range.
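The conversion itself is a bit of modular arithmetic. The sketch below is not the program's actual code, but it shows how a major-triad Roman numeral could be transposed into the pitch-class lists used above:

```python
# Pitch classes 0-11, with C = 0
KEY_OFFSETS = {"C": 0, "D": 2, "E": 4, "F": 5, "G": 7, "A": 9, "B": 11}
MAJOR_SCALE = [0, 2, 4, 5, 7, 9, 11]      # semitone offsets of the scale degrees
MAJOR_TRIAD = [0, 4, 7]                   # root, third, fifth
ROMAN_DEGREES = {"I": 0, "II": 1, "III": 2, "IV": 3, "V": 4, "VI": 5, "VII": 6}

def chord_to_list(roman, key, duration):
    """Return [[pitch classes], duration] for a major triad on the given degree."""
    root = (KEY_OFFSETS[key] + MAJOR_SCALE[ROMAN_DEGREES[roman.upper()]]) % 12
    notes = [(root + interval) % 12 for interval in MAJOR_TRIAD]
    return [notes, duration]

progression = [chord_to_list(c, "F", 400) for c in ["I", "IV", "V", "I"]]
# -> [[[5, 9, 0], 400], [[10, 2, 5], 400], [[0, 4, 7], 400], [[5, 9, 0], 400]]
```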
We also wanted a Graphical User Interface (GUI) for our program. At the beginning of the summer, we were running our program through the command prompt. Although this was easy enough for us, we didn’t want the average user to have to navigate folders, run the program, and read its output in a terminal, so we built a GUI. The user simply double-clicks the program, a Python file, and the GUI pops up, ready for interaction. Through this GUI, the user can specify which song file to play from, which instruments to include in the performance, and other options for those instruments, such as which channel they send their output to and what their range is. There is also a “demo” mode, where the user can test out an instrument before playing it with the song accompaniment, and a “play song” button.
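To give a rough idea of the kind of interface involved, the sketch below builds a bare-bones window with the controls described above. It assumes Tkinter; the actual program's toolkit and layout may differ.

```python
import tkinter as tk
from tkinter import filedialog

root = tk.Tk()
root.title("Musician Maker")

song_label = tk.Label(root, text="No song loaded")

def choose_song():
    # Let the user browse for a song file (the .sng extension is our format's).
    path = filedialog.askopenfilename(filetypes=[("Song files", "*.sng")])
    if path:
        song_label.config(text=path)

tk.Button(root, text="Open song...", command=choose_song).pack()
song_label.pack()
tk.Button(root, text="Demo").pack()
tk.Button(root, text="Play song").pack()

root.mainloop()
```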
One of the more technical aspects of our software development this summer was working multithreading into the program. A conventional program executes lines of code in sequential order, which is usually the preferred way to structure a program. For our situation, however, we wanted a responsive GUI, the song accompaniment, and each instrument all running at the same time, so we made each part its own thread. The computer handles the scheduling and alternates between the different tasks with ease; since it switches between them hundreds of times a second, each task seems responsive to the user.
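In Python, this comes down to the threading module. The sketch below shows the general pattern; the function and instrument names are placeholders, not the actual program's.

```python
import threading

def run_accompaniment():
    pass  # step through the chord list and play each chord for its duration

def run_instrument(name):
    pass  # watch this instrument's sensor data and trigger notes

threads = [threading.Thread(target=run_accompaniment)]
threads += [threading.Thread(target=run_instrument, args=(name,))
            for name in ("Baronium", "Obloe", "Plucknplay")]

for t in threads:
    t.daemon = True   # background threads stop when the program exits
    t.start()

# The main thread stays free to keep the GUI responsive.
```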
Program Overview
When a user opens the program, the GUI thread is created; this thread runs until the user quits the program. The user chooses which instruments to use, specifying each one’s inputs and range, and adds them to the list of instruments for the session. Once all the instruments have been chosen, the user opens a “.sng” file using the file browser in the GUI.
At this point, there are two options: play a demo or play the song. When one of these is selected, the GUI class spawns a MakeMusic thread. The MakeMusic thread converts the song file for use in our program, controls the accompaniment, and in turn spawns a ReadBytes thread and a thread for each instrument being used. The ReadBytes thread reads and distributes the data coming from the Eobody to the appropriate instrument, while each instrument’s thread handles this data and determines when a note has been played.
The Eobody sends a separate MIDI message for each change in any of the sensors. Each message specifies which sensor has changed and the sensor’s status, a number between 0 and 127 that represents a voltage between zero and five volts. ReadBytes constantly checks for new incoming messages; when it gets one, it finds which input sent the message and adds the data byte to a queue for the corresponding instrument. Meanwhile, the MakeMusic thread passes each instrument the notes of the current chord that fall within that instrument’s range. Each instrument’s thread receives the sensor data and processes it in its own way, ultimately determining which note in the chord is being played and how loudly. Demo mode uses only one chord from the song, while in song mode the chord changes as the song progresses. Once the note and volume have been determined, the instrument sends a MIDI note-on message.
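A rough sketch of the ReadBytes idea follows; the device selection, input numbers, and instrument mapping are assumptions for illustration only.

```python
import queue
import time
import pygame.midi

pygame.midi.init()
midi_in = pygame.midi.Input(pygame.midi.get_default_input_id())

# Hypothetical mapping from Eobody input number to an instrument's queue.
instrument_queues = {0: queue.Queue(),   # e.g. input 0 -> Baronium
                     1: queue.Queue()}   # e.g. input 1 -> Obloe

def read_bytes():
    while True:
        if midi_in.poll():
            for (status, data1, data2, data3), timestamp in midi_in.read(16):
                if data1 in instrument_queues:
                    instrument_queues[data1].put(data2)   # 0-127 sensor reading
        time.sleep(0.001)   # brief pause so the loop does not monopolize the CPU
```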
That note-on message passes through a few pieces of hardware before reaching the speakers. We use a Mac Mini whose sound card introduces about 0.1 seconds of latency when converting MIDI to audio. For this reason, we bypass the sound card and send the message through a USB-MIDI-to-standard-MIDI converter, which connects to a Miditech PianoBox. The PianoBox converts our MIDI messages to audio signals, which are then sent to the speakers.
Instruments
We also built three custom instruments that connect easily to the Eobody and are played through our software.
The first, entitled the Baronium, is essentially a long metal bar with a force sensor at each end. Based on the data we receive from these sensors, we can fairly accurately determine the user’s position on the bar as well as the force with which the user presses. These two variables determine the instrument’s pitch and volume, respectively. To determine the volume of the note, the program simply adds the outputs of the two force sensors. The position is more complicated: the program takes the output from the right sensor and divides it by the total output of the two sensors, giving a fraction between 0 and 1 that corresponds to the user’s position on the bar from left to right. Both the total force and the position of the user’s hand are then used by the software to turn the data into a note.
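In code, that calculation amounts to only a few lines. The sketch below mirrors the description above; the names and the final note-selection step are illustrative rather than the program's own.

```python
def baronium_note(left_force, right_force, allowed_notes):
    """Turn the two force readings (0-127) into a note and a volume."""
    total = left_force + right_force
    if total == 0:
        return None                            # nothing is pressing on the bar
    position = right_force / float(total)      # 0.0 = far left, 1.0 = far right
    index = min(int(position * len(allowed_notes)), len(allowed_notes) - 1)
    note = allowed_notes[index]                # snap to an allowed chord tone
    volume = min(total, 127)                   # MIDI velocity is capped at 127
    return note, volume
```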
The second instrument, the Obloe, simulates a wind instrument. The user blows into the mouthpiece, and a breath sensor determines the volume of the note; naturally, the harder the user blows, the louder the instrument gets. Whereas the Baronium’s outputs go directly to the Eobody, the Obloe has an onboard circuit that processes the signal from the breath sensor before sending its output to our software. Because of the breath sensor’s specifications, it outputs a signal between -9 and -5 volts; our circuit changes this into a much more usable 0 to 5 V. There is also a rotatable shaft on the bottom of the instrument that the user can twist to control the pitch. This shaft is connected to a potentiometer, which sends its output directly to the Eobody.
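Assuming the circuit applies a simple linear scaling, the conversion maps -9 V to 0 V and -5 V to 5 V:

```python
def scale_breath_voltage(v_in):
    # Linear map of the breath sensor's -9 V to -5 V range onto 0 V to 5 V
    # (an assumption about the circuit, for illustration).
    return (v_in + 9.0) * (5.0 / 4.0)
```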
The third instrument is called the Plucknplay. Modeled after a string bass, the Plucknplay has a “pluck sensor” that can accurately determine when and how hard the user has plucked a note. The user then controls the pitch by sliding a handle up and down a rod, much like the neck of a string bass. A rubber hose connects this handle to a force sensor to determine the position of the handle relative to the rod, and therefore the pitch of the note.
Circuits
The Obloe and Plucknplay each have an on-board circuit that converts the signals coming from their sensors into a usable signal between zero and five volts.