The future will be digitized, and music is going to be a part of it. But it’ll be more than just the drum loops and auto-tune that we’re used to. The future is bringing us a new partner: machines that will jam out with us and understand the sound we’re going for.

Alexander Nodeland, a 20-year-old applied mathematics and statistics undergraduate student at Stony Brook University, is bringing together math, music, and computers to create intelligent products that will break down the current barriers surrounding music making.

“People don’t realize that music is data processing. It’s a digital process, which brings things to the next level,” he explained.

Computers and musicians have already been doing the kind of thing Nodeland is talking about, in the form of effects pedals, devices that alter audio, and synthesizers, which imitate instruments or create original sounds. Although Nodeland worked with these types of devices for years at Pigtronix, a music-industry company based on Long Island, he plans to add a whole new element to pedal synthesizers.

Nodeland’s bass synthesizer would recognize the pitch a musician is playing and, using calculations grounded in music theory, produce a corresponding lower pitch in real time. That means the synthesizer could keep up with the musician in both key and time.
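Nodeland hasn’t published his algorithm, but the core idea can be sketched with a standard pitch tracker. Here is a minimal illustration in Python, assuming autocorrelation-based pitch detection (a common textbook technique, not necessarily his) and a simple sine tone one octave down:

```python
import numpy as np

def estimate_pitch(frame, sample_rate, fmin=50.0, fmax=1000.0):
    """Estimate the fundamental frequency of an audio frame by
    finding the lag where the signal best matches itself."""
    frame = frame - frame.mean()
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo = int(sample_rate / fmax)          # shortest plausible period (samples)
    hi = int(sample_rate / fmin)          # longest plausible period (samples)
    period = lo + np.argmax(corr[lo:hi])  # lag of strongest self-similarity
    return sample_rate / period

def sub_octave(frame, sample_rate):
    """Synthesize a sine one octave below the detected pitch
    (halving a frequency drops the note by exactly one octave)."""
    f0 = estimate_pitch(frame, sample_rate)
    t = np.arange(len(frame)) / sample_rate
    return np.sin(2 * np.pi * (f0 / 2.0) * t)
```

The hard part Nodeland faces is doing this within a few milliseconds, frame after frame, so the generated bass never lags behind the player.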

“If we can make truly expressive synthesizers, which function in real time, it will be game-changing,” said Margaret Schedel, the director of the Consortium for Digital Arts, Culture and Technology at Stony Brook University, who is helping Nodeland create his products.

Another experiment Nodeland is working to refine involves convolution reverbs, which let people digitally capture the acoustic properties of a room. He explained that, if successfully implemented, musicians and artists could produce music that sounds as if it were played at any venue they choose, essentially bringing the location, from Carnegie Hall to The Bench, to the music maker.
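The mechanics of a convolution reverb are straightforward to sketch: record a room’s impulse response (its “echo fingerprint”), then convolve the dry signal with it. A minimal version in Python, using SciPy’s FFT-based convolution:

```python
import numpy as np
from scipy.signal import fftconvolve

def convolution_reverb(dry, impulse_response):
    """Make a dry recording sound like it was played in the room
    where `impulse_response` was captured (e.g. by recording a clap
    or a sine sweep there). Both are 1-D arrays at the same rate."""
    wet = fftconvolve(dry, impulse_response)  # FFT-based convolution
    return wet / np.max(np.abs(wet))          # normalize to avoid clipping
```

Swap in an impulse response captured at Carnegie Hall and the same performance takes on that hall’s acoustics.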

When asked whether his work would rob musicians of their authenticity, Nodeland replied that it would enhance it; he believes the technology is far more helpful than harmful.

“It lets them do what they couldn’t before,” he said.

Making It His Own

To make this type of synthesizer and convolution reverb his own, Nodeland plans to incorporate a product of his own design: the harmonic content transplantation synthesizer. Using machine learning to recognize frequencies, it can calculate the right sound for the devices to produce. But the machine won’t have the only say in what it produces: there will be human input.

Nodeland’s product works with its user to choose the type of sound it produces: it suggests a sound, the user picks one, and it then suggests another to follow. The process repeats until the machine learns the kind of music the user wants from the choices they make.
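Nodeland didn’t detail the learning method, but the loop he describes resembles simple human-in-the-loop preference learning. A toy sketch, assuming each sound is represented as a feature vector (a hypothetical representation, not his actual system):

```python
import numpy as np

def suggest_next(candidates, liked):
    """Suggest the candidate sound nearest the centroid of the
    sounds the user has approved so far.

    candidates: (n, d) array of sound feature vectors
    liked:      list of vectors the user has already chosen
    """
    if not liked:
        return 0                                 # no history yet: start anywhere
    center = np.mean(liked, axis=0)              # the user's "taste" so far
    dists = np.linalg.norm(candidates - center, axis=1)
    return int(np.argmin(dists))                 # closest unplayed match
```

Each accepted suggestion sharpens the centroid, so over time the suggestions drift toward the user’s taste.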

“No one’s tried a product like this before,” Nodeland added.

It doesn’t stop there. Once someone creates an original sound using Nodeland’s devices, they can send it to him with requests and recommendations for altering or enhancing it. After processing the sound on Stony Brook University’s supercomputers, Nodeland can not only send it back, but also build a sound-sharing system where anyone can download the unique sounds created with his products.

If possible, Nodeland’s project could even offer a hardware connection to his own server to allow real-time processing, letting someone hear the effects of Nodeland’s machines instantly as they play. The key is the parallel fast Fourier transform (FFT), an algorithm that efficiently converts a signal between its time and frequency representations. This kind of digital immediacy is an incredibly tough feat in sound development, but others are experimenting with it too.
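The latency problem is concrete: audio must be analyzed, transformed, and resynthesized fast enough that the player never notices. A hedged sketch of the standard block-by-block (overlap-add) approach such a system might use:

```python
import numpy as np

def process_stream(chunks, effect, n_fft=1024):
    """Apply a frequency-domain effect to live audio, one small
    chunk at a time. `chunks` yields hop-sized sample blocks;
    `effect` maps a complex spectrum to a modified spectrum."""
    hop = n_fft // 2
    window = np.hanning(n_fft + 1)[:-1]  # periodic Hann: 50% overlaps sum to 1
    buf = np.zeros(n_fft)                # sliding analysis buffer
    out = np.zeros(n_fft)                # overlap-add accumulator
    for chunk in chunks:                 # each chunk is `hop` samples long
        buf = np.concatenate([buf[hop:], chunk])
        frame = np.fft.irfft(effect(np.fft.rfft(buf * window)), n_fft)
        out += frame                     # overlap-add the processed frame
        yield out[:hop].copy()           # emit the finished samples
        out = np.concatenate([out[hop:], np.zeros(hop)])
```

At 44.1 kHz, a 1,024-sample window means roughly 23 milliseconds of buffered audio, which is why shaving latency, and parallelizing the FFT itself, matters so much here.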

Andrew Sorensen, an artistic programmer from Queensland, Australia, performs live events that use code to create intelligent music in real time with Extempore, a programming language and runtime environment.

“This style emphasizes not just the music, but how the music is constructed,” Sorensen said during a TEDxQUT talk.

Extempore’s live-coding capability has even been used to connect musicians through supercomputers so they could perform at a festival together.

“At that point, you can collaborate over any distance,” Nodeland added.

Looking Even Further

But sound is not limited to music. Nodeland’s work in pattern recognition and parallel FFTs could open up research in several other fields.

His audio processing could be used to digitize the cocktail party effect, the ability to home in on one sound while filtering out everything else. This form of signal isolation could power security applications that mute background noise and pick out a single voice anywhere in a crowd.
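Full source separation is an open research problem, but one classic ingredient, spectral subtraction, gives the flavor of what “filtering out everything else” means. A minimal sketch, assuming you have a short noise-only clip to profile the background:

```python
import numpy as np

def spectral_subtract(signal, noise_clip, n_fft=2048):
    """Suppress steady background noise: profile its average
    spectrum, then subtract that profile from every frame."""
    noise_mag = np.abs(np.fft.rfft(noise_clip[:n_fft]))
    out = np.zeros(len(signal))
    for start in range(0, len(signal) - n_fft + 1, n_fft):
        spec = np.fft.rfft(signal[start:start + n_fft])
        mag = np.maximum(np.abs(spec) - noise_mag, 0.0)  # floor at zero
        clean = mag * np.exp(1j * np.angle(spec))        # keep original phase
        out[start:start + n_fft] = np.fft.irfft(clean, n_fft)
    return out
```

Isolating one voice out of many, rather than voice from steady noise, takes far more machinery, such as multiple microphones or learned models.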

And Nodeland’s use of the FFT can be extended to three dimensions and applied to bio-imaging and X-rays.

“Using frequencies, you can find anomalies in the human body,” he explained.
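The jump to imaging is mathematically small: the same transform generalizes from one dimension to three. A minimal sketch of inspecting a scan volume’s frequency content (an illustration of the idea, not a diagnostic tool):

```python
import numpy as np

def volume_spectrum(volume):
    """3-D FFT of an imaging volume (e.g. a stack of scan slices).
    Unusual peaks or gaps in the magnitude spectrum can flag
    periodic structure, or breaks in it, worth a closer look."""
    spectrum = np.fft.fftn(volume)            # transform all three axes at once
    return np.fft.fftshift(np.abs(spectrum))  # put zero frequency at the center
```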

Nodeland now has his own company, Motiff Technologies, and is getting advice from Pigtronix on how to advance his work. He will graduate this winter and go on to a Ph.D. in applied mathematics and statistics.
