Musical signals are complex, and so is the electronic engineering traditionally used to process musical signals.
It takes significant time and expertise to look at a device like a plate reverb or an optical compressor and understand, from a signal transformation perspective, exactly what is going on. Musical signals themselves are dynamic and hard to predict, and the electronic devices used to process the audio exhibit nonlinear interactions and widely varying performance characteristics.
Machine learning is also complex, to say the least.
There have been many claims about what these technologies are capable of, but the reality is that machine learning lets us do advanced statistical modelling in software, in real time, where this amount of computation was previously out of the question. We use machine learning to model real-world signal transformations more accurately, so we can focus on supporting creative human musicians more efficiently.
Our technology is a deep neural net (DNN) architecture designed to model the audio transformations imparted on musical signals through chains of popular audio processing devices. Our DNN architecture allows us to model a wide variety of audio processing transformations, but we’re starting with compressors, preamps, and reverb VST plugins.
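The core idea behind this kind of black-box modeling can be illustrated at toy scale: record input/output pairs from a device, then fit a parameterized model to reproduce the transformation by gradient descent. The sketch below is purely illustrative and not TONZ's actual DNN architecture; the "device" here is a hypothetical saturating preamp with a known tanh curve, chosen so we can check that the fitted gain recovers the true one.

```python
import math
import random

def device(x):
    # Hypothetical target device: a simple saturating preamp, y = tanh(2 * x).
    # In practice this would be a real VST plugin or hardware unit being sampled.
    return math.tanh(2.0 * x)

def fit_gain(samples, lr=0.05, epochs=300):
    """Fit the single gain parameter of the model y_hat = tanh(g * x)
    to input/output examples, minimizing squared error by SGD.
    A real system fits thousands of DNN weights the same way."""
    g = 0.5  # initial guess
    for _ in range(epochs):
        for x, y in samples:
            y_hat = math.tanh(g * x)
            # dL/dg for L = (y_hat - y)^2, using d tanh(u)/du = 1 - tanh(u)^2
            grad = 2.0 * (y_hat - y) * (1.0 - y_hat * y_hat) * x
            g -= lr * grad
    return g

random.seed(0)
xs = [random.uniform(-1.0, 1.0) for _ in range(256)]
samples = [(x, device(x)) for x in xs]
g = fit_gain(samples)
print(g)  # should land near the true gain of 2.0
```

The same training loop scales up directly: replace the one-parameter waveshaper with a deep network and the synthetic pairs with audio captured through a real effects chain.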
VST plugins were created as an analogy to recording studio hardware. The audio processing is often based on popular devices, and the UI controls are often designed to look like vintage pieces of analog gear. This is because these technologies were initially designed for engineers and producers who had experience in analog music studios and wanted digital equivalents.
Is the analogy of a traditional studio with expensive hardware gear appropriate for the next generation of musicians?
We are not trying to build products that emulate the look and feel of popular hardware devices, as our customers may have never had the privilege of using them. The analog audio processing offered by these devices, however, is still the standard in the music industry, with "clones" of vintage effect processors popular in the market.
We are imagining what the next generation of musicians need in terms of interaction, while harnessing the undeniable sonic value of the hardware devices engineers have used to make all of our favourite music.
Cofounder / Tech Lead
Ladan is a machine learning and data researcher with a PhD in computer science. In Montreal she has been on winning teams of two AI hackathons, and has been training her personal organic neural network on communicating in French and discovering optimal vertical ascent paths on various climbing walls throughout the city.
Cofounder / Product Lead
Duncan is a musician, audio researcher, and software engineer, passionate about building new musical instruments and tools for modern musicians. Duncan has toured and recorded extensively as a musician, and has designed and delivered interactive music tools for large and small corporations.
Audio Software Developer
Jesus holds a PhD in Electrical Engineering with a specialization in audio signal processing, and has experience in digital image processing from the CIMAT Mathematics Research Facility. Jesus has been a musician since the age of 7, and is excited to be combining his two passions, engineering and music, with TONZ.
Work with us!
Our products combine state-of-the-art machine learning techniques with high quality digital signal analysis and processing. We are providing new interactions in audio processing to the next generation of musicians and music makers. We think it’s really fun!
If you want to be kept informed of TONZ news, like when demos or products become available, sign up for our newsletter!