Linux realtime convolution reverb

Convolution?

Some time ago I met someone who used his computer to produce beautiful reverbs for a live show. I wondered how he did this, as I saw no familiar user interface of well-known VST plugins or other audio programs. The solution was amazingly simple: a convolution-based impulse response reverb : ) !
The idea behind this is that all frequency and time response information of a system can be found by feeding a single very short pulse into its input and then looking at what comes out. In audio terms: give a clap on the input and listen to what happens next. This is what people often do to assess the acoustics of a room; clap your hands once, and you’ll hear the reflections from the walls, or the reverberation of the room. Or, to judge a reverb, people make short sounds like ‘tsk’ to hear its tail. Impulse response reverbs work the other way around: you give the program a waveform (.wav file) of a system’s response to an input pulse, and it calculates what that system would have made of your audio input.
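As a toy illustration of what such a convolution does (made-up numbers, plain bash): every input sample triggers a scaled, delayed copy of the impulse response, and all the copies are summed.

```shell
#!/bin/bash
# Toy discrete convolution: out[n] = sum over k of input[k] * ir[n-k].
input=(1 0 0 2)   # "dry" signal: a pulse, silence, then a louder pulse
ir=(4 2 1)        # toy impulse response: direct sound plus two decaying echoes
out=()
for ((n = 0; n < ${#input[@]} + ${#ir[@]} - 1; n++)); do
  sum=0
  for ((k = 0; k < ${#input[@]}; k++)); do
    j=$((n - k))
    if ((j >= 0 && j < ${#ir[@]})); then
      sum=$((sum + input[k] * ir[j]))
    fi
  done
  out+=("$sum")
done
echo "${out[@]}"   # prints: 4 2 1 8 4 2
```

You can see each input pulse reappear as a copy of the impulse response, scaled by the pulse’s amplitude; jconvolver does exactly this, only with hundreds of thousands of samples per second.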

Screenshot of final system

Screenshot of final system; two terminals with convolved reverb, patchage showing connections

The Challenge

Some time ago I was asked to mix the sound of a small music/film/theatre show. I did the first performances of the show with rented equipment and found that using 2 reverbs and a delay enhanced the performance. I’ve always mixed for rental companies or in the Dutch pop-stage environment (“clubcircuit”, as the Dutch call it), where 2 reverbs and a delay are usually readily available for visiting productions. Now I found out that theatres mostly have one low-cost effect of their own. Then I thought: “I’ve still got that M-Audio MobilePre lying around, let’s use it!”. So I started searching for tools….

Linux Reverb / Running in freeware

For my personal computing I’m using a Lenovo Thinkpad T61, which was sold with Windows Vista pre-installed. The performance of that system was worse than my old laptop running XP on half the amount of RAM. That’s when I switched to Linux, first using Ubuntu, then Linux Mint, and I’m still very happy with the packages available. The first thing I did was take a look at Linux reverbs. I found Freeverb and some other reverb engines that run as stand-alone programs or as plugins. No reverb gave me the performance I was looking for, which is something in the order of a TC M2000, maybe a TC M-One. Then I encountered jconvolver!
I started the first example (York Minster) and my jaw dropped; this was unbelievable! The beauty of the sound, wow…..

Was it that simple?

Err.. well.. kind of. But here comes one of the major drawbacks / advantages of Linux: everything is configurable, and somehow has to be configured. Be warned: I like to automate where possible, so don’t get scared off by the shell scripts below. The main purpose of this page is to show how to use Linux in a live pro-audio setup.

M-Audio MobilePre

Somehow, I couldn’t get my MobilePre to work under Linux Mint 10. The explanation below is not interesting if you’re not dealing with this problem, but it might help those who are. I installed the package madfuload and added the module snd-usb-audio to the modules loaded at startup, but alas… somehow madfuload (the firmware uploader) was not started when the USB plug was inserted. The ‘lsusb’ output remained Bus 005 Device 002: ID 0763:2804 Midiman M-Audio MobilePre DFU. Finally, I found out that running madfuload manually resolved the problem, but to run the command you need to know which USB bus and device number you’re on. I wrote this script to solve that; run it after the USB plug is inserted:

#!/bin/bash
# Extract the bus and device numbers from the lsusb line for the MobilePre,
# then run the firmware loader on the matching device node.
IFS=$'\n'
vals=(`lsusb | grep MobilePre | grep -o '[0-9][0-9][0-9]'`)
madfuload -D /dev/bus/usb/${vals[0]}/${vals[1]} -f /usr/share/usb/maudio/ma004103.bin --waitbyte3
unset IFS

Running jconvolver / creating a live audio setup

Now I had my sound card running happily under ALSA. The next issue was how to connect it to the jconvolver application. When using tools like Ardour (a digital audio workstation, or DAW), most programs can be run as plugins on channels. This might work for studio applications, but in my case I don’t want to record anything; I’d just like to use 2 input channels, give each one a reverb of my choice and mix both to a stereo output. No saving, just live processing. And since the show takes more than 1.5 hours, recording it would take up too much disk space anyway.
The answer to all this is a combination of programs that I find to really work well: qjackctl and patchage.

qjackctl and patchage

Qjackctl starts the JACK audio server and gives a user interface to change settings like the sample rate, the audio card to use for input/output, etcetera. When the server is running, you can make a patch to connect applications using the ‘Patchbay’ panel. Although a lot is possible there, I’d really recommend ‘patchage’ for this task:

  • It looks cooler
  • Connections are made by simply drawing wires between nodes
  • Updates in connections are immediately visible on screen
  • In contrast to the Patchbay in qjackctl, patchage lets me make the signal path go through several applications from left to right on my screen.
  • In short: clear visual representation of routing!
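Under the hood, qjackctl simply launches the JACK daemon with your chosen settings; the command-line equivalent would be something like this (the device name and buffer values here are assumptions, not my exact settings):

```shell
# -R: realtime scheduling; -d alsa: ALSA backend;
# then backend options: device, sample rate, frames/period, periods/buffer
jackd -R -d alsa -d hw:MobilePre -r 44100 -p 128 -n 2
```

qjackctl saves you from remembering these flags and shows the server status at a glance.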

jconvolver

To run jconvolver, you’ll need two things: the impulse response files (WAV) that ‘describe’ the reverb, and a configuration file that tells jconvolver how these wave files should be processed (how many inputs, how many outputs, which channel in the WAV file to use, etc.). With the jconvolver package comes an example configuration file for a set of wave files that describe the reverb of York Minster. I’ve been there once, and it is an impressive building, so I decided to try it out and downloaded the wav files. I started jconvolver with the aforementioned sample configuration file and, in patchage, connected the input of jconvolver to the system capture ports; I used a microphone on input 1 of the MobilePre. Then I connected the outputs to the playback ports. I was wearing headphones and said ‘hello’ into my mic. This is where my jaw dropped.

Running in a live environment?

Now, that was running one reverb from the command line. Over the next days I tried several other reverbs / impulse responses; my thanks go out to Fokke van Saane, who sent me the impulse responses of a piece of nostalgia: the PCM60. This gave me an interesting insight: impulse responses are recorded at a certain sample rate, and this should be the same sample rate as used by JACK, otherwise the impulse response is effectively played back at the wrong speed and the reverb comes out shifted in pitch and length. Fokke’s files were 44.1 kHz WAV, whereas the York Minster files were 48 kHz. I decided to go for 44.1 kHz, as this is enough for live audio, and most impulse responses are made in this format. The tool I now use for resampling is ‘sox’, used like this:
sox input-file.wav -r 44100 output-file.wav
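To resample a whole directory of impulse responses in one go, a small loop around sox works; the 44k1/ output directory name below is my own choice, and the originals stay untouched.

```shell
#!/bin/bash
# Batch-resample every WAV in the current directory to 44.1 kHz,
# writing the converted files into a 44k1/ subdirectory.
shopt -s nullglob   # if there are no .wav files, the loop simply does nothing
mkdir -p 44k1
for f in *.wav; do
  sox "$f" -r 44100 "44k1/$f"
done
```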
To rename files (remove spaces, brackets from filenames) I use this script:

#! /bin/bash
# Strip spaces, commas and brackets from every filename in the current directory.
for file in * ; do mv "$file" "$(echo "$file" | sed 's/[ ,()]//g')" ; done

Loading reverbs, volume control

Now, how to load and reload reverbs? One of the tools that could be used for this purpose is jc_gui, which runs reasonably stably on my machine. The major pros of this package are a nice GUI and no need to manually create a configuration file for each reverb! Also, jc_gui resamples the impulse responses if the sample rate of the wave file does not match the JACK sample rate. A real time saver! The cons: only one instance of the program can be run, and the program crashed several times while I was trying it out. That is not good enough for my purpose, but I should mention the program as it is great for trying out reverbs. One other major plus is that it gives JACK only one application to connect to, and changing the convolution wave file does not require reconnecting, as restarting jconvolver does. Reconnecting is handled below.
What I did to select reverbs is write -again- my own script, called jc_starter:

#!/bin/bash
# jc_starter: write a minimal jconvolver configuration for the impulse
# response given as $1 (1 input, 2 outputs) and start jconvolver with it.
RESDIR="$PWD"
HEADER="/cd $RESDIR"
# 1 input, 2 outputs, partition size 128 frames, max. impulse length 500000 frames
JACKIO="/convolver/new   1   2   128   500000
/input/name   1   Input
/output/name  1   Out1.L
/output/name  2   Out1.R"

# Read channel 1 of the wave file into the input-1 -> Out1.L path and
# channel 2 into the input-1 -> Out1.R path, at gain 0.1
RESPONSES="/impulse/read    1   1   0.1    0    0       0    1    $1
/impulse/read    1   2   0.1    0    0       0    2    $1"

echo -e "$HEADER \n$JACKIO \n$RESPONSES" > test.conf
cat test.conf
jconvolver test.conf

Usage: I open two terminals and go to two different directories, one for each type of reverb. This script is not intended for starting two instances of jconvolver with impulse responses in the same directory, as ‘test.conf’ would be overwritten. For me, this is not a problem. Just think of it …
When in the directory with the impulse response I’d like to use, I type (for example, to load hall-5s.wav):
~/jc_starter hall-5s.wav
The jc_starter script should be placed in your home directory.
Now I have two reverbs running. Cool or what? Next: how to control the levels? Adjusting the gains on the MobilePre could be one solution, but I’d like some internal metering and muting on the outputs of the reverbs as well. I’m using jack_mixer for this; it is a simple application, and I had to compile it myself as it was not in the repositories. I created a default setup, saved it to a file, and load that file every time I start jack_mixer.

Adding a bit of magic

What did I have now? A system where I could load two reverbs and control their levels. My main problem now was routing the reverbs after reloading. When jconvolver is quit (Ctrl-C in the terminal), its connections in JACK are also thrown away; after restarting, you have to reconnect the reverb manually. Maybe this is not such a time-consuming task, but spending almost a minute behind a computer screen during a live show is not what I want. Then I discovered jack.plumbing, in the package jack-tools. jack.plumbing watches a rules file in /etc/ or in your home directory and establishes connections whenever the JACK connection graph changes. This works like magic! I can now kill the reverb, start a new instance and it is connected again! Time at keyboard: Ctrl-C, arrow up to the previously recalled reverb (OK, that’s a bit of time investment before the show) and press Enter. Bingo!
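For reference, a sketch of what such a rules file (~/.jack.plumbing) could look like; the rules are s-expressions, and the port names below are assumptions based on my jc_starter configuration (list your real port names with jack_lsp):

```
(connect "system:capture_1" "jconvolver:Input")
(connect "jconvolver:Out1.L" "system:playback_1")
(connect "jconvolver:Out1.R" "system:playback_2")
```

Whenever a client exposing these ports appears in the graph, jack.plumbing re-applies the matching connect rules.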

What latency?

OK, I had the best-sounding system, only with more than 60 ms latency. That’s a bummer for live reverb, as it is too much pre-delay….. Also, I liked to keep my ‘show’ system clear of the clutter I collect from other occupations. What gave fantastic performance was AV Linux, run from a USB stick. Now I’ve got a Linux distribution on a stick, made ‘persistent’ (changes in the file system are stored), not affecting my daily computing or vice versa! AV Linux is a Debian-based distribution with a realtime kernel, and it has proven to be reliable! Latency has now dropped to 6 milliseconds, which is acceptable to me.
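As a rule of thumb, JACK’s buffering latency is frames per period × periods per buffer / sample rate (this ignores converter and USB latency). With assumed example settings of 128 frames, 2 periods and 44.1 kHz, that lands right around the figure above:

```shell
# One-way JACK buffering latency for assumed settings:
# 128 frames/period, 2 periods/buffer, 44100 Hz sample rate.
awk -v frames=128 -v periods=2 -v rate=44100 \
    'BEGIN { printf "%.1f ms\n", frames * periods * 1000 / rate }'
# prints: 5.8 ms
```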

AV-Linux

A small note on AV Linux: I have to praise the maintainer for delivering such a well-thought-out distribution. It comes without frills and runs a lightweight window manager. This means you won’t see shadows under windows, and fonts are rendered quite plainly. I won’t use it as my ‘main’ system, but having a realtime audio system with me on a USB stick; ain’t that great? An added bonus was that my MobilePre just worked after installing the madfuload package, no extra scripts needed. Lovely!

Summary

With the following packages I made a system that gives me performance, flexibility and lots of beautiful reverbs:

  • AV Linux
  • qjackctl
  • jconvolver
  • patchage
  • jack_mixer
  • own scripts

I’ve used this on a live show and was amazed by the quality. Since no one had described such a system before, I thought I’d write it up.

Legal Issues

One issue keeps running through my head though: how legal is it to use impulse responses of expensive reverbs? With impulse responses you cannot change the settings of the reverb, other than by loading another file recorded with different settings on the original device. My MobilePre has worse output converters and/or opamps than a TC M3000 or an L480. That still justifies the use of those machines, but I was amazed at the reverb quality I could get from my system. A lot of research has gone into those great halls, and I’m not paying for it. I’m happy for my wallet, but as an electronics design engineer I feel empathy for those who worked hard to make the originals. I wonder if any legal action can / will be taken. I won’t be the first to cast a stone….

2 thoughts on “Linux realtime convolution reverb”

  1. Thijs says:

    Very clear and thorough description! I witnessed this setup during a live show and experimented with it a bit afterwards and it was indeed jaw dropping. Thanks for writing this down for others!
