
On the advantages of off-line audio processing

Written on 2021-01-23

Most audio processing focuses on real-time applications: DAW plugins, MIDI instruments, analog devices, livestreaming, etc. While those applications are exciting and useful in many contexts, they come with strong intrinsic limitations, and many of us have become so used to them that it can be hard to imagine what could be achieved were they removed.

One of the most significant limitations of real-time audio is that the signal has to be processed as it is received, with at best a small buffer to work with. Anything more complicated requires delaying the output, which is often considered very expensive. Besides, the amount of delay is itself a compromise, usually tied to the lowest frequency that can be handled: for instance, a 512-sample buffer at 44.1 kHz spans barely 12 ms, less than one full cycle of anything below roughly 86 Hz.
Even more significantly, the output cannot precede its corresponding input. This might sound obvious and inevitable, but it rules out whole classes of effects, including basic ones like reversing a sound. There is no way to play the end of a sound before its beginning in real time.
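To make the reversal example concrete, here is a minimal off-line sketch; Python with the soundfile package (which reads audio into numpy arrays) is assumed purely for illustration, and the file names are placeholders.

    import soundfile as sf

    # Hypothetical file names, purely for illustration.
    data, samplerate = sf.read("input.wav")           # the whole signal is available at once
    sf.write("reversed.wav", data[::-1], samplerate)  # play the end before the beginning

A couple of lines of work, yet no real-time processor can do it: the last input sample has to be known before the first output sample can be produced.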

With that being said, those limitations can be lifted with one simple trick™: just process your audio off-line!
Obviously, this approach has drawbacks. First of all, it requires a little more management: since the input has to be fully determined before processing starts, it can't just be a steady stream. The user therefore usually has to specify a file to be processed, rather than just playing something into the processor. In most environments, the output also has to be explicit: unless the processor is integrated into a real-time environment, e.g. as a sampler, an output file has to be specified.
Off-line processors are also rarely suitable for live previewing, otherwise they might as well be real-time! This means the feedback loop for the user is either a little looser (no preview until processing finishes) or, if live previewing is attempted anyway, tighter but glitchy.
Those usability limitations are inherent to off-line processing, but they are also accentuated by the fact that most off-line processors focus on technical aspects rather than user-friendliness. A good example is CDP (the Composers Desktop Project), which is hardly friendly at all, even when using one of its GUI frontends, but provides interesting routines that simply could not be implemented in a VST plugin.
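To give a sense of that file-in, file-out workflow, here is a minimal sketch of a command-line off-line processor, again assuming Python with the soundfile package; the gain effect and the argument names are placeholders rather than any particular tool's interface.

    import argparse
    import soundfile as sf

    def process(signal):
        """Placeholder effect: halve the amplitude."""
        return signal * 0.5

    parser = argparse.ArgumentParser(description="Toy off-line processor")
    parser.add_argument("input")    # the input must exist in full before processing starts
    parser.add_argument("output")   # the result goes to an explicit output file
    args = parser.parse_args()

    data, samplerate = sf.read(args.input)
    sf.write(args.output, process(data), samplerate)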

Once the limitations of real-time are lifted, many new possibilities emerge. First of all, off-line processing allows for batch processing. While the feedback loop is looser, it can also be made much more powerful. One can apply a processing chain to hundreds of samples at a time and audition them afterwards, or procedurally generate many effect chains and feed a single sample into all of them. It is a great way of generating lots of new and possibly unexpected material that can then be quickly auditioned.
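A batch run of that kind can be a very small script. The sketch below, still assuming Python with the soundfile package, applies one hypothetical processing chain to every WAV file in a folder so the results can be auditioned in one sitting; the folder names and the two stages are placeholders.

    from pathlib import Path
    import soundfile as sf

    def drive(signal):
        return signal * 3.0            # crude gain stage, purely illustrative

    def clip(signal):
        return signal.clip(-1.0, 1.0)  # hard clipping, purely illustrative

    chain = [drive, clip]              # a hypothetical effects chain

    Path("processed").mkdir(exist_ok=True)
    for path in Path("samples").glob("*.wav"):       # hypothetical input folder
        data, samplerate = sf.read(str(path))
        for stage in chain:
            data = stage(data)
        sf.write(str(Path("processed") / path.name), data, samplerate)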
Besides, algorithms can be made more accurate: any buffer can be tuned to the input audio, and the processing window is far more flexible than the fixed-size buffer that constantly has to be fed to an audio interface.
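As one concrete (and entirely hypothetical) instance of tuning a buffer to the material: with the whole file available, the fundamental period can be estimated from an autocorrelation over the entire signal and then reused as a window or grain size in later stages, something a real-time processor could only guess at from a short buffer. Python with numpy and soundfile is again assumed, and the file name is a placeholder.

    import numpy as np
    import soundfile as sf

    def estimate_period(signal, samplerate, fmin=50.0, fmax=1000.0):
        """Naive full-signal autocorrelation; O(n^2), fine for a sketch."""
        x = signal - np.mean(signal)
        corr = np.correlate(x, x, mode="full")[len(x) - 1:]
        lo = int(samplerate / fmax)    # shortest period considered
        hi = int(samplerate / fmin)    # longest period considered
        return lo + int(np.argmax(corr[lo:hi]))

    data, samplerate = sf.read("input.wav")   # hypothetical input file
    if data.ndim > 1:
        data = data.mean(axis=1)              # fold multi-channel audio down to mono
    period = estimate_period(data, samplerate)
    print(f"estimated period: {period} samples (~{samplerate / period:.1f} Hz)")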

I strongly encourage people who are interested in audio to give some attention to off-line processing, be it by exploring new processing algorithms or by focusing on the user-facing aspects. In particular, I believe there is great potential in integrating with real-time systems like DAWs and in improving the feedback loop through quicker tweaking workflows and better auditioning.
I am currently investigating algorithms that leverage the unique advantages of off-line processing (especially pseudo-cycle distortions, which I will cover in another article) in my screech library.