Rendering out virtual instruments
I’ve learned over the years, for reasons variously practical, procedural, and painful, that it’s generally a great idea to render out virtual instruments as waveforms, and sooner rather than later.
(For people who aren’t familiar: this means taking your virtual instruments, right-clicking on them, and choosing “Commit,” “Bounce In Place,” or whatever your DAW calls it, so that the computer bakes their output into audio waveforms.)
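For the scripting-inclined, here’s a minimal sketch of the same idea outside a DAW: rendering a MIDI part through an instrument into a fixed audio file. It’s just a sketch, assuming the Python packages pretty_midi and soundfile, with a FluidSynth SoundFont standing in for your actual VI; all the file names are placeholders.

```python
# A minimal "bounce in place" sketch, assuming the pretty_midi and
# soundfile packages plus a FluidSynth SoundFont (.sf2) standing in
# for the actual virtual instrument. File paths are placeholders.
import pretty_midi
import soundfile as sf

SAMPLE_RATE = 44100

# Load the MIDI part that currently drives the virtual instrument.
part = pretty_midi.PrettyMIDI("synth_bass.mid")

# Render the notes through the instrument into raw audio samples.
audio = part.fluidsynth(fs=SAMPLE_RATE, sf2_path="my_synth.sf2")

# Bake the result to disk. From here on, no plugin upgrade can
# change what this file plays back.
sf.write("synth_bass_rendered.wav", audio, SAMPLE_RATE)
```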
Here are some reasons why:
First, the painful: time moves faster than you think. A session that you were able to open up just a year or two ago might not have survived an upgrade of one random piece of software or another. Here’s my personally painful experience with this: I can’t get back any of the songs on Shannon’s first two full-length albums, because I didn’t render out the VIs. The sessions themselves open just fine, but they call on plugins or sample banks that I no longer have access to because something got upgraded.
Fortunately, I love how those albums came out, and I tend to look forward not backward anyway. But it has made getting those songs into our current live show way more onerous than it should have been. And if something happens in the future where remixing the album would be advantageous, we wouldn’t be able to do it. We don’t have the audio for all the tracks.
The takeaway: rendering out your virtual instruments as audio is a must for reasons of archival and future-proofing. You may think now that you won’t care in ten years if you can open up this session that you’re currently finishing up; you are probably wrong.

Next, the practical: there are real tradeoffs between leaving a track as a virtual instrument and committing it to audio. While it’s still a virtual instrument, you can keep working with the instrument’s sound-shaping aspects. For virtual synthesizers, for example, you can change everything about the sound! You can change the filters, the oscillators, the envelopes — you can get into the nitty-gritty minutiae of the sound design.
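To make “sound-shaping” concrete for the non-synth-heads, here’s a toy Python sketch of the classic oscillator/filter/envelope chain, using numpy and scipy. Every number in it is a knob you can still turn while the track is virtual; once you bounce, those choices are baked into the waveform. All the values are illustrative.

```python
import numpy as np
from scipy.signal import butter, lfilter

SR = 44100
t = np.arange(SR) / SR  # one second of time

# Oscillator: a raw sawtooth at 110 Hz (an A2).
saw = 2 * ((110 * t) % 1.0) - 1.0

# Filter: a low-pass that tames the sawtooth's buzz. The cutoff
# (800 Hz here) is one of the knobs you lose access to after bouncing.
b, a = butter(2, 800 / (SR / 2), btype="low")
filtered = lfilter(b, a, saw)

# Envelope: a 10 ms attack and an exponential decay shape the note.
attack = int(0.01 * SR)
env = np.concatenate([
    np.linspace(0.0, 1.0, attack),
    np.exp(-3 * t[: len(t) - attack]),
])
note = filtered * env  # render this, and all of the above is frozen
```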
And while you’re still actively homing in on the sound, you should leave it virtual. But: once you realize that it’s been a while since you’ve tweaked the sound, and you think it’s what it wants to be — that’s a great time to render it as audio.
Because: there are a lot of good things that you can do with audio waveforms that you can’t do with MIDI instruments! You can slice and dice audio — you can’t do that in the virtual instrument. You can create choppy mutes. You can finesse the waveforms of individual notes with special needs — momentary EQ bubbles, loud notes, etc. All those sonic anomalies that we love synths for, which are at best a huge pain in the ass to deal with in VI form, become super easy once it’s a waveform.
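As one concrete example, here’s a hedged sketch of a “choppy mute” on a rendered track, using numpy and soundfile. The file name, tempo, and gate pattern are all placeholders; the point is that this kind of rhythmic surgery is a few lines once the sound is a waveform.

```python
import numpy as np
import soundfile as sf

# Load a rendered synth part (the file name is a placeholder).
audio, sr = sf.read("synth_pad_rendered.wav")

BPM = 120
eighth = int(sr * 60 / BPM / 2)     # samples per eighth note
pattern = [1, 0, 1, 1, 0, 1, 0, 0]  # 1 = let it ring, 0 = mute

# Tile the pattern into a gate signal as long as the audio.
gate = np.resize(np.repeat(pattern, eighth), len(audio)).astype(float)

# Smooth the gate's edges with a short ramp so the mutes don't click.
gate = np.convolve(gate, np.ones(64) / 64, mode="same")

# Apply the gate (broadcast across channels if the file is stereo).
chopped = audio * (gate[:, None] if audio.ndim == 2 else gate)
sf.write("synth_pad_chopped.wav", chopped, sr)
```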
The takeaway: you have more options artistically if you work with your sounds not just in instrument form but also as waveforms.

And, third, the procedural: there is a point at which it’s advantageous to know precisely how everything is sounding. What I mean by that is this: modern “analog emulation” virtual instruments sound slightly different every single time you play the song through. Heck, hitting the same note twice will produce a slightly different-sounding version of that note.
For example, like a real analog synth, the oscillators are free-running: they’re oscillating all the time, whether or not the song is playing, so they’ll start at a different point in their cycle each time you hit play. The same goes for plugins with oscillators; free-running modulation effects like flanger and chorus will always start at a slightly different place depending on when you happen to hit play.
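Here’s a tiny numpy sketch of why that happens, modeling a free-running oscillator as a sine wave whose starting phase is wherever its cycle happened to be when you hit play. The numbers are arbitrary, and real plugins are of course far more complicated.

```python
import numpy as np

SR = 44100

def render_note(freq=110.0, seconds=1.0):
    """One 'play' of a note through a free-running oscillator: the
    oscillator was already running, so the starting phase is random."""
    start_phase = np.random.uniform(0, 2 * np.pi)
    t = np.arange(int(SR * seconds)) / SR
    return np.sin(2 * np.pi * freq * t + start_phase)

take_1 = render_note()
take_2 = render_note()

# Same note, same settings, but the waveforms don't match --
# which is exactly why two bounces of the same song can differ.
print(np.allclose(take_1, take_2))  # almost certainly False
```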
This randomness can be fun when you’re working on a song — but at a certain point, before you finish the song, you may want to render these sorts of processes as audio, so that every time you play the song it sounds exactly the same. (And then you can focus in even more specifically via editing, as described above.)
Or you may not want to do that! You may love the idea that every single time you render the song it’s a little bit different. It forces you to live in the moment, and to recognize magical moments. On a meta level, it’s a fairly profound meditation on impermanence.
For example, our 80s kids song “West End Girls” has a tiny vocal timing thing that I would have loved to get a bit more perfect. And indeed I did go in and change it! But, upon bouncing the song and listening back, I realized that the bounce was missing something compared to the previous version. I wasn’t wiggling in my seat as much.
And, upon investigating further, I realized that it was specifically that the synth basses had lost some magic. And it was because I hadn’t rendered them as audio (there are three synths making up that bass sound), and the particular combination of randomnesses that I’d happened to capture in the previous bounce just had some extra special magic about it. And so I ended up keeping the version with the slightly imperfect vocal timing; you’ve gotta go with the magic, even if it comes with a side helping of imperfections.
The takeaway: if you want to be in perfect control of how your songs sound, and don’t love random little audio surprises, rendering out everything that could introduce an element of randomness is a very good idea. Instruments, sure, but also any effects plugins that have randomization built into them.
Making waves — jamie