Mixing Essentials
For the newcomer, mixing a multitrack recording can seem overwhelmingly complicated. The key is never to lose sight of the basic principles...
For me, the two most enjoyable parts of the entire recording process are the 'getting the sounds right' stage, where I'm choosing and setting up microphones, and the final mixing stage. The bit in between is often mostly about staying focused, watching levels and making tea! Of course, mixing never takes place without a recording stage, and what you record will have a huge influence on how smoothly, or otherwise, the mixing session is going to proceed. A well played, well arranged piece of music is always going to be a lot easier to mix than a session where nobody has thought very much about the arrangements and the sounds being used.
Mixing is a huge subject, and in this issue of Sound On Sound, we're hoping to have something to say about it both to beginners and more experienced engineers. In this article, I'll be looking at the basic skills you need to know in order to get a mix together, and drawing on advice from some of the biggest names in the business. In this month's Mix Rescue, meanwhile (see pages 44-53), Mike Senior introduces some of the more advanced techniques that are used in modern rock music.
Whether you're using a hardware recorder or a computer-based DAW, your first job is to play through the recorded material, soloing individual tracks to check there are no problems such as clicks, pops, buzzes or overload distortion. If you still use analogue tape (we know where you live!), you may also want to check for dropouts. While doing this, you'll need to create a track sheet if one doesn't already exist, listing what parts are on what tracks. If you did the original recording, you may have done this already, though in the case of a DAW, the arrange window usually serves well enough as a virtual track sheet if you remember to label the tracks. If the track uses software instruments, it makes sense to freeze them, or bounce them to audio, once you've verified the sound is OK, as that frees up CPU power for the plug-ins you may need while mixing.
My next step is to mute or delete any unwanted sections, such as the chair squeaking before the acoustic guitar starts or the finger noise on the electric guitar before the first note is played. Where a real drum kit is part of the mix, either gate the tom mics or use your waveform editor to physically cut out all the space between tom hits. It's usually easy to identify the 'real' hits in the waveform display even when there is a lot of spill, and if you're unsure, you can always audition the section just to confirm you're not cutting something you should be keeping. Toms tend to resonate all the time, so this stage is important. Any gated drum track tends to sound very unnatural in isolation as the spill comes and goes, but once the overheads and other close mics are added in, you'll find you can't hear the gates or edits at all.
Now that modern DAWs are capable of recording huge numbers of tracks, modern productions seem to want to use them all! You might find your mix initially seems unmanageable, but you can make life much easier by separating key elements of the mix into logical subgroups that can be controlled from a single fader. The obvious example is the drum kit, which may have as many as a dozen mics around it or multiple tracks of supplementary samples, and you clearly don't want to have to move a dozen faders every time you wish to adjust the overall drum kit level. There are two ways to do this in a typical DAW, one of which is to group the faders so that when you move one, the others move proportionally. The other way is to create an audio subgroup and route all your drums via that group, just as you would on a typical analogue studio console. If you do this in a DAW, remember to ensure that there is plug-in delay compensation for the groups as well as the individual tracks.
If you wish to add some form of global processing to the drums (I often use Noveltech's Character enhancer plug-in and maybe some overall compression), the subgroup option may be best, but keep in mind that if a reverb unit or plug-in is being fed from sends on the individual drum tracks, its return will need to be routed to the same group; otherwise the reverb level won't change when you adjust the overall drum level. If you don't plan to use any global drum processing, the fader-grouping option is simpler, as you don't have to do anything special with your effects sends, and the drums can share the same reverb you use on everything else if you want them to. Of course, once the faders are grouped they will all move together, so if you need to make a subsequent balance change within the drum kit, you need to know the key command that temporarily disengages the selected fader from the group while you adjust it.
Other obvious candidates for grouping are backing vocals, additional percussion, keyboard pads and any doubled guitar parts. With any luck, you'll be able to get your unwieldy mix down to eight main faders or fewer. Some or all of these might require stereo groups, of course.
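The arithmetic behind a subgroup is nothing mysterious: the member tracks are summed and the result is scaled by a single group fader. Here's a minimal Python sketch of that signal flow (the Bus class and its names are purely illustrative, not any DAW's actual API, and samples are modelled as plain floats):

```python
class Bus:
    """Toy model of an audio subgroup: member tracks are summed,
    then one group fader scales the whole lot, just as on an
    analogue console bus."""

    def __init__(self, tracks):
        self.tracks = tracks  # list of (samples, track_fader_gain)
        self.fader = 1.0      # group fader, as linear gain

    def render(self):
        length = max(len(samples) for samples, _ in self.tracks)
        out = [0.0] * length
        for samples, gain in self.tracks:
            for i, s in enumerate(samples):
                out[i] += s * gain  # sum each track at its own level
        # one fader move scales every member track together
        return [s * self.fader for s in out]

# Two drum tracks balanced against each other, then the whole kit
# pulled down roughly 6dB with a single group fader move.
drums = Bus([([1.0, 0.5], 0.5), ([1.0, 0.5], 0.5)])
drums.fader = 0.5
```

The point of the model is that the internal drum balance (the per-track gains) is set once, and thereafter only the single group fader needs touching.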
The subgroup mixing process is simply to balance the components within each group, so that you can then balance the groups with each other. I often start out with everything in mono (panned centre) so that I don't rely too heavily on stereo spread to keep the sounds separate. I would also recommend that you don't go to town on processing and EQing individual tracks at this stage, as the requirements are invariably different when all the faders are up. Another very important tip here is not to set the track levels too high, otherwise you'll run out of headroom while mixing. A track level peaking at -10dB should be fine, and DAW mixes always sound cleaner to me if you do leave plenty of headroom.
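That -10dB figure is easy to check numerically if you're curious. Below is a short Python sketch (the function names are mine, not anything your DAW exposes) that measures the peak level of a block of samples in dBFS, where 0dBFS is full scale:

```python
import math

def peak_dbfs(samples):
    """Return the peak level of a block of samples in dBFS.
    0 dBFS corresponds to full scale (|sample| == 1.0)."""
    peak = max(abs(s) for s in samples)
    return 20 * math.log10(peak) if peak > 0 else float("-inf")

def headroom_ok(samples, ceiling_db=-10.0):
    """True if the peak stays at or below the chosen ceiling."""
    return peak_dbfs(samples) <= ceiling_db

# A 440Hz test tone peaking at about 0.316 of full scale,
# which is roughly the -10dBFS target discussed above.
tone = [0.316 * math.sin(2 * math.pi * 440 * n / 44100)
        for n in range(4410)]
```

The useful relationship to remember is that every halving of linear level costs about 6dB, so a -10dBFS peak leaves a comfortable safety margin before clipping.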
A good pair of headphones can be invaluable, both when checking tracks for clicks and pops, and for ensuring that your mix will work on iPods and Walkmans.
It is interesting to note that different engineers have different approaches to setting up the initial balance. Some, including me, build a mix from the rhythm section up, while others push up all the faders and then start to balance all the parts with each other. I think the latter way takes a lot of experience, so if you're not an old hand at mixing then perhaps the building approach is more logical. For pop, rock and dance mixes, starting with the bass and drums makes perfect sense, and in my own projects I'll often bring in the vocals next to see what space is left for the other mid-range instruments. On the other hand, if it is an acoustic instrument ensemble with relatively few parts, I'll probably set up each instrument to be the same nominal level and then tweak the balance from there.
If the mix starts to sound good as soon as you push up the faders, you can be pretty certain the mixing process won't be too arduous. If it sounds messy from the start, then you may have some remedial work ahead. If you've read any of my previous articles touching upon the subject of mix balance, you'll probably already know that I recommend you listen to your mix from outside the room with the door open, as this somehow makes it more obvious if any element is too loud or too quiet. I find the same is true if you turn the monitors down quite low — if the vocals are too low in the mix, or the kick drum EQ is wrong, this will quickly reveal it! Do the same trick with commercial records and see how their balance sounds to you. Mixing is an art and as such has a degree of subjective leeway, but if something is clearly wrong, this trick will give you the best chance to hear it. It also helps to check both your mix and your commercial references on headphones, as a lot of people listen to their music on iPods and other portable players. Furthermore, headphones help you focus on little details such as clicks or patches of distortion that you might miss on speakers.
Once you have a reasonable initial balance, listen to the mix and also compare it with some commercial material in the same genre to see how the balance stands up; only once you've identified potential problems should you resort to compression or EQ.
Level And Dynamics
Compression is a valuable mixing tool, but it should only be applied where there is a clear need either to stabilise levels or to add thickness and character. If the track was miked, compression will tend to make the room character more noticeable, so it can be counterproductive to use too much, especially if your recording was made in the type of room most home recordists have to work in. Compressing the drum overheads when you've made the recording in a great-sounding live room can really enhance the room's contribution to the sound, but if you struggled to record the kit in your bedroom or garage, excess compression will add a roomy character that probably doesn't flatter the kit sound. Far better to keep the sound as dry as possible by using absorbers and then add a good convolution room ambience afterwards.
Especially on very dynamic sources such as vocals, you can often achieve more transparent level control by using volume automation instead of, or as well as, compression.
Compression also brings up background noise, such as clothing rustles in pauses and, in the case of acoustic instruments, players' breathing. It's frequently the case that the less EQ and compression you use, the more natural and open the mix sounds, so it's often a good idea to use track level automation to smooth out the largest level discrepancies instead of lowering the compression threshold or raising the ratio. This is particularly true of vocals. You can also use track level automation to duck loud backing parts to make more space for the vocals; often a dB or two of gain change is all that's needed to give the mix more space. As a rule, though, don't change the drum and bass levels once you have your balance, as this will make the mix sound unnatural. Dynamics processing, like reverb, is subject to changes in fashion, and obvious compression that would be out of place in some musical styles can be a feature of others.
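That 'dB or two' of ducking automation amounts to nothing more than a small fixed gain change over the vocal phrases. As a Python sketch (the function name is mine, samples are plain floats, and regions are sample-index ranges marking where the vocal is active):

```python
def duck(samples, regions, dip_db=-2.0):
    """Apply a fixed gain dip over (start, end) sample regions,
    modelling the dB-or-two of level automation used to clear
    space for a vocal."""
    gain = 10 ** (dip_db / 20)  # dB to linear gain
    out = list(samples)
    for start, end in regions:
        for i in range(start, min(end, len(out))):
            out[i] *= gain
    return out
```

A real automation pass would of course ramp the gain in and out rather than switching it instantly, but the amount of attenuation involved is exactly this arithmetic.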
Vocals and acoustic guitar will probably need some degree of compression to keep the level consistent and to add density to the overall sound, but don't overdo the gain reduction — you may need as little as a couple of dBs, especially if you've also used level automation to deal with the worst excesses. Even where you want a hard-compressed sound for artistic reasons, you're unlikely to need more than 10dB of gain reduction on the loudest peaks.
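For reference, the amount of gain reduction follows directly from the threshold and ratio. Here's a hypothetical Python sketch of a compressor's static transfer curve (ignoring attack and release behaviour):

```python
def gain_reduction_db(level_db, threshold_db, ratio):
    """Static compressor curve: above the threshold, the output
    level rises at only 1/ratio the rate of the input; below it,
    no gain reduction is applied. Returns dB of attenuation."""
    if level_db <= threshold_db:
        return 0.0
    overshoot = level_db - threshold_db
    return overshoot - overshoot / ratio
```

So a peak 10dB over the threshold at a 4:1 ratio emerges only 2.5dB over, i.e. 7.5dB of gain reduction; the gentle couple-of-dB figure recommended above corresponds to peaks only slightly exceeding the threshold.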
Getting It Right At Source (2): Guitars
Electric guitars deserve a feature-length article all on their own, but one of the most common mistakes is to use too much distortion on a continuous rhythm part. The resulting broad spectrum of noise soaks up all the space in your mix and makes it hard to find a good balance. If you can arrange to record a clean DI feed at the same time as the miked amp or DI'd amp simulator, that can be processed later on to replace or augment your original guitar part if things don't work out. If you're miking an amp, spend a while trying different mics and mic positions to ensure that you have the best possible sound at source.
Bass guitars are also trickier than they may seem. If you use a straightforward DI box, you may end up with a sound that seems great when you hear it on its own, but it simply disappears when the rest of the mix is playing. In pop and rock music, bass guitars benefit from a bit of dirt, so a miked amp or an amp simulator will probably give a more usable sound than a straight DI. In fact a lot of what you hear from a bass guitar on a record is often really lower-mid, not just bass, which is why you can still hear the bass line on a small transistor radio that probably doesn't reproduce frequencies much lower than 150Hz or so.
Spectral Spread And EQ
Getting the right balance is also a matter of thinking about how you want the different instruments and voices to work together. With most types of music, you have rhythm, melody and chordal parts, with drums and bass guitar providing the low end in conventional bands. These elements are spread across the audio spectrum, so you have kick drums, bass guitars and so on at the very low end, voices and guitars somewhere in the middle, and cymbals at the top. Some instruments cover a wide range, such as the acoustic guitar, which can produce a lot of high-frequency harmonics. A triangle, by contrast, has a fairly restricted frequency spectrum.
Try not to reach for EQ unless there's a definite need for it.
The conventional tools for adjusting the spectral balance of a mix and its individual elements are parametric EQs and filters. Like compression, EQ is something that shouldn't be applied automatically; it should be used only where you have a clear aim in mind, whether that aim is to fix a problem, sweeten a sound or introduce a creative effect. In most cases, problems are best addressed by cutting frequencies that you don't want, rather than boosting the ones you want more of. Where you must use boost, it sounds most natural if you use a fairly wide bandwidth and use absolutely no more than necessary. Cuts can be made much narrower and deeper without sounding unnatural.
As a rule, those instruments that occupy only a narrow part of the spectrum are easiest to place in a mix, as they don't obscure other parts. Rich synth pads and distorted guitars, on the other hand, cover a lot of spectral real estate and so are harder to place effectively. Bear in mind that distortion (of whatever kind) adds higher-frequency harmonics to the basic sound, so a distorted bass guitar will have harmonics across the mid-range, and a distorted electric guitar will have harmonics that reach into the upper mid-range before the frequency response of the guitar speaker or speaker simulator rolls them off.
The mid-range is the most vulnerable part of the audio spectrum, where our ears are very sensitive and where many different instruments tend to overlap in the frequency domain. You will often find that some of these broad-spectrum sounds can be squeezed into a narrower part of the spectrum using high and low cut filters to trim away unwanted very low or very high frequencies. For example, a miked acoustic guitar with no EQ produces quite a lot of low-mid that sounds great in isolation, but might cloud a more complex pop mix. Taking out some of the low end makes the guitar sit better in the mix without spoiling its perceived character, even though you might think it now sounds thin in isolation. To some extent, a mix is an illusion and it is about what you believe you hear rather than what you actually hear.
It's often the case that filtering out the low frequencies on some of the elements in a mix can make the overall sound clearer, without noticeably affecting the sound of those instruments in the mix.
Where the mid-range is overpowering the rest of the mix, there's scope for cutting frequencies between 150Hz and 500Hz to take out any boxy congestion. By contrast, bass guitars often need a little boost in this area, especially if they've been DI'd, as that's the part of their spectrum that contains their sonic character. If a part of the mix, or the whole mix for that matter, needs a bit more high-end clarity, try combining a broad, gentle boost (+3dB) at around 12kHz with some subtle cutting in the 150 to 300 Hz region (perhaps just 1 or 2dB). You'll also find that some bass sounds can sometimes be made more manageable by actually taking out some of the very low end altogether using a high-pass filter; frequencies below 50Hz are rarely reproduced by domestic playback systems, yet they still take up headroom and can place unnecessary stress on loudspeakers. High and low cut filters are also ideal for narrowing the frequency range that a particular instrument occupies.
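If you're curious what such a high-pass filter actually does numerically, here's a Python sketch of a standard second-order high-pass, using the coefficient formulae from Robert Bristow-Johnson's widely used 'Audio EQ Cookbook' (the function names are mine, not any plug-in's API):

```python
import cmath
import math

def highpass_coeffs(f0, fs, q=0.707):
    """Second-order (12dB/octave) high-pass biquad coefficients,
    per the RBJ Audio EQ Cookbook. Returns (b, a) where b are
    feed-forward and a are feedback coefficients."""
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    cosw = math.cos(w0)
    b = [(1 + cosw) / 2, -(1 + cosw), (1 + cosw) / 2]
    a = [1 + alpha, -2 * cosw, 1 - alpha]
    return b, a

def magnitude_db(b, a, f, fs):
    """Evaluate the filter's magnitude response at frequency f,
    in dB, by substituting z = e^{-jw} into the transfer function."""
    z = cmath.exp(-2j * math.pi * f / fs)
    num = b[0] + b[1] * z + b[2] * z * z
    den = a[0] + a[1] * z + a[2] * z * z
    return 20 * math.log10(abs(num / den))
```

With the cutoff set at 50Hz, the filter is about 3dB down at 50Hz, roughly 12dB down an octave below at 25Hz, and essentially flat by 1kHz, which is exactly the 'trim the inaudible low end, leave the music alone' behaviour described above.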
In addition to using filtering to narrow the space taken up by specific mix elements, you can also try creating more space between notes and phrases, for example, by muting or editing sustained guitar chords instead of just letting them decay naturally. Usually it is possible to arrive at a compromise, combining EQ with cleaning up the arrangement, to get all the parts to sit in the mix without getting in each other's way. Again, use your CD library as a reference to see what sounds work on other people's records. Once you analyse them in detail, you'll quite often find they are rather different to how you originally perceived them.
It pays to be aware that not all equalisers sound the same, so try out whatever you have to hand to see which gives the most musical result. It's usually a good idea to clean up the unused low extremes of the frequency range for non-bass instruments anyway. It's surprising how much LF rubbish can be present in recordings; even if it can't easily be heard on small monitors, removing it will often make the mix sound noticeably cleaner and more open!
Getting It Right At Source (3): Drums
Drums are particularly problematic in small rooms. If the kit is tuned well and played well, the close mics will probably sound pretty good but the overhead sound suffers, especially in rooms with low ceilings. Putting some acoustic absorbers between the mics and the ceiling will reduce the amount of coloration, and in most cases, you can roll out some low end from the overhead mics after recording to take away the low-frequency mush that tends to accumulate in bad rooms.
Adding Reverb
Because rock and pop music tends to be recorded in fairly dead acoustic spaces, it is nearly always necessary to add artificial reverb, but today's styles use a lot less obvious reverb than those of a couple of decades ago. Convolution reverbs that 'sample' real spaces can sound exceptionally good on acoustic instruments, whereas we've become so used to classic digital reverbs and plates on vocals that we tend to consider them the 'right' sound. This is a subjective decision, though, so if you don't have the confidence to decide what works best, go back to commercial mixes in the same genre, and try to hear what kind of reverb they've used. As a rule, don't add reverb to bass sounds or kick drums, with the possible exception of very short ambience treatments, and don't let the reverb fill in all the spaces in your mix, because the spaces are every bit as important as the notes that surround them.
A useful rule of thumb is to set the reverb level where you think it needs to be, then just back it off another 3 or 4 dB or so. And always check the reverb level in both mono and stereo — there can be a big difference if the reverb is a particularly spacious one. In that case, you'll have to judge the best compromise between the mono mix sounding too dry and the stereo mix sounding too wet. If all else fails, change the reverb program! I find that for most reverb plug-ins, the right starting point is with the reverb set at around the equivalent of a 20 percent wet mix. You can fine-tune either way from there. Combining a longer reverb at a lower level with a louder but shorter reverb can also create a nice effect without flooding all your space with unwanted reverb. It is not uncommon to filter out some low end from a reverb to prevent it clouding that vulnerable low-mid range, and I'll often apply some low cut starting at 200Hz or so.
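Those figures translate into simple gain arithmetic. Here's a hypothetical Python sketch of a wet/dry blend, assuming the mix control crossfades between the two signals (as many plug-in 'mix' knobs do) and that the back-off is a dB trim applied to the reverb level:

```python
def db_to_gain(db):
    """Convert a dB change to a linear gain multiplier."""
    return 10 ** (db / 20)

def mix_wet_dry(dry, wet, wet_fraction=0.2, trim_db=0.0):
    """Crossfade dry and wet sample streams. wet_fraction=0.2 is
    the 20-percent starting point suggested above; trim_db backs
    the reverb level off further (e.g. -3.0 for 3dB quieter)."""
    wet_gain = wet_fraction * db_to_gain(trim_db)
    return [d * (1 - wet_fraction) + w * wet_gain
            for d, w in zip(dry, wet)]
```

Backing the reverb off by 3dB multiplies its level by about 0.71, and 6dB halves it, which is why a few dB of trim makes a clearly audible but not drastic difference.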
Getting It Right At Source (4): Separation
Poor separation between instruments is often the enemy of a smooth mix, and achieving separation is always difficult when musicians are playing together in the same small room. Spill not only makes a mix more difficult to balance, but such spill as you do pick up will include more room coloration as the source is further from the mic. Add to this the fact that cardioid mics tend to sound quite dull off-axis anyway, and you'll appreciate why too much spill ends up making the mix sound muddy, like a painting where the colours have been worked too much. (Of course if you do have a great-sounding room and the right mics, spill can help contribute to the character of a recording, and on many early records it did exactly that.)
There are two main ways to maintain separation between miked instruments that are playing at the same time: one is to put plenty of space between them, the other is to use acoustic screens or separate rooms. It is also important to consider what the null of a mic's polar pattern is pointed at, as well as what the front of the mic is aimed at! In other words, place the mic so that the null of its polar pattern rejects the sound you don't want. For example, cardioid mics are least sensitive directly behind them, hypercardioid mics have their nulls roughly 110 degrees off the front axis (about 70 degrees either side of the rear), and figure-of-eight mics are least sensitive 90 degrees off their main axis.
Miked acoustic instruments and voices can usually be added after recording the main tracks, and for some styles of music, guitars and bass can be recorded via power soaks or Pod-style amp simulators, in which case the player can probably sit in the control room alongside the engineer. It's also true that you generally get a bigger and better sound miking a small guitar combo than a large stack in a smallish room. All this may seem a bit far removed from the actual art of mixing but trust me, it's a bit like painting and decorating: the preparation is the most important part of the job.
Separation Via Panning
At this stage you can start to think about pan positions. Depending on how the pan controls work on your system, the subjective balance may change very slightly when you adjust them, so recheck the balance after you've decided on your final pan positions. Bass sounds and lead vocal tend to stay pinned to the centre of the mix; vocals because they are the centre of attention and that's where you expect them to be, and bass sounds because it makes sense to share this heavy load equally between both playback speakers. If you're planning to cut vinyl from your final mix, keeping the bass in the centre will help avoid cutting problems.
Combining different reverbs can give results that wouldn't be possible with a single one. This mix uses a large hall sound as a basic instrumental reverb, with an early-reflections 'ambience' patch operating almost as a loudness enhancer, and a plate reverb and short delay to add richness to the lead vocal.
Stereo instruments, and I include drum kits in this category, can be panned to a suitable width, but ideally not hard left and right, as they will sound unnaturally wide. If you have a stereo piano and want to place it to one side of the mix, you could, for example, set the right channel's pan control fully clockwise and the left channel's to between 12 and two o'clock. Note that a single pan control on a stereo channel in a DAW mixer won't do quite the same thing as a pair of offset mono pans. If you do as I've described with offset mono pan pots, the left and right piano tracks will be equally loud, but the left one will be panned near the centre, making the piano as a whole sit somewhere between the centre and the right of the stereo field. A single balance pot, by contrast, will keep the two tracks fully left and right, but offset their relative levels, in this example reducing the level of the left channel. This is a small but often important distinction.
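The distinction is easy to see in numbers. Below is a Python sketch contrasting an equal-power mono pan pot with a stereo balance control (both function names are mine, and the equal-power law is one common choice among several that DAWs offer):

```python
import math

def pan_mono(sample, pos):
    """Equal-power pan of a mono sample.
    pos: -1.0 = full left, 0.0 = centre, +1.0 = full right.
    Returns (left, right) sample values."""
    theta = (pos + 1) * math.pi / 4  # map pos to 0..pi/2
    return sample * math.cos(theta), sample * math.sin(theta)

def balance_stereo(left, right, pos):
    """Stereo balance control: attenuates the opposite channel
    but never re-pans either side, unlike a pair of pan pots."""
    if pos > 0:
        left *= 1 - pos
    elif pos < 0:
        right *= 1 + pos
    return left, right
```

With the pan pot, a centred source comes out at about 0.707 of full level in each speaker (so the perceived level stays constant as you sweep it); with the balance control, the channels stay hard left and right and only their relative levels shift.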
Backing vocals can also be spread out, as can doubled guitar parts; again, look to commercial records for guidance if you're not sure how far to go. If you're mixing typical band material, trying to emulate the approximate on-stage positions of the band members is a good way to start — but keep that bass in the middle no matter where the bass player normally stands! Often, stereo effects such as reverb and delay add a lot of width to a mix, even when most of the raw mix elements are panned close to the centre. Headphones tend to exaggerate the stereo imaging of a mix, so it's a good idea to check that it works on phones too.
By now you may have listened to the song so many times that you're not quite sure what you're listening to any more, so burn off a test CD, play it on as many different systems as possible and make notes about what you do and don't like. Don't worry if it sounds quieter than commercial mixes, because the pumping up of loudness is generally done at the mastering stage, but try to ensure that you are achieving a similarly satisfying overall tonal balance. Mastering can also make a mix sound a bit more dense and airy, so don't worry if you're a bit short of the mark there, but you should be aiming to get as close as possible. Come back to the mix after a day or two, make changes according to your notes and then repeat the process. If at all possible, try to live with the mix for a few days before calling it finished, rather than trying to do a final mix after a busy all-day session! 
Mixing Tips From The Pros
Pip Williams (Status Quo, Moody Blues, Elkie Brooks)
Michael Brook (Composer and world music producer)
Louis Austin (Fleetwood Mac, Queen, Thin Lizzy, Leo Sayer, Deep Purple, Nazareth, Clannad, Alvin Lee, Slade, Judas Priest)
Malcolm Toft (Head engineer at Trident Studios, mixed the Beatles track 'Hey Jude')