Saturday, October 23, 2010

Tools for Mixing: Levels & Panning


By Ernie Rideout
It feels great to finish writing a song, right? It feels even better when your band learns the song well and starts to sound good performing it. And it’s even more exciting to get in a studio and record your song! What could possibly be better?
Mixing your song, of course. Nothing makes you feel like you’re in control of your creative destiny as when you’re in front of a mixing board — virtual or physical — putting sounds left, right, and center, and throwing faders up and down.
Yeah! That’s rock ’n’ roll production at its best!
Except for one or two things. Oddly enough, it turns out that those faders aren’t meant to be moved all over the place. In fact, it’s best to move them as little as possible; there are other ways to set the track levels, at least initially. And those pan knobs are handy for placing sounds around the sound stage, but there are other ways to get sounds to occupy their own little slice of the stereo field that are just as effective, and that should be used in conjunction with panning.
Here at Record U, we’re committed to showing you the best practices to adopt to make your recorded music sound as good as it possibly can. In this series of articles, we draw upon a number of tools that you can use to make your tunes rock, including:
Faders: Use channel faders to set relative track levels.
Panning: Separate tracks by placing them in the stereo field.
EQ: Give each track its own sonic space by shaping sounds with equalization.
Reverb: Give each track its own apparent distance from the listener by adjusting reverb levels.
Dynamics: Smooth out peaks, eliminate background noises, and bring up the level of less-audible phrases with compression, limiting, gating, and other processes.
In this article, we’re going to focus on two of these tools: gain staging (including setting levels with the faders) and panning. As with other tools we’ve discussed in this series, these have limitations, too:
  1. They cannot fix poorly recorded material.
  2. They cannot fix mistakes in the performance.
  3. Any change you make with these tools will affect changes you’ve made using other tools.
It’s really quite easy to get the most out of gain staging and panning, once you know how they’re intended to be used. As with all songwriting, recording, and mixing tools, you’re free to use them in ways they weren’t intended. In fact, feel free to use them in ways that no one has imagined before! But once you know how to use them properly, you can choose when to go off the rails and when to stay in the middle of the road.
Before we delve into levels, let’s back up a step and talk about how to avoid the pitfall of poorly recorded material that we at Record U keep warning you about.

Pre-Gain Show

Before your mixing board can help you to make your music sound better, your music has to sound good. That means each instrument and voice on every track must be recorded at a level that maximizes the music, minimizes noise, and avoids digital distortion.
If you’re a guitarist or other tube amp-oriented musician, you’re likely to use digital distortion every day, and love it. That’s modeled distortion, an effect designed to emulate the sound of overdriven analog tube amplifiers — which is a sound many musicians love.
The kind of digital distortion we don’t want is called clipping. Clipping occurs when a signal is overdriving a circuit, too, but in this case the circuit is in the analog-to-digital converters in your audio interface or recording device. These circuits don’t make nice sounds when they’re overdriven. They make really ugly noise — garbage, in fact. It can be constant, or it can last for a split-second, even just for the duration of a single sample. If that noise gets recorded into your song, you can’t get rid of it by any means, other than to simply delete the clipped sections. Or re-record the track.
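If you’re curious what a clip indicator is actually looking for, here’s a rough sketch in Python. The threshold and minimum run length are illustrative assumptions, not a standard (real meters use their own heuristics); the idea is simply that digital clipping shows up as consecutive samples stuck at full scale:

```python
# Sketch: flag runs of consecutive full-scale samples, a telltale
# sign of clipping in audio normalized to the -1.0..+1.0 range.
# The threshold and run length here are illustrative choices.

def find_clipping(samples, threshold=0.999, min_run=3):
    """Return (start, length) for each run of >= min_run samples
    whose absolute value sits at or above the threshold."""
    runs = []
    run_start = None
    for i, s in enumerate(samples):
        if abs(s) >= threshold:
            if run_start is None:
                run_start = i
        else:
            if run_start is not None and i - run_start >= min_run:
                runs.append((run_start, i - run_start))
            run_start = None
    if run_start is not None and len(samples) - run_start >= min_run:
        runs.append((run_start, len(samples) - run_start))
    return runs

# A clean passage followed by a flattened (clipped) burst:
clean = [0.0, 0.5, 0.8, 0.5, 0.0, -0.5, -0.8, -0.5]
clipped = clean + [1.0, 1.0, 1.0, 1.0] + clean

print(find_clipping(clean))    # []
print(find_clipping(clipped))  # [(8, 4)]
```

Even a run of one or two full-scale samples can be audible, which is why hardware clip LEDs latch the instant a single sample pegs the converter.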
The way to avoid clipping is to pay close attention to the sound as it’s coming in to your recording device or software, and then act to eliminate it when it occurs. There are several things that can indicate a clipping problem:
1. Your audio interface may have input meters or clipping indicators; if these go into the red, you’ve got a problem. Clip indicators usually stay lit once they’ve been triggered, so you’ll know even if you’ve only overloaded your inputs for a split second.
Fig. 1. Having multi-segment meters on an audio interface is handy, like these on the MOTU Traveler; if they look like this as you’re recording tracks, though, you probably have a serious clipping problem.
Fig. 2. Many audio interfaces have a simple LED to indicate clipping, as on the Line 6 Toneport KB37; here the left channel clip indicator is lit, indicating clipping has occurred. Bummer.
2. Your recording device or software probably has input meters; if the clipping indicator lights up on these, you’ve got a problem. In Record, you have two separate input meters with clip indicators.
Fig. 3. In Record, each track in the Sequencer window has an input meter. As with clip indicators on your recording hardware, these clip indicators stay lit even if you’ve gone over the limit for a split second — this input looks kind of pegged, though.
Fig. 4. The Transport Panel in Record has a global input meter with a clip indicator as well.
3. The waveform display in your recording software’s track window can indicate clipping. If your waveforms look like they’ve gotten a buzz haircut, you may have a problem.
Fig. 5. If you see a waveform in a track that resembles the one on the left, you probably have a clipping problem. But if it’s this bad, you’ll probably hear it easily.
These are helpful indicators. But the best way to avoid clipping is to listen very carefully to your instruments before you record, or right after a sound check. Sometimes clipping can occur even though your input meters and audio waveforms all appear to be fine and operating within the boundaries of good audio. Other times you may see your clip indicators all lit up, but you might not be able to detect the clipping by ear; this can happen if the clipping lasted just for an instant. It’s worth soloing the track to see if you can locate the clipping; if you don’t find it, it may turn up as a highly audible artifact when you’re farther along in your mixing process, like when you add EQ.
How do you crush clipping? If you detect clipping in a track during recording, eliminate it by doing one of the following:
  1. Adjust the level of the source. Lower the volume of the amplifier, turn down the volume control of the guitar or keyboard.
  2. If the tone you’re after requires a loud performance, then lower the levels of the input gain knobs on your audio interface until you’re getting a signal that is not clipping.
  3. Use the pad or attenuator function on your audio interface. Pads typically lower the signal by 10, 20, or 30 dB, which might be enough to let you get your excellent loud tone while avoiding overloading the inputs. Usually the pad is a switch or a button on the front or back panel of the audio interface.
  4. Sometimes the overloading is at the microphone itself. In this case, if the mic has a pad, engage it by pushing the button or flipping the switch. This will usually get you a 10 dB reduction in signal.
  5. Sometimes the distortion comes from a buildup of lower frequencies that you may not need to make that particular instrument sound good. In this case, you can move the mic farther away from the source, which will bring down the level. If the mic has a highpass filter, engage it, and that will have a similar effect.
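For reference, those pad figures translate to linear amplitude via the standard relationship gain = 10^(dB/20). A quick sketch:

```python
import math

def db_to_ratio(db):
    """Convert a gain change in dB to a linear amplitude ratio."""
    return 10 ** (db / 20.0)

def ratio_to_db(ratio):
    """Convert a linear amplitude ratio to a gain change in dB."""
    return 20.0 * math.log10(ratio)

# A -20 dB pad scales the signal amplitude to one tenth:
print(round(db_to_ratio(-20), 4))   # 0.1
# A -10 dB pad leaves roughly 32% of the amplitude:
print(round(db_to_ratio(-10), 4))   # 0.3162
# Halving the amplitude is about a 6 dB drop:
print(round(ratio_to_db(0.5), 2))   # -6.02
```

So a 20 dB pad is a big cut: it knocks the signal down to a tenth of its amplitude, which is usually plenty to tame an overloaded input.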
The reverse problem is just as bad: an audio signal that’s too quiet. The problem with a track that’s too soft is that the ratio between the loudest part of the music and the background noise of the room and circuitry of the gear isn’t very high. Later, when you’re running the track through the mixer, every stage that amplifies the track will also amplify the noise.
The fix for this is simpler: Move the mic closer to the source, turn the source up, turn off the pad and highpass filters on the mic, or turn up the gain controls on your audio interface.
The goal is to make the loud parts of each track as loud and clean as possible as you record, while avoiding clipping by any means. That doesn’t mean the music has to be loud; it just means that the loudest part of each track should get into the yellow part of the input meters.

Gain Staging

Now that you’ve spent all that time making sure each track of your song is recorded properly, you’d think the next thing we’d tell you is to start adjusting the relative levels of your tracks by moving those gorgeous, big faders. They’re so important-looking, they practically scream, “Go on, push me up. Farther!”
Don’t touch those faders. Not yet. You heard me.
At this point in the mixing process, those big, beautiful faders are the last things you need. What you need is far more important: the gain knob or trim control. And it’s usually hidden nearly out of sight. It certainly is on many physical mixers, and it is on the mixer in Record as well. Where the heck is it?
Fig. 6. The gain knob or trim control is often way at the top of each channel on your mixer. In Record, it’s waaaaaaaay up there. Keep scrolling, you’ll find it.
This little dial is usually way up at the top of the channel strip. Why is it way the heck up there, if it’s so important?
It has to do with signal flow, for one thing. When you’re playing back your recorded tracks, the gain knob is the first stage the audio passes through, on its way through the dynamics, EQ, pan, insert effects, and channel fader stages.
You use the gain control to set the levels of your tracks, prior to adding dynamics, EQ, panning, or anything else. In fact, when setting up a mix, your first goal is to get a good static mix, using only the gain controls. A static mix is called that because the levels of all the track signals are static; they’re not being manipulated as the song plays, at least not yet.
Those beautiful big channel faders? They should all be lined up on 0, or unity gain. All tidy and shipshape.
Instead of using the faders, use the gain controls to make the level of each track sound relatively equal and register on its channel meter between -10 dB and -5 dB. Using your ears as well as the meters, decrease or increase the gain of each track until most of its material is hitting around -7 dB. You can do this by soloing tracks, listening to tracks in pairs or other groups, or making adjustments while you listen to all the tracks simultaneously.
The gain control target is -7 dB for a couple reasons. Most important, as you add EQ, dynamics, and insert effects to each track, the gain is likely to increase, or at least has the potential to increase. Starting at -7 dB gives each track room to grow as you mix. Even if you don’t add any processing that increases the gain, the tracks all combine to increase the gain at the main outputs, and starting at this level helps you avoid overloading at the outputs later.
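To see why -7 dB leaves you room, consider the worst case, where the peaks of every track line up at the same instant and their amplitudes simply add. A small sketch (the track count here is just an example):

```python
import math

def db_to_amp(db):
    """Level in dB relative to full scale -> linear amplitude."""
    return 10 ** (db / 20.0)

def amp_to_db(amp):
    """Linear amplitude -> level in dB relative to full scale."""
    return 20.0 * math.log10(amp)

def worst_case_sum_db(track_peak_db, n_tracks):
    """Peak level if the peaks of n identical tracks line up exactly."""
    return amp_to_db(n_tracks * db_to_amp(track_peak_db))

# Eight tracks each peaking at -7 dBFS can, in the worst case, sum
# to about +11 dBFS at the master bus -- hence the headroom advice:
print(round(worst_case_sum_db(-7.0, 8), 1))  # 11.1
```

Real tracks rarely peak at exactly the same sample, so the practical sum is lower, but the sketch shows why starting hot on every channel invites trouble at the main outputs.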
Why shouldn’t you move the faders yet? After all, they sure look like they’re designed to set levels!
Hold on! The faders come in later in the mixing process, and we want them all to start at 0 for a couple reasons. The scale that faders use to increase or decrease gain is logarithmic: most of the fader’s physical travel is devoted to the few dB around unity, while the bottom of its travel compresses a huge range of dB into a short distance. In other words, if your fader is down low, it’s difficult to make useful adjustments to the gain of a track, since the resolution at that end of the scale is low. If the fader is at 0, you can make small adjustments and get just the amount of change you need to dial in your mix levels.
The other reason is headroom. You always want to have room above the loudest parts of your music, in case there are loud transients and in case an adjustment made to EQ, dynamics, or effects pushes the track gain up. Plus, moving a fader up all the way can increase the noise of a track as much as the music; using EQ and dynamics on a noisy track can help maximize the music while minimizing the noise; the fader can stay at 0 until it’s needed later.
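You can make the logarithmic point concrete with a few lines of Python: a 1 dB change is always the same ratio, but the linear gain it represents shrinks dramatically as the level drops. (This illustrates the dB scale itself, not the exact taper of any particular fader.)

```python
# A 1 dB change is always the same ratio (about 1.122x), but the
# absolute change in linear gain it represents depends on where you
# start: large near unity, tiny near the bottom of the scale.

def db_to_amp(db):
    """Convert a level in dB to a linear amplitude factor."""
    return 10 ** (db / 20.0)

for start_db in (0.0, -20.0, -40.0):
    step = db_to_amp(start_db + 1.0) - db_to_amp(start_db)
    print(f"+1 dB starting at {start_db:6.1f} dB adds {step:.5f} linear gain")
```

Near 0 dB a 1 dB nudge moves the linear gain by about 0.12; down at -40 dB the same nudge moves it by about 0.001, a hundred times less.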
Once you have each track simmering along at -7 dB, you’re ready to move on to the other tools available for your mix: EQ, dynamics, effects, and panning. As you make changes using any of these tools, you may want to revise your static mix levels. And you should; just keep using the Gain control rather than the faders, until it’s time to begin crafting your final mix.

It’s more than a phase

As you’re checking the level of each track, you may find the little button next to the gain control useful: It’s the invert phase control. In Record, this button says “INV,” and engaging it reverses the phase of the signal in that channel. It’s good that this button is located right next to the gain knob, because it’s during these first steps that you might discover a couple tracks that sound too quiet, softer than you remember the instrument being. Before you crank the gain knob up for those tracks, engage INV to invert the phase, and see if the track springs back to life.
Fig. 7. It’s small, but it comes in handy! The invert phase control can solve odd track volume problems caused by out-of-phase recording.
If so, it’s because that particular sound was captured by two or more mics, and because of the mic locations, they picked up the waveform at different points in its cycle. When played back together, these tracks cancel each other out, partially or sometimes entirely, since they play back “out of phase” waveforms. The INV button is there to make it easy to flip the phase of one of the tracks so that the waveforms are back in phase, and the tracks sound full again.
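You can see the cancellation, and the fix, in a few lines of Python. This models the extreme case of two tracks carrying the same waveform with one polarity-reversed; real mic-placement offsets are time delays, so real-world cancellation is usually partial and frequency-dependent rather than total:

```python
import math

# A toy one-cycle sine wave on track 1, and a polarity-reversed
# copy on track 2 (180 degrees "out of phase").
N = 16  # samples in one cycle, an illustrative choice
wave = [math.sin(2 * math.pi * n / N) for n in range(N)]
out_of_phase = [-s for s in wave]

# Summed in the mixer, the two tracks cancel completely:
summed = [a + b for a, b in zip(wave, out_of_phase)]
print(max(abs(s) for s in summed))  # 0.0

# Engage "INV" on the second track (flip its polarity back),
# and the full signal returns at double the single-track level:
restored = [a + -b for a, b in zip(wave, out_of_phase)]
print(round(max(abs(s) for s in restored), 3))  # 2.0
```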

Perspective

All of the tools available to you for mixing your song are there to help you craft each track so that it serves the purpose you want it to for your song. For some tracks, this means creating a space just for that particular sound, so it can be heard clearly, without the interference of other sounds. For other tracks, that might mean making the sound blend in with others, to create a larger, aggregate sound.
Just as with EQ, dynamics, and effects, panning is one of the tools to help you achieve this. And just as when you use EQ, dynamics, or effects, any change you make in panning to a track can have a huge effect on the settings of the other tools.
Unlike the other tools, though, panning has just one control, and it only does one thing. How hard could it be to use?
As it happens, there are things that are good to know about how panning works, and there are some things that are good to avoid.
The word “panning” comes from panorama, and it means setting each sound in its place within the panorama of sound. It’s an apt metaphor, because it evokes a wide camera angle in a Western movie, or a theater stage. You get to be the director, telling the talent — in this case, the tracks — where to stand in the panorama; left, right, or center, and downstage or upstage.
Panning handles the left, right, and center directions. How can two-track music that comes out of stereo speakers have anything located in the center? It’s an interesting phenomenon: When the two stereo channels share audio information, our brains perceive a phantom center speaker between the two real speakers that transmits that sound. Sometimes it can seem to move slightly, even when you haven’t changed the panning of any sounds that are panned to the center. But usually it’s a strong perception, and it’s the basis of your first important decisions about where to place sounds in the stereo soundscape.
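Under the hood, a pan pot is just a pair of gains, one per speaker. Here’s a sketch of one common choice, a constant-power pan law; treat it as an illustration, since mixers differ in the exact law they apply:

```python
import math

def equal_power_pan(position):
    """position: -1.0 (hard left) .. 0.0 (center) .. +1.0 (hard right).
    Returns (left_gain, right_gain) under a constant-power pan law,
    one common choice; actual mixers vary in the law they use."""
    angle = (position + 1.0) * math.pi / 4.0   # maps to 0 .. pi/2
    return math.cos(angle), math.sin(angle)

for pos in (-1.0, 0.0, 1.0):
    left, right = equal_power_pan(pos)
    print(f"pan {pos:+.1f}: L={left:.3f} R={right:.3f} "
          f"power={left * left + right * right:.3f}")
```

At center, both channels get the same gain (about 0.707), which is what creates the phantom center image; the total power stays constant at every position, so sounds don’t get louder or softer as you sweep them across the field.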
Once you’ve established the sounds you want to be in the center, you’ll walk straight into one of the great debates among producers and mixing engineers, which is where to place sounds to the left and right. The entire controversy can be summed up in this illustration courtesy of the amazing Rick Eberly:
Fig. 8. This is controversial? Whether the drums face forward or back? You’re kidding, right?
Couldn’t be more serious.
Well, more to the point, the debate is about whether you want your listeners to have the perspective of audience members or of performers. This decision is kind of summed up in the decision of where to place the hi-hat, cymbals, and toms in your stereo soundscape. Hi-hat on the left, you’re thinking like a drummer. Hi-hat on the right, you’re going for the sound someone in the first row would hear.
You really don’t need to worry about running afoul of the entire community of mixing engineers. You can do whatever you want to make your music sound best to you. But it’s good to keep in mind the concept of listener perspective; being aware of where you want your audience to be (front row, back row, behind the stage, on the stage, inside the guitar cabinet, etc.) can help you craft the most effective mix.

Balance

Just as important as perspective is the related concept of balance. In fact, many mixing engineers and producers refer to the process of placing sounds in the stereo soundscape as “balancing,” rather than “panning.” Of course, they include level setting in this, too. But for now, let’s isolate the idea of “balance” and apply it to the placing of sounds in the stereo soundfield.
Here’s the idea. In a stereo mix of a song or melodic composition, the low frequency sounds serve as the foundation in the center, along with the main instrument or voice. On either side of this center foundation, place additional drums, percussion, chordal accompaniment, countermelodies, backing vocals, strumming guitars, or synthesizers. Each sound that gets placed to one side should be balanced by another sound of a similar function panned to a similar location in the opposite side of the stereo field.
Here’s one way this could look.
Fig. 9. There are many ways to diagram sounds in a mix. This simple method mimics the pan pot on the mixer channels in Record. At the center: you (sorry about the nose). In front of you are the foundation sounds set to the center. To either side of you are the accompanying sounds that have been placed to balance each other. In this song, we’ve got drums, bass, two electric guitars playing a harmonized melody, organ, horns, and a couple of percussion instruments. Placing sounds that function in a similar way equally across the stereo field (snare and hi-hat balancing cymbal and tom; horns balancing organ; shaker balancing tambourine) makes this mix sound balanced from left to right; when we get to setting levels, we might choose to reinforce this by matching levels between pairs.
And now you know where we stand on the perspective debate . . . at least for this clip.
Fig. 10. Here’s another approach to a mix. We’ve put the horns and organ in the center. This is still balanced, but this approach may not give us one critical thing we need from everything we do in a mix: a clear sonic space for each instrument. We’ll hear how this sounds in a bit.
Fig. 11. Here’s yet another balanced approach, this time putting the horns as far to the left and right as possible. Though valid, this also presents certain problems, as we’ll hear shortly.
Fig. 12. This diagram represents a mix that looks balanced, but when you listen to it, you’ll hear that it’s not balanced at all. The foundation instruments are not centered, for one thing, and this has a tremendous impact. For most studio recordings, this approach might be disconcerting to the listener. But if your instrumentation resembles a chamber ensemble or acoustic jazz group and you’re trying to evoke a particular relationship between the instruments, this could be just the approach your composition needs. We’ll see how it works out in the context of a rock tune a little later.

Set up a static mix

Let’s go through the process of setting up a static mix of a song, using the steps and techniques we’ve talked about to set the gain staging, the levels, and the balance. As it happens, the song we’ll work on is just like the hypothetical song on which we tried several different panning scenarios.
A couple of interesting things to know about this song. All the instruments come from the ID8, the sound module device in Record, except for the lead guitars, which are from an M-Audio ProSessions sample library called Pop/Rock Guitar Toolbox. The drum pattern is from the Record Rhythm Supply Expansion, which is available at no charge in the Downloads section of the Propellerhead website.
The Rhythm Supply Expansion contains Record files, each with a great selection of drum patterns and variations in a variety of styles, at a range of tempos. The really cool thing about the Expansion files is that they’re not just MIDI drum patterns; they include the ID8 device patches, too — just select “Copy Device and Track” from the Edit menu when in Sequencer view, then paste the track and ID8 into your song file. With the Rhythm Supply Expansion tracks, your ID8 becomes a very handy songwriting and demoing tool.
All right. We’ve completed our tracking session, and we’re happy with the performances. We’re satisfied that we have no clipping on any of the tracks, since we had no visual evidence of clipping during the tracking (e.g., clip LEDs or pegged meters on the audio interface, clipped waveforms in the Sequencer view) and we heard no evidence of clips or digital distortion when we listened carefully. Now let’s look at our mixer and see what we need to do to set the gain staging and levels.
Fig. 13. First things first, though: Bypass the default insert effects in the master channel to make sure you’re hearing the tracks as they really are (click on the Bypass button in the Master Inserts section).
Fig. 14. While the levels of each track seem to be in the ballpark, it’s clear that there is some disparity between the guitars (too loud, what a surprise) and the rest of the instruments. Quick! Which do you reach for, the faders or the gain knobs? Let’s collapse the mixer view by deselecting the dynamics, EQ, inserts, and effects sections in the channel strip navigator on the far right of the mixer. Now we can see the Gain knobs and the faders at the same time. Still confused about which to reach for to begin adjusting these levels? Hint: Leave the faders at 0 until the final moments of your mixing session! Okay, that was a little more than a hint. Have a listen to the tracks at their initial levels.
Fig. 15. Using only the Gain knobs, we’ve adjusted the level of each track so that, a) we can hear each instrument equally, and b) the level of each track centers around -7 dB on its corresponding meter. Even though we brought up the levels of the non-guitar tracks, the overall master level has come down, which is good because it gives us more headroom to work with as we add EQ, dynamics, and effects later. Ooh, and look at how cool those faders look, all centered at 0! Let’s hear the difference now that the levels are all in the ballpark.
Since a big part of this process is determining exactly where each drum and percussion sound goes, let’s take that Rhythm Supply Expansion stereo drum track and explode it so that each instrument has its own track. This is easy to do: Select the drum track in Sequencer view, open the Tool window (Window > Show Tool Window), click on Extract Notes to Lanes, select Explode, and click Move. Presto! All your drum instruments are now on their own lanes. (Watch out: the hi-hat is separated into two lanes, one containing the closed sound and the other the open sound; keep that in mind, or combine them into a single track.) Copy each lane individually, and paste them into their own sequencer tracks. Now each drum instrument has its own track, and you can pan each sound exactly as you want.
Let’s listen to the process of balancing. We’ll build the foundation of our mix first, starting with the drums, then adding the bass, then the lead instruments, which are the guitars. Let’s mute all tracks except the drums and then pan the drum tracks to the center.
Sounds like a lot of drums crammed into a small space. There is a shaker part that you can’t even hear, because it’s in the same rhythm as the hi-hat. Let’s pan them, as you see in Fig. 9. Hold on tight, we’re taking the performer’s perspective, rather than the audience’s!
That opens up the sound a great deal. You can hear the shaker part and the hi-hat clearly, since they’re now separated. Even a very small amount of stereo separation like this can make a huge difference in the audibility of each instrument. Now let’s add the bass, panned right up the center, since it’s one of our foundation sounds.
The bass and kick drum have a good tight sound. Now let’s un-mute the two lead guitar tracks. We’ll pan these a little to the left and right to give them some separation, but still have them sound clearly in the center.
So far, we’ve got a nice foundation working in the center. All the parts are clearly audible. Sure, there’s a lot of work we could do even at this stage with EQ, dynamics, and reverb to make it sound even better. Let’s resist that urge and take the time to get all the tracks balanced first. Now we’ll un-mute the organ and horn tracks. These two instruments play an intertwining countermelody to the lead guitars. They’re kind of important, so let’s see what they sound like panned to the center, as in Fig. 10.
Wow. There is a lot going on in this mix. The parts seem to compete with each other to the point where you might think the horn and organ parts are too much for the arrangement. Let’s try panning the horns and organ hard left and hard right — in other words, all the way to the left and right, respectively.
Well, we can hear the horns and organ clearly, along with all the other parts. So that’s good. But they sound like they’re not really part of the band; it sounds unnatural to have them panned so far away. Let’s try panning them as you see in Fig. 9.
Fig. 16. This screenshot shows our static mix, with all track levels adjusted, all sounds balanced, and all faders still at 0. Now we’re ready to clarify and blend the individual sounds further with EQ, dynamics, reverb, and level adjustments, which you can read about in the other articles here at Record U.
Wait! What about the balancing example that was out of balance, in Fig. 12? How does that sound? Check it out.
The big problem is that the foundation has been moved. In fact, it’s crumbled. The sound of this mix might resemble what you’d imagine a live performance would sound like if the performers were arranged across the stage just as they’re panned in this example. But in reality, that live performance would be better balanced than this example: the room blends the sounds, so concertgoers would perceive a more balanced mix than you get when you try to emulate a stage setup with stereo balancing.
That’s not to say you shouldn’t take this approach to balancing if you feel your music calls for it. Just keep in mind what you consider to be the foundation of the composition, and make sure to build your mix from those sounds up.
And don’t touch those faders yet!

Monday, October 11, 2010

Mixing Tools

Preparing a Space for Recording (often on a budget)

By Gary Bromham
When preparing a space for recording and mixing, we enter a potential minefield: no two rooms sound the same, so there is no one-size-fits-all instant fix. There are, however, a few systematic processes we can run through that will vastly improve our listening environment.
When putting together a home studio, it is very easy to spend large sums of money on equipment and then neglect the most important aspect of the sound: the environment set up and used for recording. No matter how much we spend on computers, speakers, guitars, keyboards, or amps, we have to give priority to the space in which they are recorded.
Whether it be a house, apartment, or just a room, the method is still based on our ability to soundproof and apply sound treatment to the area. It is extremely difficult to predict what will happen to sound waves when they leave the speakers. Every room is different and it’s not just the dimensions that dictate how a room will sound. Assorted materials which make up walls, floors, ceilings, windows and doors - not to mention furniture - all have an effect on what we hear emanating from our monitors.
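One reason dimensions matter so much: each dimension of a rectangular room supports standing waves (room modes) at frequencies given by f = n * c / (2 * L), and those frequencies are where the bass response lumps and dips. Here’s a quick sketch; the room dimensions are made-up examples:

```python
# Sketch: the lowest axial standing-wave (room mode) frequencies
# for each dimension of a rectangular room, f = n * c / (2 * L).
# The 4 x 3 x 2.5 m room below is an invented example.

C = 343.0  # speed of sound in air (m/s) at roughly 20 degrees C

def axial_modes(length_m, count=3):
    """First few axial mode frequencies (Hz) for one room dimension."""
    return [round(n * C / (2.0 * length_m), 1) for n in range(1, count + 1)]

for name, dim_m in (("length", 4.0), ("width", 3.0), ("height", 2.5)):
    print(name, axial_modes(dim_m))
```

Rooms whose dimensions are simple multiples of each other stack their modes at the same frequencies, which is why such rooms tend to have especially uneven bass.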

Fig. 1. A vocal booth with off-the-shelf acoustic treatment fitted.
Whether we have a large or a small budget to sort out our space, there are a number of off-the-shelf or DIY solutions we can employ to help remedy our problem. It should be pointed out at this stage that a high-end studio and a home project studio are worlds apart. Professional studio design demands far higher specification and uses far narrower criteria as its benchmark, and therefore costs can easily run in to hundreds of thousands!

Why do we use acoustic treatment?

An untreated room - particularly if it is empty - will have inherent defects in its frequency response; this means any decisions we make will be based on the sound being ‘coloured’. If you can’t hear what is being recorded accurately then how can you hope to make informed decisions when it comes to mixing? Any recordings we make will inherit the qualities of the space in which they are recorded. Fine if it’s Abbey Road or Ocean Way, but maybe not so good if it’s your bedroom.
No matter how good the gear is, if you want your recordings or mixes to sound good elsewhere when you play them, then you need to pay attention to the acoustic properties of your studio space.

Begin with an empty room

When our shiny new equipment arrives in boxes, our instinct is always to set it up wherever it ‘looks right’, as if we were furnishing a new apartment.
Wrong!
Beware. Your main concern is not to place gear and furniture where they look most aesthetically pleasing, but where they sound best. The most important consideration is to position the one thing that takes up zero space but ironically consumes all the space. It is called the sound-field, or the position in the room where things sound best.
One of the things I have learned is that the most effective and reliable piece of test equipment is - surprise surprise - our ears! Of course we need more advanced test equipment to fine-tune the room but we need to learn to trust our ears first. They are, after all, the medium we use to communicate this dark art of recording.
Listen!
Before you shift any furniture, try this game.
Ask a friend to hold a loudspeaker playing some music you are familiar with, using a piece of string or something similar to ensure he or she maintains a constant distance from you of, say, 2-3 metres. Get them to circle around you whilst you stand in the centre of the room, listening for the place where the room best supports the ‘sound-field’. The bass is usually where you will hear the greatest difference. As a guide, listen for where the bass sounds most solid or hits you most firmly. Why am I focusing on bass? Because if you get the bass right, the rest will usually fall into place.
Also, avoid areas where the sound is more stereo (we are after all holding up just one speaker, a mono source); this is usually an indication of phase cancellation. Beware of areas where the sound seems to disappear.
Finally, having marked a few potential positions for speaker placement, listen for where the speaker seems to sound closest at the furthest distance. We are looking for a thick, close, bassy and mono signal. When we add the second speaker this will present us with a different dilemma but we’ll talk about speakers later.
Remember: Though you may not have any control over the dimensions of your room, you do have a choice as to where you set up your equipment, and where you place your acoustic treatment. As well as the above techniques there are other things to consider.
  • It is generally a good idea to set up your speakers across the narrowest wall.
  • As a rule, acoustic treatment should be as symmetrical as possible in relation to your walls.
  • Ideally your speakers should be set up so that the tweeters are at head height.
  • The consistency of the walls has a huge bearing on the sound. If they are thin partition walls then the bass will disperse far more easily and be far less of a problem than if they are solid and prevent the bottom end from getting out. (This is a Catch-22, as thin walls will almost certainly not improve relations with the neighbours!)
Audio 1.'Incredible' Front Room
Audio 2.'Incredible' Center Room
Audio 3.'Incredible' Back Room
Three audio examples demonstrating the different levels of room ambience present on a vocal sample played 0.5 m / 2.5 m / 5 m from the speakers in a wooden-floored room.

The Live Room

If you are lucky enough to have plenty of space and are able to use a distinct live area, the rules we need to observe when treating a listening area don’t necessarily apply. Drums, for example, often benefit from lots of room ambience, particularly if bare wood or stone make up the raw materials of the room. I’ve also had great results recording guitars in my toilet, so a natural space can often be used to create a very individual sound. Indeed, I’ve often heard incredible drum sounds from rooms you wouldn’t think fit to record in.

Fig 2. Reflexion Filter made by SE Electronics.

‘Dead Space’

It is often a good idea to designate a small area for recording vocals or instruments which require relatively dead space. It would be unnatural (not to mention almost impossible) to make this an anechoic chamber, devoid of any reflections, but at the same time the area needs to be controllable when we record. Most of us don’t have the luxury of a separate room for this and have to resort to other means of isolating the sound source, like the excellent Reflexion Filter made by SE Electronics. This uses a slightly novel concept in that it seeks to stop the sound getting out into the room in the first place, before it can cause a problem with reflections. Failing this, a duvet fixed to a wall is often a good stopgap, and the favourite of many a musician on a tight budget.

Time for Reflection

Every room has a natural ambience or reverb, and it should be pointed out at this stage that it is not our aim to destroy or take away all of this. If the control room is made too dry then there is a good chance that your mixes will have too much reverb, the opposite being true if the room is too reverberant.
The purpose of acoustic treatment is to create an even reflection time across all frequencies, or as many as possible. It obviously helps if the natural decay time of this so-called reverb isn’t too excessive in the first place.
Higher frequency reflections, particularly from hard surfaces, need to be addressed as they tend to distort the stereo image, while lower frequency echoes, usually caused by standing waves, often accent certain bass notes or make others seem to disappear. High frequency "flutter echoes", as they are known, can often be lessened by damping the areas between parallel walls. A square room is the hardest to treat for this reason, which is why you generally see lots of angles, panels and edges in control room designs. Parallel walls accentuate any problems due to the sound waves bouncing backwards and forwards in a uniform pattern.

Standing waves


Fig 3. A graph showing different standing waves in a room
Standing, or stationary, waves occur when sound waves remain in a constant position. They arise when half the wavelength, or a whole multiple of it, fits exactly between two surfaces of the room. You will hear an increase in the volume of the affected notes where the pressure peaks of these waves build up, and a decrease at the null points in between. Standing waves tend to affect the low end or bass (because of the magnitude of the wavelengths involved). For this reason they are the hardest problem to sort out, and, because of the amount of absorption and diffusion needed, generally the costliest as well. An example will make this clearer.
Suppose that the distance between two parallel walls is 4 m. Half the wavelength (4 m) of a note of roughly 43 Hz (coincidentally around the pitch of the lowest note of a standard bass guitar, an open ‘E’) will fit exactly between these surfaces. As the wave reflects back and forth, the pattern of high and low pressure between the surfaces will stay constant – high pressure near the surfaces, low pressure halfway between. The room will therefore resonate at this frequency, and any note at or near it will be emphasized.
Smaller rooms sound worse because the frequencies at which standing waves are strong fall well within the sensitive range of our hearing. Standing waves don't just happen between pairs of parallel surfaces. Imagine a ball bouncing off all four sides of a pool table and coming back to where it started: a standing wave can easily follow this pattern in a room, or even bounce off all four walls, ceiling and floor too. Wherever there is a standing wave, there might also be a 'flutter echo'.
Next time you find yourself standing between two hard parallel surfaces, clap your hands and listen to the amazing flutter echo where all frequencies bounce repeatedly back and forth. It's not helpful either for speech or music.
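The relationship between room dimensions and resonant frequencies described above is easy to compute. Here's a minimal sketch in Python (the function name is my own, and a speed of sound of 343 m/s is assumed):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at about 20 degrees C (assumed)

def axial_modes(dimension_m, count=4):
    """First `count` axial standing-wave frequencies (Hz) between two
    parallel surfaces `dimension_m` apart. The lowest mode fits half a
    wavelength between the walls: f_n = n * c / (2 * L)."""
    return [n * SPEED_OF_SOUND / (2.0 * dimension_m)
            for n in range(1, count + 1)]

# For the 4 m wall spacing in the example above, the lowest mode lands
# just under 43 Hz, close to a bass guitar's open E, with further modes
# at every whole multiple of that frequency.
modes = axial_modes(4.0)
```

Running this for each room dimension (length, width, height) gives a quick map of which bass notes the room is likely to exaggerate.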
Audio 4. Subtractor in Record
Here’s an ascending sequence created in Record using Subtractor set to a basic sine wave. While in the listening position, play it back at a normal listening level. In a good room the levels will be even, but if some notes are more pronounced, or seem to disappear, this usually indicates a problem at certain frequencies in your room.

Fig 4. A chromatic sequence using Subtractor created in Record.
Download this as a Record file and convert the notes in the file to frequency and wavelength.
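The note-to-frequency-and-wavelength conversion the exercise asks for can be sketched like this (equal temperament with A4 = 440 Hz, and an assumed speed of sound of 343 m/s):

```python
SPEED_OF_SOUND = 343.0  # m/s, assumed

def midi_to_frequency(note):
    """Equal-temperament frequency in Hz for a MIDI note number
    (A4 = note 69 = 440 Hz)."""
    return 440.0 * 2.0 ** ((note - 69) / 12.0)

def wavelength_m(frequency_hz):
    """Physical wavelength in metres of a frequency in air."""
    return SPEED_OF_SOUND / frequency_hz

# A bass guitar's open E (MIDI note 28) is about 41.2 Hz, which has a
# wavelength of roughly 8.3 m -- easy to see how notes this low
# interact with the dimensions of an ordinary room.
```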

Absorption or Diffusion... That Is the Question

Sorting out sound problems is largely a matter of finding the correct balance between the two main approaches: absorption and diffusion. Absorbers, as their name suggests, soak up part of the sound’s energy, while diffusers scatter the sound and prevent uniform reflections from bouncing back into the room.
Absorbers tend to be made of materials such as foam or rockwool. Foam panels placed either side of the listening position help with mid and high frequencies, while traps positioned in corners help to contain the unwanted build-up of bass.
Diffusers are more commonly made of wood, plastic or polystyrene: by definition, a diffuser is any structure with an irregular surface capable of scattering reflections. Diffusers tend to work better in larger spaces and are less effective than absorbers in small rooms.

Off-the-Shelf solutions

Companies such as Real Traps, Auralex and Primacoustic offer one-stop solutions to acoustic problems. Some even provide the means for you to type in your room dimensions, then come back with a suggested treatment package, including the best places to put it. These packages offer excellent value and are comparatively cheap for what they deliver. What they won’t give you is the sound of a high-end studio, where huge amounts of measurement and precise room tuning are required; but leaving science outside the door, they are perfect for most project studios.

DIY

The DIY approach can be viewed from two levels. The first, a stopgap, where we might just improvise and see what happens. The second, a more methodical, ‘let’s build our own acoustic treatment because we can’t afford to buy bespoke off the shelf tiles and panels’ approach.
  • This could simply be a case of positioning a sofa at the back of the room to act as a bass trap. Putting up shelves full of books which function admirably as diffusers. Hanging duvets from walls or placing them in corners for use as damping. I even know of one producer who used a parachute above the mixing desk to temporarily contain the sound!
  • Build your own acoustic treatment. I personally wouldn’t favour this, as it is very time consuming and also presumes a certain level of ability in the amateur-builder department. The relative cheapness of ‘one solution’ kits, where all the hard work is done for you, also makes me question this approach. However, there are numerous online guides for building your own acoustic panels and bass traps which can save you money.

Speakers

Though speakers aren’t directly responsible for acoustic treatment, their placement within the acoustic environment is critical. I’ve already suggested how we might find the optimum location in the room for the speakers; the next critical thing is to determine the distance between them. If they are placed too close together, sounds panned to the centre will appear far louder than they actually are. If they are spaced too far apart, you will instinctively turn things panned in the middle up too loud, and the sound is often thin and without real definition.
Finally, speaker stands, or at least some means of isolating the speakers from the surface on which they rest, are always a good idea. The object is to prevent excessive speaker movement and solidify the bass end. MoPads or China Cones also produce great results.

Headphones

Headphones play an important role in any home studio if you are unsure whether to trust the room sound and the monitors, since they essentially remove the acoustics from the equation. Though I would never dream of using them as a replacement for loudspeakers, they are useful for a second opinion: pan placement can often be heard more easily, along with reverb and delay effects.

Summary

With only a small amount of cash and a little knowledge it is relatively easy to make vast improvements to the acoustics of a project studio. A science-free DIY approach can work surprisingly well, particularly if you use some of the practical advice available on the websites of the companies offering treatment solutions. Unfortunately, most musicians tend to neglect acoustic treatment and instead spend their money on new instruments or recording gear. When we don’t get the results we expect, it is easy to blame the gear rather than the space in which the recordings were made or mixed. Do yourself a favour - don’t be frightened, give it a go. Before you know it you’ll be hearing what’s actually there!
Gary Bromham is a writer/producer/engineer from the London area. He has worked with many well-known artists such as Sheryl Crow, Editors and Graham Coxon. His favorite Record feature? “The emulation of the SSL 9000 K console in 'Record' is simply amazing, the new benchmark against which all others will be judged!”

segunda-feira, 4 de outubro de 2010

Mixing 2

Tools for Mixing: EQ, part 2

By Ernie Rideout

EQ types: high pass and low pass filters

You're probably very familiar with the simplest type of EQ:
Fig. 9. Though an electric guitar's tone knobs can be wired to apply many different types of EQ to the sound of a guitar, at its most basic, turning the knob applies a low pass filter to the sound, gradually lowering the level of harmonics at the higher end of the frequency spectrum.
Here's a low E on an electric guitar, with the tone knob at its brightest setting. This allows all the frequencies to pass through without reduction.
Crank the tone knob down, and higher frequencies are blocked — or rolled off — while lower frequencies are allowed to pass through; hence the name, low pass filter.
Just as the low pass filter attenuates (reduces) high frequencies and allows low frequencies to pass through, there is another EQ type that rolls off (reduces) low frequencies while allowing high frequencies to pass: the high pass filter. It's not just guitars that utilize this simple EQ type. When you're mixing, usually you'll use high pass and low pass filters that are built into each channel of your mixer, which allow you to set the frequency at which the attenuation begins (also called the cutoff frequency).
Although their structure may be simple, low pass filters can be very effective at quickly providing solutions such as:
  • Removing hiss in a particular track
  • Isolating and emphasizing a low-frequency sound, such as a kick drum
Similarly, high pass filters can quickly give you remedies such as:
  • Eliminating rumble, mic stand bumps, or low frequency hum on tracks that primarily have high-frequency energy in the music
Here's how Record's low pass and high pass filters work.
Fig. 10. Record provides a low pass filter (LPF) and a high pass filter (HPF) on each mixer channel, at the very top of the EQ section. You engage them simply by clicking their respective “on” buttons, and then you can set the frequency above or below which the attenuation begins.
Fig. 11. Record's low pass filter lets you set the rolloff point from 100 Hz up to 20 kHz. The angle of the roll-off has particular terminology and characteristics: The angle itself is called the “cutoff” or “knee,” and the “slope” or “curve” reduces the signal by 12 dB per octave. Here you can see how much is allowed to pass at the lowest setting (yellow area only) and how much passes at the highest setting (orange and yellow).
This is the sound of a low pass filter sweeping over a drum loop.
Fig. 12. Record's high pass filter rolls off frequencies below 20 Hz at its lowest setting (allowing frequencies in the yellow and orange areas to pass) to 4 kHz at its highest (the orange area only). The slope attenuates 18 dB per octave.
This is the sound of a high pass filter sweeping over a drum loop.
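The “12 dB per octave” and “18 dB per octave” slopes mentioned above are simple to reason about: every doubling of frequency past the cutoff loses that many more decibels. Here's a rough sketch of the idealized arithmetic (not Record's actual filter curves, which roll off gradually around the cutoff rather than kinking sharply):

```python
import math

def lpf_attenuation_db(freq_hz, cutoff_hz, slope_db_per_octave):
    """Idealized low pass attenuation: 0 dB up to the cutoff, then
    `slope_db_per_octave` dB for every octave above it."""
    if freq_hz <= cutoff_hz:
        return 0.0
    octaves_above = math.log2(freq_hz / cutoff_hz)
    return slope_db_per_octave * octaves_above

# A 12 dB/octave low pass filter cutting at 1 kHz reduces a 4 kHz
# signal (two octaves up) by about 24 dB.
```

The same arithmetic applies to a high pass filter, just counting octaves below the cutoff instead of above it.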

EQ types: shelving

If you've ever had a stereo or car radio with bass and treble controls, shelving is another type of EQ you're already familiar with. (If all you've ever had is an iPod, we'll talk about the type of EQ you use the most — graphic EQ — in a bit.) Shelving EQ is usually used in the common “treble and bass” configuration. It's also used at the upper and lower ends of EQ systems that have more than just two bands. Like low pass and high pass filters, shelving EQ works on the “from here to the end” model: Everything below a particular point (in the case of a bass control) is affected, and everything above a particular point (in the case of a treble control) is affected. The difference is that shelving EQ boosts or cuts the levels of the affected frequencies by an amount that you specify; it doesn't just block them entirely, which is what a pass filter does.
Shelving EQ is the perfect tool to use when a track has energy in one of the extreme registers that you want to emphasize (boost) or reduce (cut), but you don't want to target the specific frequencies or eliminate them entirely. It lets you keep the overall level of the track at the level you want compared with your other tracks in the mix, while giving you a quick way to distinguish or disguise the track. Some useful applications include:
  • Percussion tracks often have energy in the extreme low and extreme high frequency areas; shelving EQ can easily bring that energy to the fore of your mix or cut it to make room for the sound of another track
  • Synth bass parts make or break a dance track; a little boost with shelving EQ can quickly transform a dull track to a booty-shaker
  • Adding a high shelf to a drum kit and then cutting by a few dB gives the kit a muffled, alternative mood
Let's take a look at and give a listen to the ways that shelving EQ works.
Fig. 13. The two-band EQ on Record's 14:2 Mixer device is a classic example of treble and bass shelving EQ. Turning a knob clockwise boosts the affected frequencies, and turning a knob counterclockwise cuts them.
Fig. 14. This diagram gives you an idea of how shelving EQ differs from simple pass filter EQ, using the specs of the 14:2 EQ. The middle of the yellow area is the part of the track that is unaffected by the EQ; you'll still hear the frequencies in this area at the same level, even if you boost or cut the frequencies in the shelving areas. The blue area shows the frequencies affected by the bass control: Below 80 Hz, you can cut or boost the frequencies by up to 24 dB (dark blue lines). The orange section shows the area affected by the treble control: Above 12 kHz, you can boost or cut by 24 dB (red lines).
On this drum loop, first we'll cut the high shelving EQ, then we'll boost it. Next we'll cut the low shelving EQ, then we'll boost it.
Fig. 15. Record also gives you shelving EQ on each mixer channel. This EQ has a bit more control than that on the 14:2 mixer, as it allows you to specify the frequency above or below which the track is affected. For the high shelf, you can adjust the cutoff frequency from 1.5 kHz to 22 kHz. The bass shelf cutoff can move from 40 Hz to 600 Hz. In both cases, you can cut or boost by 20 dB.
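To visualize what a shelf does, here's an illustrative curve: full boost or cut well below the shelf frequency, none well above it, with a smooth transition between. This is a conceptual sketch only, not the transfer function Record actually uses:

```python
def low_shelf_gain_db(freq_hz, shelf_freq_hz, gain_db):
    """Illustrative low-shelf response. `gain_db` may be positive
    (boost) or negative (cut); this curve passes through half the
    gain at the shelf frequency itself."""
    blend = 1.0 / (1.0 + (freq_hz / shelf_freq_hz) ** 2)
    return gain_db * blend

# With a +6 dB bass shelf at 80 Hz, a 30 Hz kick fundamental gets
# most of the boost while a 1 kHz vocal harmonic is barely touched.
```

A high shelf is the mirror image: swap the roles of the frequencies above and below the shelf point.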

EQ type: parametric EQ

So far, our EQ tools have been like blunt instruments: Effective when fairly wide frequency bands are your target. But what if you have just a few overtones that you need to reduce in a single track, or an octave in the mid range that you'd love to be able to boost just a tad across the entire mix?
That's where parametric EQ comes in. Parametric EQ usually divides the frequency spectrum up into bands. Some EQs have just two bands, like the PEQ-2 Two-Band Parametric EQ device. Some EQs have as many as seven or eight bands. Usually, three or four bands will give you all the power you need.
For each band of parametric EQ, you can adjust three parameters: the center frequency of the band, the amount of cut or boost, and the bandwidth, which is often referred to as the “Q”. Adjustments in Q are typically expressed on a scale from 0.0 to 3.0, with 0.0 giving the widest bandwidth and the gentlest slope, and 3.0 the narrowest bandwidth and the steepest slope. Let's see what shapes parametric bands can take.
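Another common convention relates Q to bandwidth directly: bandwidth = center frequency ÷ Q, with the band edges sitting symmetrically around the center on a logarithmic scale. Here's a small sketch of that arithmetic (this is the general definition, not specifically how Record's 0.0-3.0 control maps internally):

```python
import math

def band_edges(center_hz, q):
    """Return the (low, high) edges of a peaking EQ band, using
    bandwidth = center / q, with the edges placed so that
    low * high == center ** 2 (symmetric on a log scale)."""
    bandwidth = center_hz / q
    half = bandwidth / 2.0
    low = math.sqrt(half * half + center_hz * center_hz) - half
    high = low + bandwidth
    return low, high

# A band centred on 1 kHz with Q = 1 spans roughly 618 Hz to 1618 Hz;
# raise Q to 10 and it narrows to about 951-1051 Hz.
```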
Fig. 16. This is Record's channel strip EQ, highlighting the parametric EQ.
Fig. 17. This is the curve of a band of parametric EQ, centered on 1 kHz and given its highest boost with the widest Q setting. Looks like a nice, graceful old volcano.
Fig. 18. This curve has the same center frequency and boost amount as the previous diagram, but with the narrowest bandwidth, or Q setting. This is what's known as a spike.
The illustrations above are meant to show you the full extent of the power that parametric EQ can offer. Normally, there's no need to use all of it. In the vast majority of cases, just a tiny bit of EQ adjustment will have a huge effect on your music, no matter what type of EQ you use.
But for now, let's use this power to explore the concept of overtones a little further. Sometimes it's hard to believe that every musical sound you hear — with the exception of the simplest waveforms — consists of a fundamental frequency and lots of overtones, each at their own frequency. An instrument playing a single note is just a single note, isn't it?
Let's take a listen to a single pitch played on a piano. We'll hit each note hard and let it decay for just a moment. While we play, we'll take one of those narrow bands of parametric EQ, the one with the spiky shape, and we'll boost it and sweep it up and down the frequency spectrum. Listen closely, and you'll hear several overtones of this single pitch become amplified.
Sweeping a piano with an EQ spike.
Sweeping the EQ spike across the single piano pitch makes it sound more like an arpeggio than a single pitch. All those “other notes” are the overtones. Even in one single note, you have a multitude of frequencies that give the sound its character. In fact, when you hear a person describe a sound as something that “cuts through in the mix,” they're usually referring to a sound that's rich in overtones, more than a sound that's simply louder than all the rest of the sounds in the mix. Usually it refers to a live performance P.A. mix, but the idea applies to what we're doing here, too.
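The overtone frequencies you hear in that sweep follow a simple pattern: for a (nearly) harmonic instrument like piano, each overtone sits at a whole-number multiple of the fundamental. A quick sketch (real piano strings are slightly inharmonic, so the true partials stretch a little sharp of these values):

```python
def harmonic_series(fundamental_hz, count=8):
    """Frequencies (Hz) of the first `count` harmonics,
    fundamental included."""
    return [n * fundamental_hz for n in range(1, count + 1)]

# For A at 220 Hz, the overtones land at 440, 660, 880 Hz and so on;
# these are the 'other notes' the EQ spike picks out of a single pitch.
```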
Let's experiment with cutting overtones, using the same EQ curve, but reversed. Listen to the same repeated piano note as we sweep the parametric EQ across the frequency spectrum.
Sweeping a narrow parametric cut on a piano sound.
As we swept the EQ cut up and down, you could hear the piano tone change radically. Sometimes it sounded hollow, sometimes as though it had too much bass. But there were other times when it still sounded good, even though we were reducing the sound of particular overtones.
The key to creating a great sounding mix is to know how to cut or boost overtones on each track to make each instrument have its own sonic space, so that its overtones don't interfere with other tracks, and other overtones don't interfere with it. Using parametric EQ to cut particular overtones in one track to make room for the overtones in another is one of the most effective ways to do it.
Let's create a mix right now, using only the types of EQ we've discussed so far. We won't touch any faders, nor will we pan any tracks. Just by manipulating overtones by cutting with EQ — we're not even going to boost anything — we'll turn a muddy mix into a clear one.

Create a mix using only EQ

Here's an excerpt of the raw, unmixed multitrack of a blues session, recorded using Record. Check out the raw tracks.
Raw blues tracks excerpt.
Usually when you think of Muddy and the blues, you think of the great Muddy Waters. This session, on the other hand, was the other kind of muddy. The kick drum, bass, and rhythm guitar are overwhelmed by the thick sound of the horn section. The solo organ and tenor sax parts don't overlap, but their tone sure does. Sounds like we have our work cut out for us.
Where do you start? Since the bass and kick drum are the least audible, you might be tempted to bring up their levels with a shelving EQ. Don't bring any levels up! We're on a mission to cut and carve out space for each part.
In fact, just to get you comfortable with using EQ, we're not only going to cut, we're going to cut a lot, using as much as the full 20 dB of range! Normally, just a couple dB will do the trick. But let's go hog wild, just to prove that EQ can't hurt you.
Let's start with the mud. The horn section sounds great, but they're not the most important thing going on. We'll solo them along with the part they seem to be obliterating the most: the guitar.
Guitar and horn section soloed.
Yes, that's muddy. The guitar itself is boomy as heck, too. Using the low mid frequency parametric EQ band on each track's channel strip, let's employ our narrow-Q parametric EQ cut-sweeping trick. The guitar sounds much better with a big cut at 260 Hz, and the horns open up for the guitar when they're cut at 410 Hz for the tenor sax, 480 Hz for the alto sax, and 1 kHz for the trumpet. The saxes benefit from a slightly wider Q. Here's how just these four sound, with EQ cuts.
Guitar and horn section with EQ.
Fig. 19. These are the low mid frequency EQ settings for the guitar, tenor sax, alto sax, and trumpet, from left to right, skipping over the darker channel. The amount of cut is radical, but it's for educational purposes. Plus, it sounds pretty good in the mix, as you'll hear.
Overall, these four tracks sound a bit thin, especially the horns. Keep in mind we're overdoing the EQ amount. But the horn section still sounds like a horn section, the guitar is audible finally, and you can hear the bass guitar now, too. The organ and solo tenor sax are way out in front now, which gives us some room to maneuver with them.
Guitar and horn section with all tracks.
Since we want the tenor sax to be the more prominent of the two, let's solo the two tracks and then carve some space for them.
Tenor sax solo and organ fills.
It's obvious these two instruments are stepping on each other's sonic toes, so to speak. Let's apply our narrow-Q parametric EQ cut-sweeping trick to the organ, and cut it at 412 Hz or so, which sounds the best after the up-and-down sweep. That opens up the tenor sax, and yet the organ is still very audible. Since the tenor sax has such a rich sound, let's apply a low shelf filter cut to it, right at 155 Hz. Now the balance sounds right between the two solo instruments, and there's probably more room for the rest of the mix, too.
Solo sax and fill organ, EQed.
Fig. 20. These are the low mid frequency EQ settings for the organ fills on the right, and the low filter shelving settings for the tenor sax on the left.
Now the mix is really opening up. You can hear the electric bass and drums much more clearly, and though the tenor sax is definitely front-and-center, you still hear the horns, guitar, and organ.
Solo sax and fill organ with all tracks.
Since the drummer is using the ride cymbal, there is a lot of high-mid energy in the drum part. Let's see if we can cut some of that and still have the nice “ping” of the stick on the cymbal — using the narrow-Q parametric EQ cut-sweeping trick, of course. First, the drums, soloed:
Raw drums, soloed.
After sweeping up and down on the soloed drums, we didn't really find a setting that removed the “hiss” but preserved the “ping” of the ride cymbal. So we applied a low pass filter instead, with a cutoff setting of 4.9 kHz. This got us the sound we were after.
Drums with low pass filter.
Fig. 21. A low pass filter worked best on these drums; this is the setting.
Now let's listen to the mix with all of our EQ work in place.
All tracks with EQ.
All the tracks have their own sonic space. Every instrument is audible, and no track is obscuring any other track. It's a pretty good rough mix, and we didn't even make use of panning, reverb, dynamics, or level settings! That means that the distance from this mix to a final mix is much closer than it would have been if we'd started with adjusting levels — and we still have all the fader headroom to work with!
To be sure, we applied EQ in far too generous amounts. But that was just to show you how powerful yet easy it is to make a muddy mix into a transparent one.
It's also good to note that Record's mixer is perfect for this kind of EQing: The controls are all visible at all times, and you can see which track might still have room for a center frequency adjustment.
But even more impressive is what you can't see, but you can certainly hear: Record's EQ section sounds fabulous. It's very smooth, and very musical.

EQ types: graphic EQ

If you're an iTunes user, you know all about graphic EQ. It's similar to parametric EQ in that you target specific bands of frequencies to cut or boost. But the frequencies are hard-wired. Some graphic EQs have as few as five or six bands, while others divide the audio spectrum up into 30 or more bands.
Fig. 22. This is the graphic EQ in Apple iTunes. Graphic EQ is great for applying an overall setting to an entire concert, or to a lot of songs in a particular genre.
Graphic EQ is most often used in live P.A. work, where the room is subject to resonance points that are constant. A graphic EQ lets the engineer identify those points quickly and then cut those risky frequencies from the entire mix for the rest of the night.
In mixing, you need EQ that you can tailor more to the music itself, which is why parametric EQ is what you find on great mixing boards. Like the one in Record.

EQ types: mastering EQ

Once your mix has been perfected by your skillful use of EQ, dynamics, reverb, panning, and level settings, it's ready to be mastered — which is another way of saying it's ready for the final polish. Usually, parametric EQ is used at the mastering stage, but it's used very sparingly. It's great for bringing out some of the middle frequencies that may seem to have gotten lost in the mix after all your work.
Record has a special EQ device for mastering: the MClass Equalizer. It's automatically patched in to the insert effects in the master channel, along with the other MClass devices: Stereo Imager, Compressor, and Maximizer. Used together with subtle settings, the MClass devices can add a magical, professional polish to your mix.
Ernie Rideout is currently writing Reason Power!, scheduled to be published in early 2010 by Cengage Learning. He grapples with words and music in the San Francisco Bay Area.

Mixing

Tools for Mixing: EQ, part 1

By Ernie Rideout
For a songwriter or a band, is there anything more exciting than having finished recording all the tracks for a new song? Hardly. The song that existed only in your head or in fleeting performances is now documented in a tangible, nearly permanent form. This is the payoff of your creativity!
Assuming that all your tracks have been well recorded at fairly full levels, without sounds that you don’t want (such as distortion, clipping, hum, dogs barking, or other noises), you’re ready for the next stage of your song’s lifecycle: mixing.
If you haven’t mixed a song before, there’s no need to be anxious about the process. The goal is straightforward: Make all of your tracks blend well and sound good together so that your song or composition communicates as you intended. And here at Record U, we’ll show you how to do it, simply and effectively.
Regardless of whether you’ve recorded your tracks in computer software or in a hardware multitrack recorder, you have several tools that you can use to create everything from a rough mix to a final mix.
Faders: Use channel faders to set relative track levels.
Panning: Separate tracks by placing them in the stereo field.
EQ: Give each track its own sonic space by shaping sounds with equalization.
Reverb: Give each track its own apparent distance from the listener by adjusting reverb levels.
Dynamics: Smooth out peaks, eliminate background noises, and bring up the level of less-audible phrases with compression, limiting, gating, and other processes.
As you learn more about mixing here at Record U, you’ll learn how these tools interact. This article focuses on the most powerful and — for those new to recording, at least — the most intimidating of them: EQ.
In fact, in the course of this article we’re going to create a mix using nothing but EQ, so you get comfortable using it right away. But before we go on, we must make you aware of its main limitations:
  • It cannot improve tracks that are recorded at levels that are too low.
  • It cannot fix mistakes in the performance.
Hopefully you’ll discover that EQ can be a tremendously creative tool that can improve and inspire your music. Let’s see how it works.

What is it we're mixing?

Before we start tweaking EQ, let’s look at exactly what it is we’re trying to blend together, which is essential to understanding how EQ works. Let’s say we’re going to mix a song consisting of drum, electric bass, electric guitar, organ, and female vocal tracks — a very common configuration. Let’s focus on just one beat of one bar, when all that’s happening is the bass and guitar playing one note each an octave apart, the organ playing a fourth fairly high up, the vocalist sustaining one note, and the drummer hitting the kick drum and hi-hat simultaneously. Here are the pitches on a piano keyboard:
Fig. 1. Here are the fundamental pitches occurring on one beat of our hypothetical multitrack session. Kick drum and hi-hat are dark blue, the bass is red, the guitar is blue, vocals are yellow, and the organ notes are green. With all the space between these notes, what could be so hard about mixing these sounds together?
Let’s take a different look at the fundamental pitches of our hypothetical multitrack moment. In this diagram, the musical pitches are expressed as their corresponding frequencies.
Fig. 2. Here are the fundamental pitches of our hypothetical recording session again, this time displayed as frequencies on a logarithmic display. New to logarithmic displays? The lines represent increases by a factor of 10: To the left of the 100 mark, each vertical line represents an increase of 10 Hz; between the 100 and 1k marks, the vertical lines mark off increases of 100 Hz; between the 1k and 10k marks, the increases are by 1,000 Hz; above 10k, the marks are in increments of 10,000 Hz. This accommodates the fact that a note's frequency doubles with each higher octave; the frequencies add up fast over an eight-octave span. No matter how you count, it still doesn't look like this would be tough to mix. Or does it?
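If you're curious about the math behind that display, the octave-doubling rule is easy to verify yourself. Here's a quick Python sketch (the A4 = 440 Hz reference and the MIDI-style note numbering are our own illustrative assumptions, not something taken from the figures):

```python
# Equal-tempered pitch-to-frequency conversion: each octave doubles the
# frequency, and each of the 12 semitones multiplies it by 2^(1/12).
def note_to_freq(midi_note, a4=440.0):
    """Frequency in Hz for a MIDI note number (A4 = note 69)."""
    return a4 * 2 ** ((midi_note - 69) / 12)

# A2 through A6: each successive octave doubles the frequency,
# which is why a logarithmic frequency axis spaces octaves evenly.
for note in (45, 57, 69, 81, 93):
    print(note, round(note_to_freq(note), 2))
```

Running this shows the A notes landing at 110, 220, 440, 880, and 1,760 Hz: equal steps on a logarithmic display, even though the raw frequency differences grow with each octave.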
Ah, if only that were so. The fact is, one of the reasons that music is so interesting and expressive is that each instrument has its own distinctive tone. You can see why some instruments sound unique: Some are played with a bow drawn across strings, others have reeds that vibrate when you blow into them, some vibrate when you hit them, and others make their sound by running voltage through a series of electronics. But why would that make a group of instruments or human vocalists any harder to mix?
Instruments sound different because the notes they make contain different patterns of overtones in addition to the fundamental frequency: Each instrument has its own harmonic spectrum. Sometimes the overtones are few and not very loud; with other instruments, the overtones can be just as loud as the fundamental, and there can be upwards of a dozen of them. Let's take a closer look at the notes of our hypothetical recording session, this time with all of the overtones included.
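An idealized version of this overtone pattern is simple to compute: the harmonics of a note are just whole-number multiples of its fundamental. Here's a quick Python sketch (the 110 Hz bass fundamental and the harmonic count are illustrative assumptions):

```python
def harmonic_series(fundamental, count=12):
    """Idealized harmonics: integer multiples of the fundamental, in Hz."""
    return [fundamental * n for n in range(1, count + 1)]

# A low A on the bass (110 Hz): the first few overtones already reach
# well up into the range occupied by the guitar, organ, and vocal.
print(harmonic_series(110, 8))
# [110, 220, 330, 440, 550, 660, 770, 880]
```

Real instruments weight these harmonics differently (and drums add inharmonic content), which is exactly why the spectra in the figures below look so distinct from one another.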
Fig. 3. Here's what the harmonic spectrum of that single bass guitar note looks like.
Fig. 4. The harmonic spectrum of our electric guitar note might look like this — and this is through a clean amp!
Fig. 5. The fourths the organ player is holding yield a harmonic spectrum that's even richer in overtones than the guitar.
Fig. 6. Though they're not necessarily tuned to a particular pitch, the kick drum and hi-hat have a surprising number of overtones to their sound.
Fig. 7. If our vocalist sings an "oo" vowel, her note will have the overtones in yellow. If she sings an "ee" vowel, she'll produce the orange overtones.
Fig. 8. Let's put the whole band together for our hypothetical one-note mixing job. Yikes. That's a lot of potentially conflicting overtones, and none of the tracks are even similar to each other in tone! It looks like this is going to be one muddy mix, unless we apply some EQ!
It seems that our simple hypothetical multitrack mix assignment might not be so simple after all. All of those overlapping overtones from different instruments might very well lead to a muddy, indistinct sound, if left alone. Fortunately, even a seemingly cluttered mix such as this can be cleared up in a jiffy by applying the right kind of EQ techniques.
There are several types of EQ, each of which applies a similar technique to achieve particular results. The terms you'll hear or read to describe these results can vary widely, however. You'll often encounter the following terms, sometimes referring to particular EQ types, at other times to generic EQ applications:
  • Attenuate
  • Bell
  • Boost
  • Carve out
  • Curve
  • Cut
  • Cutoff
  • Filter
  • Flat
  • Response
  • Rolloff
  • Slope
  • Spike
  • Sweep
As we go through the various types of EQ, we’ll define exactly what these terms mean and get you acclimated to their usage. We’ll also illustrate each EQ type with audio examples, harmonic spectra, and plenty of sure-fire, problem-solving applications.


Recording Electric Guitar

By Matt Piper
Welcome to the first article in the Record U series. This article will teach basic techniques for recording electric guitar, with information about mic'ing guitar amps and about recording directly into Record with no microphones or amplifiers at all, using Record's built-in Line 6 Guitar Amp device. Let's get started with some general tips for recording your guitar amp!

Use a flashlight

In many combo amplifiers, the speaker is not actually in the center of the cabinet, and may not be easily visible through the grill cloth. In this case, shining a flashlight through the grill cloth should allow you to easily see the position of the speaker so you can place your microphone accurately.

Recording in the same room as your amplifier

Recording close to your amplifier (especially with your guitar facing the amplifier) can have the benefits (especially at high gain settings) of increasing sustain and even achieving pleasant, controlled harmonic feedback. This is due to a resonant feedback loop between the amplifier, the speaker, your guitar pickups, and your guitar strings. You may also be able to achieve this effect when recording directly into the computer, if you are playing in the control room with the studio monitors (not headphones!) turned up loud enough. While it may sometimes be desirable to play guitar while separated from the amp (perhaps because the amp is in a bathroom, other separate room, or isolation box to keep it from bleeding into other instrument microphones when recording a live band), you will lose the opportunity for this particular feedback effect.

My guitar tone sounds so much brighter on the recording than it does when I listen to my amp!

Quite often (especially with combo amplifiers, where the amp and speaker are both in the same cabinet), guitarists get used to the sound of standing several feet above their speaker while the speaker faces straight out, parallel to the floor. Because of this, much of the high frequency content coming from the speaker never reaches your ears. If you make it a habit to tilt the amp back, or to put it up on a stand or tilted back on a chair so that the speaker points more toward your head than your ankles, you will become accustomed to the true tone of your amp: the tone the microphone will record, and the tone a live audience will hear. This may initially require some adjustments to your tone, but once you have achieved a tone you like with this new setup, you can be confident that the sound you dialed in will be picked up by a properly placed microphone.

Proximity effect

When a microphone with a cardioid pattern (explained later in the Large Diaphragm Condenser Microphones section of this article) is placed in very close proximity to the sound source being recorded, bass frequencies become artificially amplified. You may have heard a comedian cup his hand around the microphone with the mic almost inside his mouth to make his voice sound very deep when simulating “the voice of God” (or something like that). This is an example of the proximity effect. This effect can result in a very nice bass response when placing a microphone close to the speaker of your guitar cabinet or combo amplifier. In fact, the Shure SM57 is designed to make use of this effect as part of its inherent tonal characteristics.

On-axis vs. off-axis

You will see these terms mentioned later in my descriptions of the mic setup examples. On-axis basically means that the microphone element is pointed directly at the sound source (sound waves strike the microphone capsule at 0 degrees). Off-axis means (when mic’ing a speaker) that the microphone element is aimed at an angle rather than straight at the speaker (so the sound waves strike the microphone capsule at an angle). On-axis will give you the strongest signal, the best rejection of other sounds in the room, and a slightly brighter sound than off-axis. I usually mic my amp on-axis, with the mic (my trusty Shure SM57) somewhere near the edge of the speaker. However, this is not “the one right way.” The right way is the way that sounds best to you. I hope that the following examples will help you find your own path to the tone you are looking for.

Recording Guitar through an Amplifier with Different Mic Setups (Examples)

The following recordings were made with a Gibson Les Paul (neck pickup) played through a Fender Blues Junior combo tube amp sitting on a carpeted floor (propped back a bit). A looping pedal was used so that the exact same performance could be captured with each microphone setup. The amp was turned up to a somewhat beefy, but not deafening volume.

The Shure SM57 Microphone: A Guitarist’s Trusty Friend

For recording guitar amplifiers, it is hard to find better bang-for-your-buck than the Shure SM57 dynamic microphone. It is an extremely durable microphone, and it can handle very high volume levels. When recording guitar amps, I recommend placing this mic just as close as you can to the speaker. In the following recordings, I have placed it right up against the speaker grill cloth.
Edge mic, on-axis — This recording was made with the SM57 facing the speaker straight on (on-axis), with the microphone placed at the outer edge of the speaker.
Center mic, on-axis — This recording was made with the SM57 facing the speaker straight on (on-axis), with the microphone placed directly in the center of the speaker. You should be able to easily notice that this recording has a brighter tone than the recording made at the edge of the speaker. Since more high frequency content has been captured, the slight noise/hiss from the looping pedal is more noticeable.
Center mic, 45 degrees off-axis — This recording was made with the SM57 placed directly in the center of the speaker, but facing the speaker off-axis at a 45-degree angle. When compared to the preceding center/on-axis recording, this recording is a bit warmer. The noise floor is a bit less noticeable, and the high frequency content is dialed down a bit.
Edge mic, 45 degrees off-axis — This recording was made with the SM57 placed at the edge of the speaker, at a 45-degree angle (pointing toward the center of the speaker). This is the warmest of all the microphone positions recorded here. It has slightly less high-end "sizzle" than the first recording (SM57_edge_straight.mp3).
Though I have not included an example, I will mention that some people even simply hang the microphone so that the cable is draped over the top of the amp and the microphone hangs down in front of the speaker pointing directly at the floor. In this position, the type of floor makes a difference in the tone (wood, concrete, tile, or carpeted floors will result in different sounds), as well as the distance between the speaker, mic, and floor. This arrangement is likely to have the least amount of highs of anything discussed thus far, and will have less rejection of other sounds in the room (if you are recording in the same room as the rest of a band, this method would pick up more sound from the other instruments). I tend to avoid this method myself, but if you are short of mic stands, it could be helpful—and I would not discourage you from experimenting. It may turn out to be just the sound you were looking for.

Large Diaphragm Condenser Microphones

For comparison, I have also recorded the same performance with an M-Audio Sputnik tube microphone. Though not as famous as some microphones by Neumann or AKG, this is a high-quality, well-reviewed large diaphragm tube-driven condenser microphone.
M-Audio states that they were going for tonal characteristics somewhere between a Neumann U47 and an AKG C12. For the following recording, I placed the Sputnik directly in front of the center of the speaker, on-axis, 10 inches from the speaker.
This microphone has switchable polar patterns (including omnidirectional, figure-eight, and cardioid). For the recording above, I used the cardioid pattern, which means the microphone favors sounds directly in front of it, picks up less sound from the sides, and rejects sounds coming from behind it. The SM57, as well as the AKG C 1000 S microphones used later in this article, also has a cardioid pickup pattern.
What I hope will impress you here is how, for this application (recording a guitar amp), the $100 SM57 compares quite favorably to the $800 Sputnik! For recording vocals or acoustic instruments, the Sputnik wins hands down — but for recording guitar amps, the SM57 is an amazing value and hard to beat at any price.

Room mics

Though one might often mic a room with a single microphone (perhaps a large diaphragm condenser), I have opted to use a stereo pair of small diaphragm condensers: specifically the AKG C 1000 S (an older dark gray pair). These microphones currently have a street price of around $280 USD each. I attached them to a stereo mount on a single mic stand, one microphone just on top of the other, with the microphone elements arranged 90 degrees from each other (an XY pattern). This minimizes the chance of phase problems that could otherwise cancel out some frequencies and change the tone of the recording in strange ways.
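To see why those phase problems matter, consider that a path-length difference between two mics delays one signal relative to the other, and the first comb-filter cancellation falls at half the inverse of that delay. A rough Python sketch (the 0.3 m spacing and the 343 m/s speed of sound are illustrative assumptions, not measurements from this session):

```python
SPEED_OF_SOUND = 343.0  # m/s, at roughly room temperature

def first_cancellation_hz(path_difference_m):
    """Frequency of the first comb-filter notch when the same source
    reaches two summed microphones with a given path-length difference."""
    delay = path_difference_m / SPEED_OF_SOUND  # seconds
    # A notch occurs where the delay equals half a wavelength's period.
    return 1.0 / (2.0 * delay)

# One mic 0.3 m farther from the amp than the other: the first notch
# lands near 572 Hz, right in the meat of the guitar's tone.
print(round(first_cancellation_hz(0.3)))
```

With an XY pair, the two capsules sit at essentially the same point in space, so the path difference (and therefore the comb filtering) is close to zero.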
For the following recording, the mic stand was placed about 8 feet from the amplifier, at roughly the height of a standing listener's ears.
This recording is not immediately as pleasing to my ears as the other recordings. However, when mixed with one of the close-mic'ed recordings, its usefulness becomes clearer. Before listening to the next MP3, I suggest prepping your ears by listening again to the SM57 45-degree edge recording.
In this recording, the SM57 recording and the stereo room mic recording are mixed almost evenly (weighted just slightly in favor of the SM57 recording). Adding the room mic pair brings a bit of depth and a bit of high-end definition to the recording, along with more of a sense of space. Of course, this effect could be simulated by adding a digital room reverb (such as the RV7000 Advanced Reverb that comes with Record). Digital reverbs allow you to simulate rooms that may have more pleasing acoustics or larger dimensions than those of a "bedroom studio" or other room in a house or apartment.
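Conceptually, blending the two recordings is just a weighted sum of their samples. A toy Python sketch (the 55/45 weighting and the tiny sample buffers are illustrative assumptions, not the actual levels used in the MP3):

```python
def mix(close_mic, room_mic, close_weight=0.55):
    """Weighted sum of two equal-length sample sequences,
    favoring the close mic slightly."""
    room_weight = 1.0 - close_weight
    return [close_weight * c + room_weight * r
            for c, r in zip(close_mic, room_mic)]

# Toy sample buffers standing in for the close-mic and room recordings.
print(mix([1.0, 0.5], [0.2, 0.8]))
```

In a real DAW the faders do exactly this math per sample; the point of the sketch is just that "weighted slightly in favor of the SM57" means a close-mic coefficient a little above 0.5.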

Recording guitar directly into Record using the built-in Line 6 Guitar Amp device

This is where things get really simple, quick, and easy!
The built-in Line 6 Guitar Amp device has emulations of all types of different amps and speaker cabinets—much more variety than a single tube amp could provide. And of course you can record at any volume level (or through headphones), so apartment-dwelling musicians won’t be visited by the police in the middle of the night for disturbing the peace!
I recorded the same part into Record (for this take I actually ditched the looping pedal and played a fresh performance). I plugged the same guitar directly into my audio interface; no effects pedals of any kind were used. Here are the steps I followed, which also cover the basics of recording a guitar track in Record:
1. Open a new song in Record, and click the Create Audio Track button in the Tool Window.


Tools for Mixing: Reverb

By Ernie Rideout
Of all the tools we talk about in the Tools for Mixing articles here at Record U, reverb is unique in that it's particularly well suited to make it easy for you to create clear mixes that give each part its own sonic space.
Reverb derives its uniqueness from the very direct and predictable effect it has on any listener. Since we humans have binaural hearing, we can distinguish differences in the time between our perception of a sound in one ear and our perception of the same sound in our other ear. It's not a big distance from ear to ear, but it's enough to give our brains all they need to know to immediately place the location of a sound in the environment around us.
Similarly, our brains differentiate between the direct sound coming from a source and the reflections of the same sound that reach our ears after having bounced off of the floor, ceiling, walls, or other objects in the environment. By evaluating the differences in these echoes, our brains create an image accounting for the distances between the sound source, any reflective surfaces, and our own ears.
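You can put rough numbers on those reflections with simple geometry: mirror the source across the reflecting surface, and the extra path length divided by the speed of sound gives the delay between the direct sound and its echo. A Python sketch (the distances and the 343 m/s speed of sound are illustrative assumptions):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, at roughly room temperature

def reflection_delay_ms(source_dist_m, source_height_m, listener_height_m):
    """Delay (in ms) between the direct sound and a single floor bounce,
    using the mirror-image model of the reflecting surface."""
    direct = math.hypot(source_dist_m, source_height_m - listener_height_m)
    # Floor reflection: mirror the source below the floor plane.
    reflected = math.hypot(source_dist_m, source_height_m + listener_height_m)
    return (reflected - direct) / SPEED_OF_SOUND * 1000.0

# A source 3 m away, with source and ears both 1.5 m above the floor:
# the floor bounce arrives a few milliseconds after the direct sound.
print(round(reflection_delay_ms(3.0, 1.5, 1.5), 2))
```

Delays of a few milliseconds like this are exactly the early reflections our brains use to judge room size and source distance, and they're what reverb processors model.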
The good news for you: It's super easy to make your mixes clearer and more appealing by using this physiological phenomenon to your advantage. And you don't even need to know physiology or physics! We'll show you how to use reverb to create mixes that bring out the parts you want to emphasize, while avoiding common pitfalls that can lead to muddiness.
All the mixing tools we discuss in this series — EQ, gain staging, panning, and dynamics — ultimately have the same goal, which is to help you to give each part in a song its own sonic space. Reverb is particularly effective for this task, because of the physiology we touched on earlier. As with the other tools, the use of reverb has limitations:
  1. It cannot fix poorly recorded material.
  2. It cannot fix mistakes in the performance.
  3. Any change you make to your music with reverb will interact with the changes you've made using the other tools.
As with all songwriting, recording, and mixing tools, you're free to use them in ways they weren't intended. In fact, feel free to use them in ways that no one has imagined before! But once you know how to use them properly, you can choose when to go off the rails and when to stay in the middle of the road, depending on what's best for your music.
Before we delve into the details of using reverb in a mix, let's back up a step and talk about what reverb is.

Reverb: Cause and Effect

At its most basic, a reverberation is an echo. Imagine a trombonist standing in a meadow, with a granite wall somewhere off in the distance. The trombonist plays a perfect note, and an echo follows: