Monday, February 21, 2011

Mixing Techniques

What is mixing anyway? If you're in a recording studio mixing an album, or feeding the sound to a video tape or audio cassette, for example, then you are performing a sound reproduction task. If you are mixing the sound for a P.A. system in your church, then you are performing a sound reinforcement task.
Last year I was hired as a consultant for a major Christmas musical being presented at a large auditorium in St. Louis. I was to watch over the actions of the union sound crew to be certain that the producer would get the sound I knew he wanted. I had a fairly amicable relationship with the sound crew. Generally their miking choices were proper; however, at one point I suggested a different approach to miking the piano - boy, did that cause a problem.
What it came down to was this: I learned most of my miking techniques in the studio, where I could take the time to hear what difference moving a mike six inches would make. I had worked in sound reinforcement for several years as well, and knew that the technique I was suggesting would cause no problems in this application either, and would provide a far better, more natural piano sound than his choice would. His background was primarily in sound reinforcement. In fact, he had engineered many events similar to the one we were involved in here. But to my surprise, he had never tried miking the piano in the manner I was suggesting.
I proceeded to raise his dander even more by suggesting that "good audio is good audio, whatever the circumstances." He probably thought I was a real twit for saying that. After all, he didn't know anything of my background, and it was a dangerous blanket statement. But it is in fact true. At the same time, I made the comment with the assumption that we both understood that there are certain techniques appropriate for recording that could be entirely inappropriate, and indeed disastrous, if used in a sound reinforcement application. To his credit, after the soundcheck he came to me and told me how much he liked the sound of the piano. I appreciated that.
When I engineer in a sound reinforcement setting, my ultimate mixing goal is to present an album mix to the audience. That includes everything from microphone choice and placement (within reason) to using digital effects processors for delays and reverb effects. I learned that from my teacher - Bill Porter. Bill came out of the recording side too. He owned a studio in Las Vegas, and among his list of clients was Elvis Presley. Elvis was performing in town when something happened which prevented his engineer from being there. They decided to call and ask Bill to come in and mix for the concerts. Later, people were ecstatic about the sound. They kept coming up to Bill saying how much the live show sounded "just like the album". Well, as Bill once said to me, he didn't know any better. No one ever told him that sound reinforcement should sound different than the album. He just did what he would have done in the studio. Of course the rest is history - Bill went on to engineer for Elvis for years afterwards.
Sherman Keene suggests in his book, Practical Techniques for the Recording Engineer, that there are eight properties of a good mix. They are:
1. Powerful and solid lows
2. Proper use of the very powerful mid range areas
3. Clear and clean highs
4. Proper but not overburdening effects
5. Dimension - some sense of depth
6. Motion - movement of the instruments using pans to heighten the music
7. At least one true stereo track (e.g., strings, piano, hopefully something used "up front" in the mix)
8. Some acoustic information - not just delays and reverb
Although his comments are directed at doing an album mix, they hold true for a sound reinforcement mix in a church as well. Only items six and seven are of limited relevance for our typically mono sound systems. And although his comments are somewhat subjective (I couldn't think of a better way to say it either), if you'll sit down with this list in front of you and listen to a few of your favorite albums, what he is trying to say will begin to sink in. Then you can apply that concept to your own approach to mixing.
By way of elaboration, how much "weight" a mix has is another subjective way of describing powerful and solid lows. The human ear is most sensitive to midrange frequencies, which is why he cautions us to use any EQ in this area properly. Clear and clean highs - like everything else on the list - are as much a result of proper miking technique as of judicious equalization.
This is not an absolute rule, but the best mixing engineer is often a musician. This is primarily because a musician knows what to listen for. He has spent years developing this sense. He also understands exactly what is happening on stage, and can relate to it and to the players from firsthand experience. I've used those abilities to develop my sensitivity to the needs of the audience. In its most basic sense, for instance, I understand the trauma that can be caused by feedback, particularly at an especially reverent moment. So I've developed my hearing enough to pick up on feedback very quickly. My understanding of what is happening on stage and my knowledge of microphones and signal flow all swing into gear in an instant when I recognize the beginning stages of feedback, and I can usually head it off before anyone else notices it. Of course, I've had my share of major feedback zzzingggs like everyone else. I've engineered many services in a church not far from a major airport - sometimes, especially with live music being played on stage, it's been difficult to tell at first whether I was hearing a low frequency feedback problem or a four-propeller cargo plane. Sometimes it gets away from you. My point is simply that this is the process I go through to head off feedback before it becomes a problem. That sensitivity carries right through to the audience's need to understand the lyrics of the songs and their desire to feel the backbeat.
One very sensitive moment in a service is just after the worship team has sung several songs. The songs may have been rather energetic, or they may have been majestic and contemplative. As they wind down to an ending, the worship leader begins to exhort the congregation. Depending on the exact setting, I may choose to help the music fade down under the worship leader faster than the players do. Many times this has been because, although most of the band and singers have stopped, the keyboardist continues to play. If he is still playing fairly energetically, he is probably not aware that his playing is interfering with the congregation's ability to hear what the worship leader is saying. It is then part of my job to see to it that the congregation does in fact hear what is being said. Usually, if I fade the keyboard in the house mix, there will be enough level still coming from his stage monitor to provide a nice keyboard bed under the worship leader. Of course, you must stay on your toes in this kind of situation - the worship leader may jump back into a song without notice, and the keyboard level, along with everyone else's, will need to be back up at their downbeat, not several beats or even measures in!
One way to look at it is that you are constantly shaping the overall dynamics of the music. For example, here's one technique I often use when working with someone singing to an accompaniment track. Probably nine times out of ten the song they choose will have a big finale ending. Once the singer reaches the last line of the song, the ending may carry on a bit longer. For one thing, they're not sure what to do with themselves after that, and the audience is hanging on that last line. So I generally help define the finale by pushing up the level of the track in the house. Another spot to help it along is if there happens to be an instrumental solo section between vocal parts. The singers almost never know what to do with themselves here, and you can take attention away from them and shift it back toward the worshipful message originally intended by the songwriter, the players and the producer, by lifting the level in the house slightly. There's no rule that says the track must stay at one preset level for the entire song. Sometimes that does work. But it's very difficult in the studio to mix a soundtrack from an album project that predicts and provides for every setting the song will be presented in. You should feel free to carefully, cautiously, musically help it along wherever appropriate. If you're unaccustomed to this type of fussing with the track, be sure to rehearse with the performer before the service.
Mr. Keene suggests that you should use proper but not overburdening effects. You can't get much more subjective than that, but his point is well taken. And if you are working in a reverberant church building, you are already rather limited in what kinds of effects you can add, and in how much. For instance, if the acoustics of the sanctuary provide a strong reverb with a reverb time of 3 seconds, you probably have absolutely no need to add any artificial reverb to the vocals. To do so could totally wipe out any chance of those vocals being intelligible in that room. There is a slight chance in this setting, however, that an engineer mixing the same group of singers and players for the video tape or audio cassette may indeed need the aid of an artificial reverb unit to better blend the vocals.

Digital delays are another subject. If the room already presents a slap echo of, say, 120 milliseconds, then why on earth would you add to the confusion by putting a digital delay on the vocals? No one in the congregation will have any chance of understanding the words if you confuse them further with another delay. It seems so academic, yet I mention it because I have seen engineers do this very thing - folks who should have known better.
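To put that 120 milliseconds in perspective: sound travels at roughly 1,130 feet per second, so a 120 millisecond slap means the reflection has traveled about 135 feet farther than the direct sound. A hard back wall something like 65 or 70 feet behind the congregation will produce that all by itself, so the room is already supplying more delay than any vocal needs.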

One final brief overview. Learn the signal flow of your console so well that you can freely operate it instead of allowing it to control - spelled L I M I T - you. Think through all of the routing possibilities.
Here are some reminders:
Microphone Inputs are best reserved for microphones, because of the fairly large gain stage associated with them. In a pinch, however, you can plug the output of a line level device into them. Just remember that this will give you more noise (a worse signal-to-noise ratio) than feeding that same device into a line input would.
Line Inputs are intended to receive the output of any line level device, such as a tape player, effects device, and so on.
Echo Returns are intended to receive the output of any line level device. Their only shortcoming is that they typically do not have a channel equalizer as part of their circuit. This is because oftentimes the return from an effects device does not need to be equalized.
Monitor and Echo Busses - train yourself to think of these as pre-fade or post-fade auxiliary busses. This may help free your thinking to consider other possibilities for their use, including feeding an audio cassette recorder or video tape recorder, and so on.
Submasters - remember that even on a simple stereo console you can set up the console to submix different segments of your mix. This may save your life as a mixer someday and at the very least will make life as an engineer much simpler.
Well, obviously we've not said everything that can be said about consoles. We have, however, given you a broad overview of what they do and how they do it, along with some suggestions for operating them "musically". If you have any questions, or would like to suggest a topic for further discussion, feel free to contact us. We appreciate you. Many thanks.

Friday, December 3, 2010

Cubase Techniques

Cubase: Building Your Own Multi-articulation Instrument

Cubase Notes & Techniques

Last month, we introduced Cubase’s VST Expression facility. Now it’s time to build your own multi-articulation instrument.
John Walden
Five string sounds loaded into Kontakt Player 4, ready for some Expression Map magic.
In last month’s column, I introduced Cubase’s VST Expression system and looked at how Expression Maps can be used to adjust MIDI data in real time. With this follow-up article, I want to show how you can use Expression Maps to enhance your simple sample-based instruments — by combining them to create multi-articulation instruments with keyswitching.
This technique has three benefits: first, a more sophisticated and expressive version of an instrument can be created and controlled from a single MIDI track, so you’ll no longer require separate MIDI tracks for different articulations; second, you’ll be able to add and edit performance variations after the performance has been recorded; and third, because you’re now using a single MIDI track, you’ll be able to add expression marks to a printed score. The most obvious context in which you might wish to do this is with orchestral sounds, such as strings — so I’ll use this as my main example — but the same principles can be applied to any instrument.
Simple Doesn’t Mean Bad
A top-of-the-range orchestral sample library with keyswitching built in doesn’t come cheap. A cheaper instrument is likely to be simpler, but that doesn’t mean it can’t get the job done. In fact, many decent ‘all-in-one’ libraries or hardware synths now include some very respectable orchestral sounds, and you may have access to some perfectly good ones already.
For example, if you have any orchestral string sounds, you’ll probably have at least some of the following performance styles: arco (normal bowing), legato (where notes run smoothly into one another), staccato (short, clipped notes), pizzicato (plucked with the fingers) and tremolo (a rapidly repeated note). The problem is that these are likely to be single instruments, and won’t be key-switchable — which is where VST Expression comes in, because by constructing a suitable Expression Map you can use the different expressions together as if they were a single instrument.
Fully Loaded
Let’s break the process down into steps. The first requires that you have both a suitable set of sampled instruments and a multitimbral sample-playback tool (one where different instruments are accessed via different MIDI channels). This rules out Cubase’s HalionOne, which only allows one instrument per instance, but the full version of Halion would be fine, as would many third-party instruments.
I’ll base my example around Native Instruments’ widely used Kontakt Player 4 (which is available as a free download from NI’s web site). As shown in the first screenshot, I’ve loaded five string patches, and in this case I’ve used ‘light’ versions of each patch from Peter Siedlaczek’s String Essentials 2 library. Don’t worry if you don’t have it, because the whole process could just as easily be based around five patches from a basic GM-style synth. If you want to replicate my example on your own system, simply match the performance articulations and MIDI channel numbers that I’ve used: arco (channel 1), legato (channel 2), pizzicato (channel 3), staccato (channel 4) and tremolo (channel 5). I chose these simply because they cover the most obvious styles for a general string performance.
The next step is to create an empty MIDI track and set its output routing to your multitimbral sample-playback tool (ie. Kontakt Player in this example). It’s probably best to set the output MIDI channel to that of your default sound, although the Expression Map we’re about to create will change the final MIDI channel sent to the sample player, according to the articulation we wish to play.
On The Map
A single MIDI track can be used to control all five performance styles.
Of course, the next step is the creation of the Expression Map. As described last month, go to the VST Expression panel in the Inspector and open the VST Expression Setup window, then start a new Expression Map. The screenshot below shows the Map I created for this example, which uses the five sampled performance articulations and, for each one, defines five levels of dynamics (going from a relatively soft pp up to a loud fff). This gives a total of 25 Sound Slots in the central panel and 10 entries in the Articulations panel.
The dynamics levels have been created using the same approach as last month, so, for each level, the Output Mapping panel’s MIDI velocity setting is used to adjust the actual velocity of the note by a fixed percentage (I used a range from 70 percent for the soft pp up to 160 percent for the loud fff, but the exact settings are a matter of personal taste). For some articulations, you can also use the MIDI note Length setting to change the note length. For example, I used 150 percent for all the legato articulations, as this seemed to work nicely with my samples, and seemed to help them ‘run together’. In contrast (and unlike last month’s example), the staccato samples I used were suitably short and snappy already, so I didn’t need to use the Length setting in this case.
The key element in completing this Expression Map is the Output Mapping panel’s Channel setting. For each of the five performance styles, the Channel setting must match the MIDI channel number for the sampled instrument in your playback tool. This allows the Expression Map to automatically remap the incoming MIDI data and send it out to the right MIDI channel, in order to select the performance style required.
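If it helps to see the whole Output Mapping idea in one place, here’s a minimal sketch in Python. To be clear, this is only my own illustration of the logic, not anything Cubase actually runs; the intermediate dynamic names and percentages are placeholders I’ve invented, while the pp and fff figures, the 150 percent legato length and the channel numbers match the setup described above.

```python
# Conceptual sketch of an Expression Map's Output Mapping (illustration only,
# not Cubase code). Each Sound Slot pairs an articulation with a dynamic level
# and rewrites the note's channel, velocity and length before it reaches the
# multitimbral sample player.

ARTICULATION_CHANNEL = {  # matches the Kontakt Player channels used above
    "arco": 1, "legato": 2, "pizzicato": 3, "staccato": 4, "tremolo": 5,
}

# Velocity scaling per dynamic level, in percent. Only the pp (70%) and fff
# (160%) figures come from the article; the middle values are placeholders.
DYNAMIC_VELOCITY_PCT = {"pp": 70, "p": 90, "mp": 110, "f": 135, "fff": 160}

def remap_note(pitch, velocity, length_ticks, articulation, dynamic, length_pct=100):
    """Return (channel, pitch, velocity, length) as the sampler would receive it."""
    new_velocity = max(1, min(127, round(velocity * DYNAMIC_VELOCITY_PCT[dynamic] / 100)))
    new_length = round(length_ticks * length_pct / 100)
    return ARTICULATION_CHANNEL[articulation], pitch, new_velocity, new_length

# A legato fff note: routed to channel 2, velocity boosted, length stretched to 150%.
print(remap_note(pitch=60, velocity=80, length_ticks=480,
                 articulation="legato", dynamic="fff", length_pct=150))
```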
Directions & Attributes
The completed Expression Map. Note the use of the Channel, Length and Velocity settings in the Output Mapping panel for the currently selected Legato fff Sound Slot.
The only other key consideration is what to define as a ‘Direction’ and what as an ‘Attribute’, and I’ve tried to follow convention. When notating string parts, performance styles such as arco, legato and pizzicato tend to be written as ‘directions’ — and once you see the symbol for one of these styles, it will apply to all subsequent notes until you see a different symbol. In contrast, staccato and tremolo are more commonly written as ‘attributes’: they apply only to the notes that are marked, after which the player will return to the previous playing style.
With the exception of features such as accents (which I’ve avoided here to keep the example relatively straightforward), dynamic levels such as pp, mp and f are always written as ‘directions’, which apply until the next dynamic level is marked.
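A tiny sketch may make the distinction more concrete. Again, this is just my own illustration of the notational convention, not Cubase’s internal logic: a direction stays in force until the next one arrives, while an attribute touches only the notes it is attached to.

```python
# Illustration only: how 'directions' persist and 'attributes' apply per note.
# (Dynamic levels behave like directions too, tracked in their own lane.)

DIRECTIONS = {"arco", "legato", "pizzicato"}   # persist until replaced
ATTRIBUTES = {"staccato", "tremolo"}           # apply only to marked notes

def resolve(notes, start="arco"):
    """notes: list of (pitch, marks). Returns the playing style for each note."""
    current = start
    result = []
    for pitch, marks in notes:
        current = next((m for m in marks if m in DIRECTIONS), current)
        attribute = next((m for m in marks if m in ATTRIBUTES), None)
        result.append((pitch, attribute or current))  # attribute wins for this note only
    return result

# 'legato' carries forward; the 'staccato' attribute hits one note, then legato resumes.
print(resolve([(60, ["legato"]), (62, []), (64, ["staccato"]), (65, [])]))
# -> [(60, 'legato'), (62, 'legato'), (64, 'staccato'), (65, 'legato')]
```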
Remote Control
The final step — which is optional — is to define Remote Keys for each articulation. If you intend to add your expression via one of the MIDI editors after playing the part, rather than during performance, you can leave the Remote Key settings blank, but if you want to be able to switch between articulations via your MIDI keyboard (that is, create key switches), then a note can be assigned to a particular Sound Slot in the central panel of the VST Expression Setup window. As these keyswitches are only likely to be used while playing ‘live’, there’s no need to define one for every Sound Slot (although you can if you want to). In this case, I’ve simply defined one key switch for each of the five main performance styles and done this for the mp dynamic level in each case. These would be perfectly adequate while playing in a part, allowing me to switch between performance styles, and then add my full range of dynamics expression after recording, using one of the MIDI editor windows.
Usefully, once a note is used as a Remote Key, it doesn’t generate a sound in the sample player (the Expression Map automatically mutes it): this is helpful if your sampled instrument has sounds mapped across the entire key range but you still want to use key switches. I also tend to engage Latch Mode, as this means you don’t have to hold down the key switch: just press it once, then release, and it will stay active until the next key switch is pressed. Finally, if you want to move your key switches to another area of the keyboard (perhaps to use them with a different MIDI keyboard controller), the Root Note setting allows this to be done automatically, without remapping the individual switching notes.
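Here’s one last sketch of how I picture Remote Keys behaving with Latch Mode engaged. The key numbers are hypothetical and the code is only an illustration of the behaviour described above, not how Cubase implements it: a keyswitch note is muted, it selects a Sound Slot, and that slot stays active until the next keyswitch arrives.

```python
# Illustration only (hypothetical key numbers): Remote Keys with Latch Mode on.
# A keyswitch note never reaches the sampler; it just changes the active Sound Slot.

REMOTE_KEYS = {24: "arco mp", 25: "legato mp", 26: "pizzicato mp",
               27: "staccato mp", 28: "tremolo mp"}

def play(notes, root_offset=0):
    """notes: incoming MIDI note numbers. root_offset mimics moving the Root Note,
    shifting the whole keyswitch zone without remapping the individual switches."""
    active = "arco mp"                     # default Sound Slot
    played = []
    for note in notes:
        slot = REMOTE_KEYS.get(note - root_offset)
        if slot is not None:
            active = slot                  # latched: stays until the next keyswitch
            continue                       # the keyswitch itself is muted
        played.append((note, active))
    return played

print(play([25, 60, 62, 27, 64, 65]))
# -> notes 60 and 62 sound as legato mp; after the switch, 64 and 65 as staccato mp
```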
No Strings Attached
Once the Expression Map is in place, the Key Editor’s Articulation lanes can be used to add expression to the performance.
The example uses orchestral strings, but there’s no reason to limit yourself to orchestral instruments, and a good candidate for this technique is electric bass. There are lots of good, single-articulation sampled bass instruments that could be used to create a comprehensive, keyswitched version. To get you started, I’ve put together a map based on four playing styles (sustained, muted, staccato and slapped), along with my main strings example. Simply add your own samples and experiment!

Monday, November 22, 2010

Composers





How to Write a Hit
If you love music and you’re determined to succeed as a songwriter, you have the potential to do it, even if you have no musical training. The first and most important step is to learn everything you can about songwriting and what makes a song a hit. Part 1 begins with ways to go about getting this education by finding work that can finance some musical training while also increasing your knowledge of the music business. Understanding how historical events and social attitudes influenced which songs became hits during the twentieth century sets the stage for exploring what hits are made of.
Yip (E. Y.) Harburg, who wrote the lyrics for The Wizard of Oz, once said, “It doesn’t matter what else you have if you don’t have the idea.” You’ll discover how to find and tap into all the great ideas around you. You’ll learn about the many ways that words and music can come together and find out how to meet the right partner, establish a successful working routine, and keep the partnership going no matter what pressures arise.
Most of the major songwriters feel that their work is half done if they come up with an exciting title. Why do some titles instantly grab the public’s imagination? I analyze hit titles, past and present, and determine which ones have built-in hit potential.



I remember my mother standing over me, urging me to practice the Beethoven piece my teacher had assigned me for that week. I was bored and rebellious, and finally I shouted, “I want to be a songwriter. How can studying Beethoven help?” She shrugged her shoulders and said, “How can it hurt?”
Years later, I’m grateful for my mother’s advice. Admittedly, formal knowledge of music isn’t necessary to become a hit songwriter. Songwriting, as multimillion-selling composer Barry Mann has said, is an inborn ability. You may hit Billboard’s top spot without a single lesson of any kind. But a musical background can make songwriting success easier.
Looking at the Hit-Makers
Songwriters can be trained by teachers or be entirely self-taught. The question is: If you want to make a career of songwriting, how much training do you need? Let’s look at some chart-topping writers for an overview.
These songwriters studied music from the time they were young:
➤ Sheryl Crow received a degree in classical music from the University of Missouri and taught music at a St. Louis elementary school.