Can you hear what you’re doing? (Part 1)

If there’s one thing the advent of digital audio has accomplished, it is what I call the democratization of record making.  Unlike the days when musicians needed to interest a large company in order to get a record made, today many have small studios of their own and, with the help of the Internet, self-release their music to the world.

While the technology and the means have certainly become much more widely available than they used to be, the information on how to use the technology has not proliferated to anything like the same degree.  There are plenty of magazines, both print and Web-based, and several Internet fora where recording enthusiasts gather, and these provide “how to” instructions.  What is missing, however, is the reasoning behind the how: the why.

So the new studio owner buys the hardware and software they read about and proceeds to turn the knobs, real and virtual, and then wonders what went wrong.  Not in every single case, of course, but from what I’ve heard over the years, the ones who are truly pleased are the exceptions.  Perhaps they sought something that did not sound like a true representation of themselves and their instrument(s).  Nothing wrong with that.  What is “good” or “better” or “best” depends entirely upon precisely what one seeks.

For those who seek to make recordings that sound less like recordings and more like musical performances (real or imagined), the standard recipes won’t work.  They are designed to achieve certain types of sound.  They are not designed to “get out of the way”.  (I use that phrase often lately when discussing audio gear or setups or recordings that I’ve found particularly involving. To my mind, they work because they “get out of the way” and allow the listener better access to the music.)

This blog entry will be the first in a series written with the hope of helping musicians and other recordists who are interested, like myself, in studio setups and recordings that get out of the way.  The series will not necessarily be consecutive in terms of publication (there may be other topics interspersed along the way) but the goal will be to raise some issues not raised elsewhere.  If these provide food for thought and perhaps inspiration for trying something different, I’ll consider them successful.  For those who don’t make records or play instruments but who comprise the audience, the listeners, I hope there is something here of interest for you as well.

Above all, my recommendation is to not simply take my word for what you can expect to hear, since I can only report on how sounds strike my ears.  I encourage all to listen for themselves and draw their own conclusions.  Remember that asking any three audio folks a question will result in at least four different answers (five of which may well be wrong).  Only listening for yourself will tell you how something sounds to you.

In some earlier entries in this blog, I’ve mentioned something I’ve called “The Questions”.  To quote from one of those entries, “These are questions that need to be asked if one is ever to arrive at answers.  They are the questions I’d never seen mentioned in any of the books on recording I’d ever read or in any of the magazines.  They are the questions I was never taught to ask when I was an assistant engineer, the questions that students in today’s “audio engineering” schools never encounter.”

Let’s start our exploration by asking the first question I always ask about any studio: “Can you hear what you’re doing?”  This can be rephrased to accommodate listening setups as well as recording setups: “Can you hear past the system, all the way to the recording itself?”  Seems like an obvious question – at least it should be – but the fact is, in my experience, monitoring is all too often the weak link in most studios I’ve visited.  Since every decision regarding the sound at every step in the process of record making is based on what the monitors tell us, if you can’t hear past the monitoring all the way to the recording, if you can’t hear what you’re doing, you can’t determine how your recording is going to sound.  You can’t make it sound the way you want it to because you don’t know how it sounds.

Many studios have different sets of monitors and these all have very different presentations.  (This is discussed in the entry called Why doesn’t it sound (in here) like it sounds out there?)  Folks will often take a reference out of the studio to “see how it sounds” on some other system or even in the car(!).  Each provides its own view, like lenses with different tints or like prisms, but more often than not, none simply gets out of the way.  From the blog entry cited above: “After all, if the engineer can’t hear what they are doing, the best they can do is attempt to blindly steer in the desired direction but the results are effectively left to happenstance.  It occurred to me that adjusting sound while referencing typical studio monitoring is like mixing paint colors while wearing sunglasses.  Over the years, a few folks have claimed to be able to hear “around” the monitors but the audible evidence always tells a different story.”

Certain types of systems will always apply certain types of colorations to how the recording is presented.  That will not change.  A system that gets out of the way, allowing access to the recording itself, removes any questions about what has been captured and how well (or not) it has been captured.  A recording that sounds right on such a system will sound its best on the greatest number of other systems.

Does that mean that everyone needs to buy a certain type of speaker and all will be well?  That would be nice but unfortunately, it doesn’t work that way.  Monitoring is more than the speakers.  It is the room in which one listens.  It is where in the room the speakers are located.  And where in the room the listening position is located.  And where just about everything else in the room is too.  The good news is that by paying attention to all these things, just about any speaker can be helped to get just a little more out of the way.  While the basic character, the basic potential of a given speaker design won’t be changed, in most instances, whatever that potential is can be a lot more fully realized.

This is a big subject and there is much to be said.  This time out, we’ll just start with a few ideas to experiment with.  To start, let’s talk about acoustics. To keep it simple, we’ll break the subject into two areas: bass acoustics and treble acoustics.  Every enclosed space, meaning every listening room, every studio and control room that isn’t outdoors, will have resonant modes.  These are frequencies in the bass where the room tends to “sing”.  When the speakers present content with these frequencies (or their harmonics), the room will tend to “hold onto” these parts of the sound, even after they have stopped in the input signal.  In addition to causing these frequencies to linger too long, filling what should be quieter parts of the signal, some of these resonances will cause certain frequencies to be disproportionately louder (or softer) than they are in the input signal.
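
For those who like to put numbers on this, the lowest “axial” modes of a rectangular room can be estimated from its dimensions alone.  Here is a minimal Python sketch; the room dimensions in it are assumptions chosen purely for illustration, so substitute your own measurements.

```python
# Rough sketch: estimate the first few axial room-mode frequencies.
# The room dimensions below are hypothetical; use your own measurements.
SPEED_OF_SOUND = 343.0  # meters per second, near room temperature

def axial_modes(dimension_m, count=4):
    """First few axial mode frequencies (Hz) along one room dimension."""
    return [n * SPEED_OF_SOUND / (2.0 * dimension_m) for n in range(1, count + 1)]

room = {"length": 6.0, "width": 4.5, "height": 2.7}  # meters (assumed)
for name, dim in room.items():
    modes = ", ".join(f"{f:.0f} Hz" for f in axial_modes(dim))
    print(f"{name:>6}: {modes}")
# For this hypothetical room, the length alone gives roughly 29, 57, 86 and 114 Hz,
# frequencies the room will tend to "hold onto" when the program contains them.
```

Such a tally won’t replace listening, but it does show where a given room is likely to “sing”.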

While proper acoustic treatments can make important differences in the sound (and will be covered in a future entry in this series), the starting point will determine their effectiveness.  If the goal is to get the monitoring out of the way, a key part of this is getting the room itself out of the way.  Better to minimize any excitation of room resonances from the start.  Placement of the monitors plays a big part here.  As we approach a room boundary, resonant excitation increases.  As we approach places where boundaries meet, such as corners, excitation increases further.  In most studios, we find the listening position behind the console (i.e., mixing board) and the console placed toward the front of the room.  This common placement tends to put the monitors in positions that are very good at stimulating room resonances.  Moving the monitors away from boundaries results in less interference from the room.  I have often said “Every foot from the wall adds $1000 to the sound.”

In terms of acoustics, bass issues manifest themselves in the room’s resonant modes.  In the treble, it is reflections that cause acoustic issues.  The most harmful are called “early reflections” because they arrive at the listening position just after the arrival of the direct sound from the loudspeakers.  These slightly delayed sounds will alter instrumental timbres and smear stereo imaging, in effect defocusing the audio “picture”.  Here again, proper acoustic treatment of early reflections can make significant differences in the perceived sound, but here too, placement is the first step in ensuring the system and room get out of the way to the greatest degree.

Early reflections can occur from room boundaries and from objects in the room, especially from objects between the monitors and the listening position.  Consider the large reflective surface that is the console in most studios.  It is common to see loudspeakers placed atop the meter bridge of the console.  Sounds bouncing off the console reach the engineer’s ears just slightly behind the direct sound from the speakers.  The reflected sound combines with the direct sound and at these distances, one of the results will be a dip in the midrange (a weakening of sounds in the “presence region”).  In an effort to remedy this, the engineer tends to reach for the equalization controls to boost the level of midrange frequencies and “restore” the missing presence.  The problem is, the “remedy” is being applied to a recording that isn’t missing anything.  Because the monitoring has not gotten out of the way and is instead providing false information, something that is not contained in the recording but is in fact an artifact of the monitoring setup, the engineer is being misled and a recording that doesn’t need a thing is being arbitrarily brightened.  Played on a system that doesn’t suffer from the same reflections, the recording now has an artificial, hardened “edge”.
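
To get a feel for why that dip occurs, one can estimate where the cancellations fall from the difference between the direct and reflected path lengths.  The sketch below is only an illustration; the path lengths in it are made up, and measuring your own setup will give figures for your room.

```python
# Rough sketch: comb-filter notches from a single early reflection (e.g., off a console).
# The path lengths below are hypothetical; measure the direct and reflected paths in your setup.
SPEED_OF_SOUND = 343.0  # meters per second

direct_path = 1.20     # meters, speaker to ear (assumed)
reflected_path = 1.45  # meters, speaker to console surface to ear (assumed)

extra_distance = reflected_path - direct_path               # how much farther the reflection travels
delay = extra_distance / SPEED_OF_SOUND                     # how late the reflection arrives
notches = [(2 * k + 1) / (2.0 * delay) for k in range(4)]   # frequencies where cancellation occurs

print(f"reflection delay: {delay * 1000:.2f} ms")
print("notch frequencies:", ", ".join(f"{f:.0f} Hz" for f in notches))
# With these assumed paths: a delay of about 0.73 ms and notches near 686, 2058, 3430 and 4802 Hz,
# dips that are an artifact of the setup rather than anything contained in the recording.
```

The point is not the specific numbers but that the dip the engineer is tempted to “fix” with EQ is created by the monitoring, not by the recording.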

With all the above in mind, we’ve started to answer the questions “Can the room affect what I hear from the speakers?” and “Can where I place the speakers and what I place near them affect what I hear from the speakers?”  If monitoring is the crucial aspect of setting up a studio, where to start?  My experience has been that it is best to start with a clean slate.  For any studio or listening space, rather than fill the space and see what’s left for the monitoring, I find it best to start with the monitors themselves and place everything else afterward.  I’ve already mentioned staying well away from room boundaries.  In the middle of the last century, engineer Peter Walker determined that room excitation can be minimized by placing the monitors near the 1/3 points along the room’s diagonals.  In other words, as a start, find the points that divide the room’s length and the room’s width into thirds.  Placing the monitors near these points will excite the room the least.  I have had good success in several rooms and studios by leaving 1/3 the room’s width between the speakers and 1/3 the room’s length behind them.  (For more on this subject, see Setting up your monitoring environment.)
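
As a quick illustration of that starting point, here is a small sketch that computes those positions for a hypothetical room.  The dimensions are assumptions, not a prescription; treat the results only as a place to begin listening from.

```python
# Rough sketch of the "1/3" starting placement described above.
# Room dimensions are hypothetical; the origin is the front-left corner,
# x runs across the width and y runs along the length, away from the front wall.
room_width = 4.5   # meters (assumed)
room_length = 6.0  # meters (assumed)

speaker_y = room_length / 3.0        # 1/3 of the room's length behind the speakers
left_x = room_width / 3.0            # 1/3 of the room's width between the speakers...
right_x = 2.0 * room_width / 3.0     # ...so each speaker sits at a 1/3 point of the width

print(f"left speaker:  x = {left_x:.2f} m, y = {speaker_y:.2f} m from the front wall")
print(f"right speaker: x = {right_x:.2f} m, y = {speaker_y:.2f} m from the front wall")
# For this hypothetical room: speakers 1.5 m apart, each 2.0 m out from the front wall.
```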

For now, before placing other items in the room, set the listening position so that your head is just slightly farther from each speaker than the speakers are from each other.  In other words, if the center of the left speaker is, for example, 72 inches (~1.8 meters) from the center of the right speaker, place the listening position so that your head is slightly farther than this distance from either speaker, say perhaps 80 inches (~2 meters).  Aim the speakers at a point just behind the listening position.
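
Using the example figures above, a quick calculation shows roughly where that puts the listening position relative to the line joining the speakers.  This is only geometry, not a rule; fine placement is still done by ear.

```python
# Rough sketch of the listening geometry described above, using the example numbers from the text.
import math

speaker_spacing = 72.0   # inches between speaker centers (the example figure)
listen_distance = 80.0   # inches from each speaker to the listener's head (slightly farther)

# With the listener on the centerline, how far behind the line joining the speakers?
behind_line = math.sqrt(listen_distance**2 - (speaker_spacing / 2.0)**2)
print(f"listening position: about {behind_line:.1f} inches behind the line between the speakers")
# About 71.4 inches here.  Toe the speakers in so they aim at a point just behind this position.
```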

To those not familiar with such a setup, having speakers near the 1/3 points can seem like the speakers are “in the middle of the room”.  But listen to how much easier it is to hear past the speakers, to the recording itself.  Now you hear the bass contained in the recording and not the sympathetic, out of tune “woof” of room resonance.  The sound becomes freed from the confines of the speakers and has a depth dimension (if the recording contains this — more on the subject in a future entry).  The sense of the speakers getting out of the way is increased as the speakers themselves become less obvious sources of the sound.  The part of the room behind the speakers simply comes alive with the stereo “soundstage”, as determined by the recording itself.

Having a monitoring setup like this doesn’t just increase how much you can hear from the recording.  It changes how you go about making recordings.  Now you can hear what you’re doing.

Two worlds, heroes and real stereo

In the September 26, 2013 entry entitled Why doesn’t it sound (in here) like it sounds out there? I mentioned a growing awareness in my earliest days as an engineer that what I’d previously thought of as the world of audio was in fact two different worlds.  On the one hand there was the audiophile world that suggested recordings could be made and played back with the aim of recreating the sound of real musicians, playing real instruments in real spaces.  On the other hand was the professional world in which I’d been working, where the idea of sounding real did not seem to be a frequent consideration.  Some of the audiophile gear, particularly among the loudspeakers, was fantastic at approaching the sound of real music.  In contrast, the studio monitors were capable of playing at extraordinary volume levels.  Many of the recordings popular among audiophiles excelled at capturing a sense of real life, including not only the sounds of the instruments but also the spaces in which the recorded performances occurred.  In contrast, the studio recordings, many of which captured their own magic, never allowed the listener to suspend disbelief.  They sounded like recordings, whereas some audiophile recordings sounded instead like the music itself.

As my skills and experience grew, I came to feel each of these worlds could learn something from the other, to the mutual benefit of all concerned and especially the listener.  Yet how often I was reminded of how little interaction there tends to be between pro and audiophile worlds.  To this day, it seems to me that all too many pros often miss opportunities for much higher quality audio, while at the same time, all too many audiophiles lack an understanding of how their records are made.

One of the best examples of the latter is the oft-mentioned desirability in the audiophile Internet fora of “flat transfers”, the idea being that the finished masters used as sources for replication of the finished product should have no equalization or other processing applied.  (The term “flat” suggests no frequency equalization is applied – nothing to “tilt” the frequency response – but the term can also imply no other processing as well.)  In my early days as an engineer, I felt the same way.  Then I had the opportunity to hear how most master tapes actually sound.  When one considers the types and number of microphones, where they are typically placed, the signal path in most studios and the monitoring (of which I have already spoken in earlier entries), it should come as no surprise that most master recordings need help.  Put another way, if a recording was made with sufficient treble energy to bring on a headache in the listener and if I can make that recording hurt less by the judicious application of frequency equalization, I would think EQ is, in fact, a good thing.  Indeed, a tiny minority of very well made recordings are best served with no EQ at all, but most studio recordings will benefit quite significantly from some well considered equalization.

Now I can understand that sometimes, the reason a recording might hurt so much is precisely because of EQ – bad EQ, perhaps because the engineer was trying to make bad monitoring sound right.  This goes back to The Questions I mentioned in the previous entry.  Is the EQ being applied to address a flaw in the recording or is it mistakenly being used to address bad monitoring?  This question should have been preceded by “How trustworthy is the monitoring in this studio?”  Whatever the reason, when heard on a system capable of getting out of the way sufficiently to allow one to hear the recording itself, careful application of EQ can be used to repair at least a good part of the damage done the first time.  If the recording can be made more listenable with EQ, should EQ be avoided simply because it has been misused elsewhere?  I’d first ask “How artifact free is this equalizer and the settings I intend to apply?”  (That last question, like most of the others, turns out to be critical for those interested in making high quality recordings.  A big part of the reason many have come to think of EQ as bad is that among those who use EQ, the question is almost never asked.  Even on the best equalizers, the wrong settings can cause sonic problems.)  With a positive answer, I would elect to apply the EQ.  My experience has supported this approach over the course of hundreds if not thousands of recordings.

I have always very much enjoyed it when an audiophile sensibility entered into the pro audio world and pointed to what could be.  Some of my fondest memories of my years at Atlantic involve the time I spent with the great mastering engineer George Piros.  George would often tell me of his early days working with Bob Fine and Wilma Cozart on their recordings for Mercury Living Presence.  I had not heard of these recordings before, and when I followed up to find some of them, I discovered a great many joys, both musical and sonic.  C. Robert (“Bob”) Fine was to become one of my engineering heroes and one of my inspirations, inasmuch as he made his stereo recordings with only three microphones.  George was to become one of my mastering heroes for his preservation of musical dynamics.  He remains one of the tiny handful of mastering engineers I can name who did not routinely apply dynamic compression to his signal path.  Everything George mastered, from Bob Fine’s classics to much of Atlantic’s classic jazz, rhythm and blues, and rock recordings, has a sense of Life to it.  One of my favorite memories was made one day while walking past the outside of his mastering room.  I heard loud music through the heavily padded, “soundproof”, double-door “airlock”.  I decided to visit and upon entering the room, saw George leaning over the microscope of his lathe, examining the fresh groove he was cutting into the lacquer disc, while AC/DC’s music virtually peeled the paint from the mastering room walls.  For me, it remains one of the great moments in rock.

Not that George was an audiophile.  He was just one of those folks who could accurately intuit what a recording needed and apply it to get the results he wanted.  He did not mince words with regard to the program material or some of the tools popular in audio engineering circles.  George was famous among those who knew him for his “Piros-isms”, his unabashed commentary that would sometimes include language that would make a marine drill sergeant blanch.  But he was more famous for the same honesty he brought to his work and that honesty brought some audiophile “names” up to his mastering room.  Through George, I was introduced to Bert Whyte, whose monthly “Behind the Scenes” column was a favorite of mine in all the years I’d been reading Audio magazine.  Bert was the engineer on the great recordings for the Everest label.  (Like Bob Fine, he too used only three microphones and created fabulous results.)  I got to take home a test cut of a Whyte recording of Stravinsky’s “L’Histoire du Soldat”, a personal favorite, though I’d never heard it sound so good before.  On another occasion, another “name” from the audiophile world dropped by to work with George: Joseph Grado, inventor of the stereo moving coil phono cartridge, creator of the moving magnet cartridge I was using at home at the time and, as I learned, an operatic tenor too!  I remember being in the mastering room with both of them, discussing monitors.  Joe said, “You see Barry, George uses these studio monitors with no complaints”, to which George responded, “You mean those pieces of #$^%?”

I feel more than fortunate to stand astride both audio worlds and to have learned a great deal from each.  In an effort to find a synthesis of both worlds, to find the underlying unity which I felt to be larger than either, in 1987 I decided to become an independent engineer and formed Barry Diament Audio.  The learning opportunities once more expanded geometrically.

As I did not yet have my own studio and mastering work was starting to come in, I sought out studios where I could rent time, my prime criterion being monitoring I could trust.  As may be concluded from what I’ve said in this and previous entries, this was not an easy task.  In the end, I found a small number that were willing to accommodate my request for certain monitoring arrangements.  They would have to do until the time came when I designed my own room.

In keeping up with contacts in both the pro and audiophile worlds, an opportunity arose to visit with the editor of one of the audiophile publications I was reading.  I knew his reference playback system was reputed to be among the best.  What I was not at all prepared for was the fact that after being an avid music lover and audio enthusiast since childhood, after having done pro audio work in a number of studios and after having read all the books and journals on the subject I could find, I was going to hear stereo for the first time.

My conception of stereo before that evening was probably a lot like that of other folks, based on what we’d been “taught” over the years and what we’d heard on the old stereo demonstration recordings.  There might be a piano on the left and a guitar and bass in the center and drums on the right.  There might be a marching band proceeding across the room from one speaker to the other.  Most of the time, whether in folks’ homes or in audio dealerships, I’d seen stereo speakers placed as far apart as a room would allow, often in the corners.  I thought I’d made great progress when I found that moving the speakers out of the corners and away from the walls resulted in much improved sonics.  What I didn’t realize was that I was still listening to what were, in effect, a pair of mono sources playing together.  There was “sound from the left speaker” and “sound from the right speaker” and some sound in between.  It was nice and it was fun but as I came to learn, it wasn’t stereo.

Among the first things I noticed when I visited that evening, aside from the jaw-dropping gear I’d only seen in magazine photographs, was that the speakers were, to my mind, “in the middle of the room”.  They weren’t just out of the corners and off the wall, they were well out into the space.  There was lots of room all around them.  I’d never seen a setup done this way.  One of the first records played that evening (we were listening to vinyl) was an old, well worn Leonard Cohen album.  The track featured his voice, accompanied by acoustic guitar.  This was not a super record, just an ordinary studio production and an old copy of the record too.  What I heard was something entirely unexpected on my part and something entirely new to me.

First, there was no “sound from the left speaker” and “sound from the right speaker” and some sound in between.  To my ears and brain, there were no speakers at all.  I had a scarily distinct sense of the artist and the air in the studio around him.  It was almost as if I could see him sitting on a tall stool in a large room with the lighting turned down.  (Of course, I have no idea whether he was sitting or standing and what level the lights in the room were set to but this was the impression created in my mind by the sound alone.)  Now some of that experience must be attributed to the gear itself.  The loudspeakers were outstanding at “disappearing” for sure, as they rightly should have been for their six-figure price and the commensurate associated gear to which they were connected.  But as I was to learn, their placement played a commanding role in allowing the system to achieve its potential.  And applying what I learned that evening to other, less extravagant speaker designs would similarly unleash their potential in ways that were new to me.

Stereo by definition implies solidity and hence three dimensions.  Properly done, the listener does not hear sound from the speakers.  On the contrary, the speakers seem to disappear and the entire part of the room in which they reside comes alive with the audio equivalent of a hologram.  The sound occurs on a stage (a soundstage) and the images upon that stage, too, occur in three dimensions.  On the finest recordings containing such information, the listener can perceive a sense of depth, with, for example, the instruments in the back row of an orchestra seeming to emanate from well behind the wall behind the speakers.  (In order for the listener to perceive them from a good pair of properly placed loudspeakers, these spatial cues must of course be captured in the recording.)

While this was all news to me, I later found that the idea for proper placement of stereo loudspeakers dated back to the 1950s, in an article Peter Walker wrote for the English journal Wireless World.  Many know Walker as the designer of amplifiers and electrostatic loudspeakers marketed under the Quad name.

Monitoring was something I’d long recognized as critical in any recording or playback situation, yet this recognition existed for many years before I had the opportunity to hear real stereo for the first time.  Now I was starting to apply this newfound knowledge in my own listening room and in the studios I worked in.  (I wrote a bit more about speaker placement in an article called Setting up your monitoring environment.)  This was also making me think anew about how to capture real stereo in recordings.