Two worlds, heroes and real stereo

In the September 26, 2013 entry entitled Why doesn’t it sound (in here) like it sounds out there?, I mentioned a growing awareness in my earliest days as an engineer that what I’d previously thought of as the world of audio was in fact two different worlds.  On the one hand, there was the audiophile world, which suggested recordings could be made and played back with the aim of recreating the sound of real musicians playing real instruments in real spaces.  On the other hand was the professional world in which I’d been working, where the idea of sounding real did not seem to be a frequent consideration.  Some of the audiophile gear, particularly among the loudspeakers, was fantastic at approaching the sound of real music.  In contrast, the studio monitors were capable of playing at extraordinary volume levels.  Many of the recordings popular among audiophiles excelled at capturing a sense of real life, including not only the sounds of the instruments but the spaces in which the recorded performances occurred.  In contrast, the studio recordings, many of which captured their own magic, never allowed the listener to suspend disbelief.  They sounded like recordings, whereas some audiophile recordings sounded instead like the music itself.

As my skills and experience grew, I came to feel each of these worlds could learn something from the other, to the mutual benefit of all concerned and especially the listener.  Yet how often I was reminded of how little interaction there tends to be between pro and audiophile worlds.  To this day, it seems to me that all too many pros often miss opportunities for much higher quality audio, while at the same time, all too many audiophiles lack an understanding of how their records are made.

One of the best examples of the latter is the oft-mentioned desirability in the audiophile Internet fora of “flat transfers”, the idea being that the finished masters used as sources for replication of the finished product should have no equalization or other processing applied.  (The term “flat” suggests no frequency equalization is applied – nothing to “tilt” the frequency response – but the term can also imply no other processing as well.)  In my early days as an engineer, I felt the same way.  Then I had the opportunity to hear how most master tapes actually sound.  When one considers the types and number of microphones, where they are typically placed, the signal path in most studios and the monitoring (of which I have already spoken in earlier entries), it should come as no surprise that most master recordings need help.  Put another way, if a recording was made with sufficient treble energy to bring on a headache in the listener and I can make that recording hurt less by the judicious application of frequency equalization, I would think EQ is, in fact, a good thing.  Indeed, a tiny minority of very well made recordings are best served with no EQ at all but most studio recordings will benefit quite significantly from some well considered equalization.

Now I can understand that sometimes, the reason a recording might hurt so much is precisely because of EQ – bad EQ, perhaps because the engineer was trying to make bad monitoring sound right.  This goes back to The Questions I mentioned in the previous entry.  Is the EQ being applied to address a flaw in the recording or is it mistakenly being used to address bad monitoring?  This question should have been preceded by “How trustworthy is the monitoring in this studio?”  Whatever the reason, when heard on a system capable of getting out of the way sufficiently to allow one to hear the recording itself, careful application of EQ can be used to repair at least a good part of the damage done the first time.  If the recording can be made more listenable with EQ, should EQ be avoided simply because it has been misused elsewhere?  I’d first ask “How artifact-free is this equalizer and the settings I intend to apply?”  (That last question, like most of the others, turns out to be critical for those interested in making high quality recordings.  A big part of the reason many have come to think of EQ as bad is that among those who use EQ, the question is almost never asked.  Even on the best equalizers, the wrong settings can cause sonic problems.)  With a positive answer, I would elect to apply the EQ.  My experience has supported this approach over the course of hundreds if not thousands of recordings.

I have always very much enjoyed it when an audiophile sensibility entered into the pro audio world and pointed to what could be.  Some of my fondest memories of my years at Atlantic involve the time I spent with the great mastering engineer George Piros.  George would often tell me of his early days working with Bob Fine and Wilma Cozart on their recordings for Mercury Living Presence.  I had not heard of these recordings before and when I followed up to find some of them, I found a great many joys both musical and sonic.  C. Robert (“Bob”) Fine was to become one of my engineering heroes and one of my inspirations inasmuch as he made his stereo recordings with only three microphones.  George was to become one of my mastering heroes for his preservation of musical dynamics.  He remains one of the tiny handful of mastering engineers I can name who did not routinely apply dynamic compression to his signal path.  Everything George mastered, from Bob Fine’s classics to much of Atlantic’s classic jazz, rhythm and blues, and rock recordings, has a sense of Life to it.  One of my favorite memories was made one day when walking past the outside of his mastering room.  I heard loud music through the heavily padded, “soundproof”, double-door “airlock”.  I decided to visit and upon entering the room, saw George leaning over the microscope of his lathe, examining the fresh groove he was cutting into the lacquer disc, while AC/DC’s music virtually peeled the paint from the mastering room walls.  For me, it remains one of the great moments in rock.

Not that George was an audiophile.  He was just one of those folks who could accurately intuit what a recording needed and apply it to get the results he wanted.  He did not mince words with regard to the program material or some of the tools popular in audio engineering circles.  George was famous among those who knew him for his “Piros-isms”, his unabashed commentary that would sometimes include language that would make a marine drill sergeant blanch.  But he was more famous for the same honesty he brought to his work and that honesty brought some audiophile “names” up to his mastering room.  Through George, I was introduced to Bert Whyte, whose monthly “Behind the Scenes” column was a favorite of mine in all the years I’d been reading Audio magazine.  Bert was the engineer on the great recordings for the Everest label.  (Like Bob Fine, he too used only three microphones and created fabulous results.)  I got to take home a test cut of a Whyte recording of Stravinsky’s “L’Histoire du Soldat”, a personal favorite, though I’d never heard it sound so good before.  On another occasion, another “name” from the audiophile world dropped by to work with George: Joseph Grado, inventor of the stereo moving coil phono cartridge, creator of the moving magnet cartridge I was using at home at the time and, as I learned, an operatic tenor too!  I remember being in the mastering room with both of them, discussing monitors.  Joe said “You see, Barry, George uses these studio monitors with no complaints” to which George responded “You mean those pieces of #$^%?”

I feel more than fortunate to stand astride both audio worlds and to have learned a great deal from each.  In an effort to find a synthesis of both worlds, to find the underlying unity which I felt to be larger than either, in 1987 I decided to become an independent engineer and formed Barry Diament Audio.  The learning opportunities once more expanded geometrically.

As I did not yet have my own studio and mastering work was starting to come in, I sought out studios where I could rent time, my prime criterion being monitoring I could trust.  As may be concluded from what I’ve said in this and previous entries, this was not an easy task.  In the end, I found a small number that were willing to accommodate my request for certain monitoring arrangements.  They would have to do until the time came when I designed my own room.

In keeping up with contacts in both the pro and audiophile worlds, an opportunity arose to visit with the editor of one of the audiophile publications I was reading.  I knew his reference playback system was reputed to be among the best.  What I was not at all prepared for was the fact that after being an avid music lover and audio enthusiast since childhood, after having done pro audio work in a number of studios and after having read all the books and journals on the subject I could find, I was going to hear stereo for the first time.

My conception of stereo before that evening was probably a lot like that of other folks, based on what we’d been “taught” over the years and what we’d heard on the old stereo demonstration recordings.  There might be a piano on the left and a guitar and bass in the center and drums on the right.  There might be a marching band proceeding across the room from one speaker to the other.  Most of the time whether in folks’ homes or in audio dealerships, I’d seen stereo speakers placed as far apart as a room would allow, often in the corners.  I thought I’d made great progress when I found that moving the speakers out of the corners and away from the walls resulted in much improved sonics.  What I didn’t realize is that I was still listening to a pair of what might effectively be mono sources, playing together.  There was “sound from the left speaker” and “sound from the right speaker” and some sound in between.  It was nice and it was fun but as I came to learn, it wasn’t stereo.

Among the first things I noticed when I visited that evening, aside from the jaw-dropping gear I’d only seen in magazine photographs, was that the speakers were, to my mind, “in the middle of the room”.  They weren’t just out of the corners and off the wall, they were well out into the space.  There was lots of room all around them.  I’d never seen a setup done this way.  One of the first records played that evening (we were listening to vinyl) was an old, well worn Leonard Cohen album.  The track featured his voice, accompanied by acoustic guitar.  This was not a super record, just an ordinary studio production and an old copy of the record too.  What I heard was something entirely unexpected on my part and something entirely new to me.

First, there was no “sound from the left speaker” and “sound from the right speaker” and some sound in between.  To my ears and brain, there were no speakers at all.  I had a scarily distinct sense of the artist and the air in the studio around him.  It was almost as if I could see him sitting on a tall stool in a large room with the lighting turned down.  (Of course, I have no idea whether he was sitting or standing and what level the lights in the room were set to but this was the impression created in my mind by the sound alone.)  Now some of that experience must be attributed to the gear itself.  The loudspeakers were outstanding at “disappearing” for sure, as they rightly should have been for their six-figure price and the commensurate associated gear to which they were connected.  But as I was to learn, their placement played a commanding role in allowing the system to achieve its potential.  And applying what I learned that evening to other, less extravagant speaker designs would similarly unleash their potential in ways that were new to me.

Stereo, by definition, implies solidity (the word derives from the Greek stereos, meaning “solid”) and hence three dimensions.  Properly done, the listener does not hear sound from the speakers.  On the contrary, the speakers seem to disappear and the entire part of the room in which they reside comes alive with the audio equivalent of a hologram.  The sound occurs on a stage (a soundstage) and the images upon that stage, too, occur in three dimensions.  On the finest recordings containing such information, the listener can perceive a sense of depth, with, for example, the instruments in the back row of an orchestra seeming to emanate from well behind the wall behind the speakers.  (In order for the listener to perceive them from a good pair of properly placed loudspeakers, these spatial cues must of course be captured in the recording.)

While this was all news to me, I later found the idea for proper placement of stereo loudspeakers dated back to the 1950s in an article Peter Walker wrote for the English journal Wireless World.  Many know Walker as the designer of amplifiers and electrostatic loudspeakers marketed under the Quad name.

Monitoring was something I’d long recognized as critical in any recording or playback situation, yet this recognition existed for many years before I had the opportunity to hear real stereo for the first time.  Now I was starting to apply this newfound knowledge in my own listening room and in the studios I worked in.  (I wrote a bit more about speaker placement in an article called Setting up your monitoring environment.)  This was also making me think anew about how to capture real stereo in recordings.

Perfect Sound Forever? (Part 2)

There I was in 1984, Atlantic Records’ “CD mastering department”, responsible for creating a good portion of the masters used to replicate the monthly CD releases for the label and associated divisions (Atco, Elektra, etc.).  Demand for CD was on the increase and it was clear this was where recorded music was going.  The small CD section at the local Tower Records store was a bit larger every time I visited, slowly but surely encroaching upon the real estate that was, for the moment, dominated by vinyl LPs.  I saw customers so eager for new CDs, I got the impression even a disc of dog barks would be a hot sales item.

The manufacturers behind the format proclaimed “Perfect Sound Forever”, distortion-free music on a medium that would not wear out.  It sounded too good to be true.  Like most things that sound too good to be true, it wasn’t true.  I remember the expectation with which I first listened to digital masters and to the earliest CDs.  Despite the raves of my colleagues and those in the press, what I heard every time I listened sounded to me not like an evolutionary step forward for audio but like an electronic equivalent of fingernails on a blackboard, an irritating harshness that felt like a good deal of the music had been replaced by something unnatural, something mechanical, something cold.

A number of colleagues I spoke with did not seem to have the same experience.  In fact, they looked at me askance when I expressed great disappointment in what I’d heard, as if I was missing something so obvious, they couldn’t believe it.  They would point out how flat the frequency response measurements were, that the wow-and-flutter (a measure of speed inaccuracy) was virtually unmeasurable.  They would say “Just listen to the noise!”, amazed to have a medium that did not add any hiss.  I would respond “Just listen to the music!”

Yes, piano recordings did display a steadiness of pitch devoid of the indeterminacy sometimes engendered by analog media (played on less than great tape machines or turntables, or when either the tape was stretched or the vinyl pressing suffered a slightly off-center hole).  If any hiss was audible at all, it was the hiss from the original analog recording.  The digital medium wasn’t adding any that I could detect.  Yet, what good were rock steady speed and dead silent backgrounds when the piano sounded like it was made of aluminum?  And the cello sounded like a cousin of the kazoo?  Instrumental harmonics were bleached into thin, pale ghosts of themselves and the very air around the players (on recordings that had such) seemed to have been sucked from the room.  A great rock record invites the listener to turn up the volume.  Doing so with a rock CD just brought on the headache that much sooner.  What was wrong?

I had done everything I knew to ensure the highest possible quality.  I set up the CD mastering room with the audiophile sensibilities I sought to bring to my work.  I created CD masters bypassing most of the electronics in the room, keeping the signal path as short as possible, introducing only what was absolutely necessary and avoiding extra switches, wires, patch bays, consoles, etc.  I even took to carrying my own cables to work every day, replacing the generic studio cables connecting the output of the tape machine to the analog-to-digital converters with one of the best audiophile designs of the day, one that had repeatedly shown me it was capable of passing more of the musical information, with less degradation than the regular studio cabling.  Still, even with the CD masters created this way, a comparison with their vinyl counterparts, made using a far less purist approach, showed just how much more of the musical information on the master tape made it to the finished LP than ever made it to the CD.  There were no exceptions.  This was the case every single time.  Digital acolytes in the press attributed any favor shown the LP to euphonic (i.e., pleasant sounding) colorations in the medium, where CD was supposedly truer.  But as is often the case, the audible evidence said otherwise.  A well set up $100 turntable/cartridge combination would, in terms of bringing back the sound of the master recording, sonically wipe the floor with a $1000 CD player.

A fellow mastering engineer, one whose work I had admired for years, called one day and invited me to sit on a panel of mastering engineers to discuss CD at a meeting of the Audio Engineering Society in New York City.  I gratefully accepted and not long afterward, found myself sitting at a long table on stage in an auditorium, next to four other colleagues, all of us involved in CD mastering.  When I spoke, I felt quite alone in that my colleagues all sang the praises of the new medium while I (quite shyly at the time) said “I just don’t feel it sounds as good as my vinyl yet.”  (Yet?!?)  I explained how I felt vinyl was revealing much more of the musical information contained in the master tapes.  Despite any technical flaws or issues in manufacturing and playback, things that did not at the time seem to plague CD (at least not when one just looked at the surface of things), vinyl was providing more music and to my ears, that was more important.  When I left that evening, I thought folks were looking at me as though I had two heads.

What we came to learn as time passed and more audiophile companies got involved with digital and CD, was that a major part of that bad sound in the early days was due to the digital recording and playback gear itself, perhaps most specifically in the filtering that is an essential part of these mechanisms but also in the converter chips at their core.  I found it interesting that when folks like Bob Stuart started writing articles about jitter (timing irregularities between samples in the stream of digital data), a number of folks who had previously raved about CD (seemingly because of the “good” specifications they’d read) now found issues with the format.
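For readers curious what those “timing irregularities” actually do to a signal, here is a small sketch.  This is a toy model of my own, written in Python; the jitter figure and tone frequency are arbitrary illustrations, not measurements of any real converter:

```python
import math
import random

# A toy illustration (not a model of any actual converter): sample a
# 1 kHz sine with a clock whose timing wanders by a couple of
# microseconds, and see how far the samples land from the ideal ones.

FS = 44_100        # CD sample rate, in samples per second
FREQ = 1_000.0     # test tone frequency, Hz
JITTER_S = 2e-6    # hypothetical +/- 2 microsecond timing error

random.seed(0)

peak_err = 0.0
for n in range(FS // 100):                # examine 10 ms of audio
    t = n / FS                            # the instant we meant to sample
    t_actual = t + random.uniform(-JITTER_S, JITTER_S)  # when we really did
    ideal = math.sin(2 * math.pi * FREQ * t)
    jittered = math.sin(2 * math.pi * FREQ * t_actual)
    peak_err = max(peak_err, abs(jittered - ideal))

# The error scales with frequency: roughly 2*pi*f*jitter at the signal's
# steepest slope, which is why jitter is hardest on high frequencies.
print(f"peak sample error: {peak_err:.5f} (full scale = 1.0)")
```

Even this tiny, exaggerated-for-illustration timing wander puts errors on the order of a percent of full scale into a mere 1 kHz tone, and the damage grows in proportion to frequency, which is consistent with the harshness being heard up top rather than in the bass.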

With the advent of new knowledge came new filter designs and new converter chips.  The players were starting to get better.  Even the Sony 1630 converters I used in the studio got new retrofit filters that made for noticeable sonic improvements.  The CD format was growing in popularity every day and clearly was going to be around for a while.  The thought occurred that vinyl mastering engineers were routinely credited for their work on albums but no one as yet (at least to my knowledge) had been credited with CD mastering.  I spoke about this with management and, after a conversation with the art department, saw the first CD booklet with my name in it.  As the format continued to grow and demand for more releases increased, outside facilities were contracted to create masters in addition to the ones that were keeping me busy full-time.  The only problem was the art department was not always informed when a master was going to be done by a third party.  As a result, some CDs I mastered did not have a credit and some CDs mastered by others have my name on them.  (In a way, I came to understand firsthand the phrase “Be careful what you wish for.”)

I made some other observations regarding the digital audio of the day.  First, the playback and record sides of the Sony DAE-1100 digital audio editor did not sound the same.  The official word was that a digital tape could be cloned (“clone” being the term used to describe a digital copy) to create an identical copy.  Yet, when I cloned a digital tape and played it back to compare it with the original, the original always sounded cleaner.  Was there some degradation in the copy?  I found it interesting that when I took the tapes out of their respective machines and swapped them, putting the copy in the “playback” machine and the original in the “record” machine, the original now sounded degraded.  It turned out (for reasons I’m still not sure of) that playing back a tape from the playback side of the editor just sounded better than playing the same tape from the record side.

As CD grew, we started using more and more replication facilities.  When sales for a particular release were expected to be large, often a single replicator could not produce a sufficient quantity of discs, so I’d create a CD master and then send clones of that master to different replicators.  When the discs came back, I made another discovery.  The discs from all the replicators sounded different from each other, sometimes subtly so and other times not so subtly.  And none of the discs sounded identical to the master used to make them.

It was plain to see there was much more to be learned about this digital juggernaut.  My thinking was that we’d had vinyl for about a hundred years.  In another hundred years, I expected CD would be pretty good.  Happily, it hasn’t taken nearly as long as that.  Today, CD can be “pretty good” if not exactly competitive with fine vinyl, despite what is said in some quarters.  Perfect sound forever?  Not to my ears.  It is more like “Decent sound, once in a while” but I can see how that is a bit less catchy as a marketing phrase.

Sonically, there was lots of room for digital to grow.  As futuristic as the equipment seemed at the time, it too, along with many of the very techniques involved in recording and editing, would soon undergo a revolution, as recording and mastering began to take advantage of the nascent world of desktop computing.

Perfect Sound Forever? (Part 1)

In early 1983, I created my first master for Compact Disc.  I first heard of the format nearly a decade earlier, while still in college.  I remember a promotional mock-up, looking very much like a miniature LP jacket.  Inside was a cardboard disc printed with the distinctive rainbow reflections of the real thing.

Atlantic’s west coast affiliate, Warner Brothers, was already creating CD masters when it was decided that Atlantic would open its own CD mastering room.  I was to be the CD mastering “department” and was sent to Los Angeles to spend a few days with my counterpart, learning the procedures Warner Brothers had in place for creating CD masters.

At this point, the only CD mastering rooms I knew to exist were at Sony in Japan, Polygram in Germany, Warner in California, perhaps DADC in Terre Haute and now, Atlantic.  To my knowledge, I was one of the first engineers to do CD mastering.  Technically, the process of creating a master for CD replication is referred to as “premastering”.  To the replication facilities, the term “mastering” refers to the first stage of manufacturing, when the glass master is “cut”.  Glass mastering is the creation of a glass disc, etched by a laser beam recorder.  This disc is electroplated and used as the first part in the process that yields the injection molded finished CD.  Nonetheless, in terms of the creative process, which occurs prior to manufacturing, creating a CD master is referred to as “mastering”.  Mastering, for any format, not just CD, has always been the last step in the creative process and also, the first in the manufacturing process.  It is the last chance to make any adjustments to the sound and it is where the “part” used to initiate manufacturing is created.

In those days, the CD master sent to the replication facility was recorded on a U-Matic video tape cartridge, housing ¾” (~19 mm) wide tape.  It was recorded using the video capacity to store the digital audio signal.  A parallel track stored the usual time code used by both video and digital audio.  The system was built around two U-Matic machines (one to play, one to record), the 1610 (later 1630) analog to digital (and digital to analog) converters, and the DAE-1100 digital audio editor.  Ancillary gear included another Sony device, the DTA-2000, to analyze finished tapes and provide a printout of error occurrences per minute.  This, along with a written “table of contents” indicating start and end time code locations for every track and other incidental details was sent to the replication facility with the CD master.  A pair of U-Matic machines, the 1630, the analyzer and the electronics associated with the editor filled an equipment rack several feet tall.

The editor itself was a small console, a few feet wide.  It contained controls for up to three tape machines (two for playback, one to record), readouts of the time code indicating the location of the tape in each machine, controls to perform editing, and a fader used for gain (i.e., level) adjustments.  Editing in the digital domain no longer involved using a razor blade to physically alter the original tape, as we had always done with analog tape.  (There were some short-lived exceptions in the form of the digital multitrack reel-to-reel recorders that were to come later.)  Digital editing was now effected by playing the original tape while recording the edits onto a new tape.  The finished result needed to be created sequentially.  If, upon listening to the results of an editing session, the producer decided to add to or remove anything from the middle of the program, a new tape was created, requiring the entirety of the program prior to the new edit to be copied first.

As the music was playing and the engineer heard the section where the desired edit point was located, the press of a button on the editor would store a 6-second sample of the music — the three seconds before the button press and the three seconds after.  The playback and record machines would stop.  A small wheel in the middle of the editor was used to manually move forward and backward in the captured sample of audio, so the engineer could precisely locate the edit point on the tape being recorded.  Turning this wheel accomplished what used to be done with analog tape by having one hand on each reel and manually “rocking” the tape past the analog machine’s playback head in order to locate the desired edit point.  Where the edit point used to be marked with a grease pencil, all the engineer needed to do now was press another button on the editor.  Now that the “out” edit point was selected on the record machine, a similar process of location would be done on the playback machine to find the “in” point from which the new tape was to continue.  Once the edit point on each tape was selected, a preview button started a process where both tape machines would shuttle backward a predetermined amount of time, still synchronized with each other, and then both started to play.  The audio would come from the record machine (i.e., what had already been recorded prior to the edit point) until the edit point was reached, when the audio would switch to the playback machine, in effect allowing the engineer to hear the edit before committing to it.
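For readers who find the two-machine choreography hard to picture, the assembly logic underneath it is simple.  Here is a toy sketch in Python, with lists standing in for tapes; the function and variable names are my own illustrations, not Sony’s terminology:

```python
# Model the assemble edit: the new tape keeps everything recorded up to
# the "out" point, then continues from the playback machine's "in" point.

def preview_edit(record_tape, playback_tape, out_point, in_point):
    """Return what a preview pass would play: the new tape up to the
    edit point, then the source tape from its 'in' point onward."""
    return record_tape[:out_point] + playback_tape[in_point:]

# The new tape assembled so far, and the source reel we are cutting from:
new_tape = ["intro", "verse 1", "chorus"]
source   = ["verse 1", "chorus", "verse 2", "outro"]

# Splice after the chorus on the new tape, picking up at "verse 2":
result = preview_edit(new_tape, source, out_point=3, in_point=2)
print(result)   # ['intro', 'verse 1', 'chorus', 'verse 2', 'outro']
```

The key constraint described above falls out of this model: the new tape can only be appended to, so changing anything earlier in the program means rebuilding the whole list — that is, copying the entire tape — from the start up to the new edit.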

If a recording or mixing studio console was reminiscent in some way of an airplane or Space Shuttle cockpit, my first look at the DAE-1100 editor reminded me of Star Trek.  It felt like the future, with its smooth, uninterrupted surface of subtle gray, with darker gray, red, orange and blue “buttons”.  Being able to test an edit without committing to it meant all sorts of edits could be attempted without fear of having to splice together a missed edit.  I used to describe the precision of the edits as allowing me to “get in and out within the width” of the razor blade cuts we used to make.  In comparison, I described the thought of editing with a razor blade as now feeling much like editing with a hammer.

Having long experienced what seemed to me to be the inadequate monitoring in the studios I’d worked in, visited and read about in the professionally oriented magazines, I sought to do something different in the new CD mastering room at Atlantic.  Rather than loudness optimized speakers placed against the wall, near the corners, over the engineer’s head, or small, dynamically challenged speakers placed where they would create a midrange dip at the listening position – both arrangements common in every studio in my experience – I wanted to bring some audiophile sensibilities into the room.  At my request, studio management agreed to install a pair of Dahlquist DQ-10 speakers (my favorites at the time).  These I placed a few feet off the wall behind them, in free space, with nothing else near the speakers.

Once the room was set up properly and known master tapes played back to my satisfaction, it was time to get my first really good listen to digital audio.  The advance word from the hobbyist and professional magazines, as well as from colleagues who’d already gotten to listen to a bunch of the earliest CD samples, was very positive.  Everyone was enthusiastic.  I was going to hear what had widely been touted as “Perfect Sound Forever”.  With great enthusiasm and anticipation, I listened to my first sample.  Then I listened to another one.  And another one.  I listened to all the samples we had.  I went back and listened to some analog master tapes and vinyl LPs to make sure the monitoring was what I expected it to be.  With the analog sources, it was.  With the digital sources, I wondered just what everyone had been raving about.

Into the Majors

While keeping up with the recording studio scene in New York City, I heard there might be an opening at Atlantic Studios for a music editor.  In the two and a half years since I got my first job as a studio assistant, I had been involved with recording, overdubbing, mixing, editing and mastering.  The promotion to chief engineer brought with it a catalog of opportunities to experiment and learn, in which I immersed myself every day.  Now, Atlantic Studios beckoned!

I called the studio manager and much to my joy, an interview was scheduled.  We met, spoke and he offered me the position of music editor.  I accepted.  Atlantic Studios!  Atlantic Records!  First entry into studio A, the largest of the three studios on the premises, was a visit to hallowed musical ground.  So many records I’d grown up with, and others that were significant parts of the soundtrack of my life, were made in this room.  So many musical heroes created magic in this space.  Names sped through my mind:  Ray Charles, the Coasters, the Drifters, the Rascals, Aretha Franklin, Doctor John (the Night Tripper), John Coltrane, Ornette Coleman, Charles Mingus, Crosby, Stills and Nash (and Young), Buffalo Springfield, Cream, the list goes on and on.  The roster also included a long list of artists who recorded elsewhere but whose work was released by Atlantic, among them, artists such as Led Zeppelin, Yes, King Crimson, Emerson, Lake and Palmer, Genesis, AC/DC, Phil Collins, Robert Plant, the Rolling Stones – a dizzying array of musical delights for the new employee.

To friends, I summed up my primary responsibilities as music editor as being to make long songs shorter and short songs longer.  Despite the exception of a few decades before, when radio stations played Bob Dylan’s “Like A Rolling Stone” single, which clocked in at over six minutes, it was common practice to edit album-length songs down to somewhere around three, or at most three and a half, minutes in the hope that this made them more likely to get played on the radio.  When management at the record label decided that a certain album track would become a single, my job was to create a copy of the album master and cut the copy to a shorter duration.  (Since editing in those days was accomplished with a razor blade, and since the master mix used for the album was needed for the album, it was necessary to create a tape copy in order to create an alternate version of a song.)  On some occasions, the record producer would already have an idea of what parts of the song they wanted to remove in order to create the single but in most instances, I was left with this creative decision.  Of course, approval (or rejection) of the edit was the producer’s call.  In a typical single edit, a verse might get removed.  If the song contained a long instrumental break, this was shortened.

The Procrustean task of the editor was a bit more complicated when a song needed to be lengthened.  Recall that these were the days of the “dance single”, versions of songs longer than those on the album, which had become popular in the dance clubs.  How to lengthen a five minute song to eight minutes or more?  Where radio singles involved removing verses or shortening instrumental breaks, dance singles would have verses (or choruses) repeated and instrumental breaks doubled (by repeating sections or using sections within the break to build a longer, more complex break).  All this, in those days, accomplished with a fresh razor blade, a grease pencil and an Edit-All bar (a metal block with a tape-width groove to secure the section of tape being edited, and angled slots through which to pass the cutting blade).  There was no “Undo” button.  There was no “Nudge” button to move an edit point.  Instead, the engineer manually “rocked” the tape back and forth past the playback head, a hand on each reel, listening to the slow-motion playback for the point at which they would make the edit.  When the engineer thought they had the point, they’d carefully mark the tape with the grease pencil, loosen the reels and place the section of tape in question into the Edit-All bar.  If an edit didn’t work, the tape had to be spliced back together and a new cut attempted.

For today’s users of DAWs (digital audio workstations), where one uses a computer mouse to select a musical passage and makes a menu selection to alter said passage, imagine this:  On one occasion, we needed to “censor” one word the vocalist sang and the decision was to reverse it — make that one word occur backwards, while the rest of the music played normally. As our “workstation” of the day was nothing more than a razor blade and a stout heart, we needed to figure out where on the width of the 2” (~5 cm) wide multitrack tape the vocals were recorded.  Manually rocking the reels on the multitrack machine, we could find the start and end of the word in question.  Then, with a ruler, lines were drawn along the length of the tape to “outline” the location of the word.  Using that ruler, the razor and some determination, the word was cut from the tape and the excised section physically inverted, then re-taped in place.  And it worked!  What involved some time and a lot of sweat back then can be accomplished in a second today.
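The arithmetic behind an edit like that is straightforward: at a fixed tape speed, time maps directly to physical length along the tape.  A minimal sketch of that conversion (the 15 ips tape speed and the example timestamps here are assumptions for illustration; many multitrack sessions ran at 30 ips):

```python
# Convert a passage's timestamps into physical positions along the tape.
# Speed of 15 inches per second is an illustrative assumption, not a
# detail from the session described above.
TAPE_SPEED_IPS = 15.0

def tape_span(start_s, end_s, speed_ips=TAPE_SPEED_IPS):
    """Return (start, end, length) of a passage, in inches of tape."""
    start_in = start_s * speed_ips
    end_in = end_s * speed_ips
    return start_in, end_in, end_in - start_in

# A word sung from 83.2 s to 83.8 s into the song spans 9 inches of tape:
start, end, length = tape_span(83.2, 83.8)
print(f'Cut from {start:.1f}" to {end:.1f}" -- a {length:.1f}" piece of tape')
```

Even a short word occupies several inches of tape at those speeds, which is what made marking and cutting it by hand feasible at all.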

The editing room in those days was also a tape duplication room.  In addition to the reel-to-reel decks, there were racks of cassette decks.  Cassettes had replaced the 7” (~18 cm) reels of tape provided to producers as “refs” (reference copies) of a day’s work in the studio.  Cassettes were also made for the label’s promotion department, in order for the folks there to become familiar with each month’s album releases.  The reel-to-reel decks were also used to create sub-masters, which were formatted copies of albums, sent to tape duplication facilities for mass production of pre-recorded cassettes and (yes) 8-track cartridges.  Cassette sub-masters were pretty straightforward copies of each album side.  The sub-masters used for 8-track cartridges got a unique treatment.  For those not old enough to remember the format, it was comprised of a continuous loop of tape inside a plastic cartridge.  As the program played the first stereo pair of tracks and reached the end of the loop, the playback device would switch to the next stereo pair of tracks for the next pass of the tape loop, then switch again to the third and fourth pairs of tracks each time the loop reached its end.  Having the four programs on adjacent pairs of tracks allowed for keeping the tape loop relatively short.  It also meant that sub-masters required that an album be divided into four “programs”, each program destined for its own two tracks of the available eight.

Things got complicated when, for example, the first two or three songs on an album might total 10 minutes in duration, the next two or three songs might total 12 minutes, the next group of songs might total 8 minutes and the last group say, 9 minutes.  The goal was to build the four programs to be as equal in duration as possible.  Program 1 might have three songs, program 2 might have just two songs, etc.  The loop of tape put into an 8-track cartridge had to be long enough to accommodate the longest of the four programs.  In the example cited here, we’d need to have enough tape for the 12 minute program.  That means at the end of the 10 minute program, there would be a 2 minute wait until the player got to the end of the tape loop and advanced to the next program.  The wait between songs could be a long one and needless to say, completely discarded the spacing decided upon by the artist, producer and engineer when they assembled the album master.  From my own experience, I know that a difference of half a second in spacing between songs can change how an album feels when listened to from start to end.
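The balancing act described above is, in effect, a small optimization problem: split an ordered list of song durations into four contiguous programs so that the longest program is as short as possible.  A brute-force sketch of that problem (the song durations here are hypothetical, chosen only to echo the example above; the real decision was of course made by ear and stopwatch, not by algorithm):

```python
from itertools import combinations

def best_four_programs(durations):
    """Split songs, keeping their order, into 4 contiguous programs,
    minimizing the duration of the longest program."""
    n = len(durations)
    best = None
    # Choose 3 cut points between songs to form 4 groups.
    for cuts in combinations(range(1, n), 3):
        bounds = (0, *cuts, n)
        groups = [durations[bounds[i]:bounds[i + 1]] for i in range(4)]
        longest = max(sum(g) for g in groups)
        if best is None or longest < best[0]:
            best = (longest, groups)
    return best

# Ten hypothetical songs totalling 39 minutes:
songs = [3, 4, 3, 5, 4, 3, 4, 4, 4, 5]
longest, programs = best_four_programs(songs)
print(longest, programs)
```

However the cuts fall, the longest program dictates the length of the tape loop, and every shorter program inherits a silent wait to make up the difference.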

Some record labels would opt to maintain the album sequence, equalize program lengths as best they could and the listener got to wait until the tape got to the end of the loop before the album proceeded.  Others would re-sequence the album — change the song order from that decided upon by the artist, producer and engineer — to arrive at the most equal program lengths possible for songs of the given durations.  (If a change in spacing between songs of half a second can change how an album feels, changing the order of songs can create what is essentially a different album.)  Still other labels would simply divide the total album duration by four and if a song was still in progress when the tape loop reached its end, it would continue when the playback device switched to the next program – often with several seconds of music simply missing. (!)  The technique at Atlantic was to maintain the sequence if possible, but rather than have songs interrupted when the tape loop reached its end, the songs would be faded for a gentler transition to silence.  Then the tape would be backed up about 10 seconds and, to start the next program, the song would be faded in from silence, picking up where it left off and continuing to its end.  The word in the studio was that one of the label’s major artists once said “Anyone that buys an 8-track deserves whatever they get”.  Its compromised sonics aside, it was for the obvious musical reasons that I was never a fan of the format.

Just a few short years later, the world of editing was going to be revolutionized. So was the world of recording, as new technology came to the fore, bringing with it new possibilities and new adventures.