
MSTRCLSS

noah mintz unfucks mastering

By Noah Mintz, Senior Mastering Engineer, Lacquer Channel Mastering

Mastering is the final stage of the recording process. It's also the first step in manufacturing and distribution. Old-school mastering, by the traditional definition, is the process of physically putting the music onto a master ready for copying. So technically the only real mastering engineers are the people cutting lacquers or DMM (Direct Metal Masters) for vinyl production. We've evolved past that definition, though. Mastering as we know it today is the balancing of the overall tonality of the music. The mastering engineer achieves this with frequency-based amplitude adjustment (EQ) and selective-band or wide-band dynamic amplitude adjustment (compression). Any processing beyond that is not mastering. That's not a hill I'll die on. I'm not a gatekeeper of what is and what is not mastering. I will say that if you can't regularly master with just an EQ and a compressor/limiter, you're not really a mastering engineer. You're just an audio engineer doing mastering. Fight me.
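
To show just how small that core toolkit is, here's a deliberately bare-bones sketch in Python of the two moves described above: one EQ decision and one dynamics decision. It's an illustration only, not how I or anyone else actually masters; it assumes numpy, scipy, and soundfile are installed, and the file names are made up.

```python
# A bare-bones "EQ + dynamics" sketch -- an illustration of how small the core
# toolkit is, not a real mastering chain. "mix.wav" is a hypothetical input file.
import numpy as np
import soundfile as sf
from scipy.signal import butter, sosfilt

def gentle_low_cut(audio, rate, cutoff_hz=20.0):
    """Frequency-based amplitude: roll off subsonic rumble with a 2nd-order high-pass."""
    sos = butter(2, cutoff_hz, btype="highpass", fs=rate, output="sos")
    return sosfilt(sos, audio, axis=0)

def crude_limiter(audio, ceiling=0.95, smoothing=0.999):
    """Dynamic amplitude: smooth a gain envelope so no peak exceeds the ceiling."""
    gain = 1.0
    out = np.empty_like(audio)
    for i, frame in enumerate(audio):
        peak = float(np.max(np.abs(frame))) + 1e-12
        target = min(1.0, ceiling / peak)        # gain that keeps this frame under the ceiling
        gain = smoothing * gain + (1.0 - smoothing) * target
        out[i] = frame * min(gain, target)       # never let a peak through
    return out

audio, rate = sf.read("mix.wav")       # decode to float samples
audio = gentle_low_cut(audio, rate)    # the "EQ" move
audio = crude_limiter(audio)           # the "compression/limiting" move
sf.write("master_sketch.wav", audio, rate)
```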


Before you hit caps lock to reply, let me explain what I think creative mastering is in the modern age. It's a best guess. There is no right way to master a song. There are a lot of wrong ways. Mastering, as an engineering art, has become so diluted over time that what would typically have been part of the recording or mixing process (or even sound design) is now routinely practiced in mastering studios.


Also, there's now "AI" and actual fucking robots doing mastering. Literally, robot arms turning analog EQ knobs. Soon Skynet will be eliminating all the human mastering engineers in a global plot to take over the mastering racket.


A well-mastered song should sound good on every playback device, from phone speaker to Bluetooth to Mack-truck-sized sound system. Often, after the mixing process, the mix engineer will provide a 'reference master': a version of the mix approximating the sound of the master, so the artist can hear what the mix might sound like after mastering. Sometimes the artist will say they like 'the ref' better than the master. Which makes sense. It's usually just the mix made louder, and both the artist and the mix engineer are used to its sound. More often than not, though, if that ref master is played on multiple playback systems it won't sound consistent. A well-mastered track may not sound as hype as the ref master, maybe not even as loud, but the mastering engineer will have made tonality decisions based on a wide range of playback scenarios. We're not going for the best-sounding master in our studio; we're going for the best overall-sounding master.


Loudness is important in mastering. The mastering engineer sets the overall loudness of the song and of the album as a whole. There is no standard for this; it's still the Wild West of loudness when it comes to music. There is no one answer to how loud it should be. Every song and album has a loudness sweet spot. A mastering engineer should be able to find it, and a lot of time will go into achieving it. Can a robot understand 'the sweet spot'? That's purely an ear thing, a 'feeling' that plays an important role in mastering.


Loudness wars are still a thing, but mastering engineers are not frontline fighters anymore. Mix engineers often set the loudness before the music even hits a mastering studio. Some mixes come in louder than a mastering engineer would make them after mastering. A mix with dynamic range allows me to do my job better. That's the rule, but there are always exceptions. Sometimes the mix engineer's vision for the song is for it to be loud, and I have to respect that and work within it. I can ask them to turn it down, but only if I think it's been smashed to shit by mistake, or if I know without a doubt that I can't make it sound better (or the same) after mastering because of the loudness. Sometimes, if I ask for a quieter mix, I lose the job, and that always sucks. So I have to be mindful of this. The bottom line is: if it comes in loud, it stays loud. A mastering engineer can adjust the volume, but if it's already loud, that's permanent.


Many mastering engineers will try to impart a sound of their own via tape, tubes, transformers, and various forms of saturation processing. I understand that. Mastering can be boring and uncreative, especially if the mix is already near perfect. Mastering engineers should put their egos aside. We're there to be seen, not heard, more or less. Do what's best to see the vision of the artist, producer, recording engineer, and mix engineer through. We don't add a sound; we enhance it by doing as little as possible. Lemme repeat that for those in the back. Mastering engineers are to do as little as possible, every time. We are not creatives. That's not to say mastering doesn't require creative thinking and engineering, of course it does, but we don't add something new to the creative process. Again, there are always exceptions, but as a mastering engineer, 'your sound' is the sound of the audio you're working on. Start with a little EQ. Then add some compression. Does it help? If not, take it out. Is the EQ enough? If so, you're done; just adjust the loudness and then adjust the EQ again. If not, start adding all those tubes and such, but only if they improve the sound, not just to make your mark.


So if it's that simple, why can't robots master? Well, the answer is they can. The basics of mastering are super simple. Robots can do a pretty good job at basic mastering, and automated mastering is a great tool to get an approximation of what your mix might sound like after mastering, or for a mastering engineer to compare against. What robots can't do is 'feel'. Feeling in audio engineering sounds hokey, but your emotions and your ability to connect with the music have a direct psychological connection to the frequencies you'll choose to attenuate or accent. Robots won't use feelings; they will just use math. Math will always tell you to fuck your feelings. Sometimes I add supersonic (16 kHz+) frequencies above human hearing, not because I can hear what they're doing but because of how they affect the feeling of the music, heard in the changes to the lower frequencies. Sometimes I run it through a compressor at a 1:1 ratio (no compression) because of the 'sound' of the compressor. That sound is hard to measure and quantify. It's especially present on analog devices. Each analog device sounds different. Exactly why is hard to explain. It's a 'feel' thing, especially in devices that offer non-linearities in sound, like tubes. That feel is ephemeral and important in mastering. No robot or machine learning can express it. We don't need to use those 'feel' devices every time, but when we do, they are invaluable.


Despite some engineers' and manufacturers' claims, analog is not better than digital. Digital is not better than analog. They sound different. Even the best digitally modelled analog doesn't sound like analog. It's great, but it's something else. Every analog device is different every time you use it. Its components are slightly older, the power is different every day, the temperature of the room is slightly different. Plug-ins are the same thing every day in the hands of every engineer. Analog has a 'je ne sais quoi'. Digital has perfect repeatability. Both of those traits are equally a bug and a feature. I'm biased, but when choosing a mastering studio I feel it's super important to use a studio that can offer both digital and analog. If a studio is digital-only, while they can get great results, they can only ever offer digital. From experience, I can tell you that some mixes benefit greatly from going through an analog mastering stage. Every tube, transformer, EQ filter, and compression circuit sounds different, and on top of that, it sounds different depending on how hard you drive it. Analog gain staging will change the sound just as much as choosing different frequencies on an EQ. Digital doesn't quite have that trait, but new DSP is being designed all the time to approximate the sound of analog. It's amazing what digital can do, but as good as digital is getting, I can't see myself ever fully giving up analog.


A well-designed room and full-range speakers are probably the most important tools in mastering. Often, more than 50% of a mastering studio's budget will go to acoustic design and construction alone. That said, we're in a new age of headphone design. My headphones were expensive and, paired with an equally expensive headphone amp, they challenge the need for a proper studio. I could easily, and sometimes do, master just on headphones. Still, some jobs must be done in a proper room, and I couldn't imagine mastering without one. If a mastering engineer is only using headphones, they can only ever use headphones. Some music you just need to feel with your whole body to make the right mastering decisions. Good headphones are really great at playing back bass, but nothing can replace the moving air of large speakers in a well-designed room. My mastering room is flat down to 27 Hz, and you can feel those sub frequencies as much as you can hear them. That connection of ears, mind, and body is not the full experience in headphones.


Lastly, it's a mastering engineer's job to be up on all the sound technology, distribution formats, musical genres, trends in audio production, and all things music-related. This is the new QC/QA. We are the last people to manipulate the music. It's not enough just to do equalization. We have to be aware of where the music will go: not just the physical playback systems, but who will be listening and on what platforms. From audiophile to TikToker, these are the people who will be hearing our masters, and each ear matters. The music is the message, and the tonality will help the listener hear and connect with it. Knowing where the music will go between the mastering studio and the final listener is just as important as what frequency or amplitude we manipulate.


By Noah Mintz, Senior Mastering Engineer, Lacquer Channel Mastering & Railtown Mastering

I attended the AES Immersive conference and here are my takeaways:


Immersive content cannot be mastered. At least not yet, and not how we traditionally view mastering. This is more than evident if you listen to, for example, the ATMOS mix of Ed Sheeran's 'Overpass Graffiti' from his new album. The ATMOS version sounds completely different from the stereo version. Not just the mix but the sounds of the instrumentation. It's not necessarily worse, but like The Kid LAROI and Justin Bieber's 'Stay', the ATMOS version lacks the sonic impact of the stereo version.

If there ever was a case for the importance of mastering, immersive audio for music is making it. We've been listening to professionally mastered music for the past 60 years. There is a reason why 90% of the songs in the top 100 of every genre, save classical, have known professional mastering engineers and studio credits. Mastering is important for the enjoyment of music. Of course I'm biased on this, but listening to the immersive and stereo versions back-to-back makes it self-evident.


Old white guys are into immersive audio technology. I see the irony of me writing this as I'm an old white guy; it just would have been nice to see AES include more people of colour and women in the conference. I want to hear those voices. If this technology is to be adopted widely by audio engineers, and ultimately consumers, it can't just be propagated by the same old-boy nerdy audio engineer club.


Once you hear immersive audio in a proper surround studio, you're a convert. I'm guilty of this. I've had the pleasure of listening to music in a few 7.1.4 (12-speaker) ATMOS studios and it's incredible. Truly immersive. Once you've heard this, or worked in that environment, you tend to forget how limited the binaural experience is, which is the way most people will hear immersive audio. I know we're all monitoring in binaural, but the decisions are happening in a surround speaker environment and, frankly, they're just not translating to binaural. Also, I think binaural listening presents only a small fraction of the immersive experience. Even the Sony and new Apple headphones, which are made specifically for spatial sound, don't capture anything near the experience of a proper surround speaker set-up. More people need to be able to hear, experience, and own a proper surround speaker array for this format to become viable. That, or a major change in immersive headphone technology that doesn't just rely on a virtual/perceptual experience.


Interactive music is coming. Soon listeners will be able to create their own mixes of music. At first this seems like a nightmare, but I actually think it's pretty cool. The artists and engineers get to decide how much the listener can manipulate the mix at the authoring stage. So if U2 decides that you can't completely take The Edge's guitar out of the mix, you won't be able to. But this will provide a whole new way to listen to music that I think will actually enhance the listening experience. Imagine listening to a jazz piece and then being able to listen to it again with only the trumpet, to hear the instrument in all its naked glory. That's exciting to me and to any instrumentation fan. Again, we have the potential problem of it not being properly mastered, but I believe this will be figured out at some point.


There is no single immersive format. Currently, the biggest formats are ATMOS and Sony 360. Sony just might be the superior format but, like Beta vs. VHS, it looks like ATMOS is winning out. There is also MPEG-H to add to the mix, which seems super promising and inexpensive. There are also multiple DAW surround panning software platforms, each with its own protocols. Usually variety is good, but there are going to have to be some standardizations if immersive is going to be adopted by consumers. Even with DVD-A and SACD, they eventually came out with players that did both. So streaming platforms are either going to have to adopt all formats, or there is going to have to be some agreement between the technology companies to output a generic, 100% compatible format that works with all platforms, for this to survive.


The Takeaway


I haven’t been into or interested in this latest round of surround music for very long. When it reared its head a few years ago, I had visions of 5.1 from the late '90s and of how I almost invested $100,000 in a surround mastering setup, only to, thankfully, nix the idea at the eleventh hour. 5.1 surround as a music format all but failed. I loved 5.1, but it was better suited to film and television. 5.1 mixes didn’t offer much beyond higher resolution and some spatial content, but at least they were mastered. Some of those releases are still incredible, and if you ever get a chance to hear a proper 5.1 SACD or DVD-A in a well-set-up living room, I highly suggest it. Joni Mitchell’s 2000 release Both Sides Now is an excellent example.


This latest round of surround is very different. Unlike 5.1, it’s fully scalable to over 100 speakers. It’s object-based, so it’s not a channel that’s sent to a speaker or two; it’s an instrument, vocal, or sound that exists in 3D space, and the speakers follow it no matter how many there are. It has the potential to create new ways of mixing, new ways of creating music. It puts the power of surround in creators' hands, especially with the new version of Apple Logic Pro natively including an ATMOS panner and renderer, and MPEG-H offering free tools. The biggest problem and biggest challenge is playback. Binaural is not currently an acceptable playback solution, especially since a proper surround set-up sounds infinitely better. Binaural, to me, doesn’t offer a huge improvement over stereo. In fact, with the lack of mastering for immersive music being an issue, I’d argue that there is no improvement at all; quite the opposite. I’m excited to see where this technology will go, and I’ll be participating in its creation, but until it’s approached in terms of improving the average consumer’s experience over stereo, it’ll just be a niche format doomed to follow in 5.1’s dried-up footsteps.
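
To make the object-based part concrete, here's a toy sketch in Python. It is emphatically not the Dolby Atmos or MPEG-H rendering algorithm; it's just a naive distance-weighted panner that shows why the same object metadata (a position in 3D space) can feed a stereo pair or a twelve-speaker room without anyone remixing anything. The speaker layouts and the vocal position are invented for illustration.

```python
# Toy object renderer: one gain per speaker, derived only from the object's
# position and the speaker layout. NOT a real Atmos/MPEG-H renderer.
import numpy as np

def render_object(position, speaker_positions):
    """Return one gain per speaker for a sound object at `position` (x, y, z)."""
    pos = np.asarray(position, dtype=float)
    spk = np.asarray(speaker_positions, dtype=float)
    dist = np.linalg.norm(spk - pos, axis=1) + 1e-6   # distance to each speaker
    weights = 1.0 / dist**2                           # closer speakers get more signal
    return weights / np.sqrt(np.sum(weights**2))      # constant-power normalization

# The same vocal object rendered to two very different rooms.
stereo_pair = [(-1, 1, 0), (1, 1, 0)]
twelve_speaker_room = [(-1, 1, 0), (0, 1, 0), (1, 1, 0),                 # front bed
                       (-1, 0, 0), (1, 0, 0),                            # sides
                       (-1, -1, 0), (1, -1, 0), (0, -1, 0),              # rears
                       (-1, 1, 1), (1, 1, 1), (-1, -1, 1), (1, -1, 1)]   # heights
vocal_position = (0.2, 0.8, 0.3)
print(render_object(vocal_position, stereo_pair))
print(render_object(vocal_position, twelve_speaker_room))
```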




By Noah Mintz, Senior Mastering Engineer, Lacquer Channel Mastering & Railtown Mastering

LUFS is a measurement that smart people came up with to express level in a way that’s similar to how humans hear. -10 LUFS and above is loud, -20 LUFS and below is quiet, and -14 LUFS is just right…well, so they say. Many streaming services set their normalization levels to about -14. Meaning, if ‘loudness normalization’ is on, the service will set the playback level of the music you’re listening to to about -14 LUFS, regardless of the level at which the music was provided by the artist or record label. Most services will keep tracks at their relative levels when you’re listening to an album, but for playlists or shuffle, they will adjust all songs to the LUFS level set by the service.
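
If you're curious where your own masters land, the measurement itself is easy to run. Here's a minimal sketch using the pyloudnorm and soundfile Python packages; the file name is made up, and the last line is just the rough arithmetic a service normalizing to -14 LUFS would apply.

```python
# A minimal LUFS check -- a sketch, not a mastering tool. "master.wav" is a
# hypothetical file name; pyloudnorm and soundfile are pip-installable.
import soundfile as sf
import pyloudnorm as pyln

data, rate = sf.read("master.wav")            # decode to float samples
meter = pyln.Meter(rate)                      # ITU-R BS.1770 loudness meter
loudness = meter.integrated_loudness(data)    # integrated loudness in LUFS

print(f"Integrated loudness: {loudness:.1f} LUFS")
# Roughly the gain a -14 LUFS normalizing service would apply to this track:
print(f"Offset at a -14 LUFS target: {-14.0 - loudness:+.1f} dB")
```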

The first thing to note is that there is no standard. Every streaming service sets its own level, so you can’t deliver a master that will conform to all of them. Beyond that, most streaming services allow you to turn off loudness normalization (or turn it on, in Apple Music’s case), so if your master is set to a lower level and other songs in the same genre are not, your masters will sound significantly quieter. As of the end of 2021, almost 100% of the top 100 streaming songs in Rock, Pop, Hip-Hop, Alternative, and R&B are mastered much louder than -14 LUFS, so your songs will sound much quieter in a playlist with those songs when loudness normalization is turned off. Why is that an issue? Well, simply put, if all of the preceding songs are loud and your song comes on significantly quieter than the others, rather than raising the volume the listener might just skip the song. We learned this years ago from radio, and it’s what led to the era of the ‘loudness wars’.

This might change in the future, if more streaming services stop letting you turn off loudness normalization (YouTube and SoundCloud already don’t) or if a loudness standard is somehow adopted, but for now it’s still, for the most part, loud masters.

Dynamics are awesome. When I get a mix to master that has dynamics, it’s a dream, but more often than not the mix is already louder than -14 LUFS. You can’t undo loud. Dynamics cannot be recovered, not naturally anyway.

Some music is great at -14 LUFS. Jazz, acoustic, some metal, anything that relies on lots of dynamics shines at -14 LUFS. It’s the same music that doesn’t benefit from extreme digital limiting.

That said, I’ll say this loud and proud: I like loud mastered music. When I listen to Pop and Rap mastered at -14, it seems to be missing something. The energy is taken away. It has less impact and lacks punch. To me, it sounds better when it pops out of your speakers, blasting guitars or synths into your ears. That energy is missing at -14 LUFS.


So, I actually hope there is never a level standard for music. I don’t think we should ever go back to the worst days of the loudness wars. That benefits no one. But a loud master is not a bad thing by default. Master it to the sweet spot, be it -14 LUFS or -8 LUFS. LUFS is a measurement, not a target.