
What is sound normalization and how to apply it?

11 August 2019

Mention the word “normalization” in a group devoted to sound recording and processing, and some “experts” will tell you that normalization is an amateur shortcut that should never be used. Others will tell you it is a useful feature. So who is right? Let’s break down the myths that surround this term.


What is this?

There are two types of normalization: peak and average. Peak normalization finds the maximum (peak) level of the audio file and then raises (or lowers) the whole file so that this peak sits at the target. The target peak level is often 0 dBFS (the maximum available value), but it can be something else, for example 6 dB below full scale. Average normalization measures the average level of the audio file and likewise raises or lowers it to the target.
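For concreteness, here is a minimal NumPy sketch of peak normalization as described above. The array name `samples` and the `target_dbfs` parameter are illustrative, not taken from any particular DAW or library.

```python
# A minimal sketch of peak normalization, assuming "samples" is a float array
# in the range -1.0..1.0 and "target_dbfs" is the desired peak level in dBFS.
import numpy as np

def peak_normalize(samples: np.ndarray, target_dbfs: float = -6.0) -> np.ndarray:
    """Scale the whole signal so its highest peak sits at target_dbfs."""
    peak = np.max(np.abs(samples))
    if peak == 0:
        return samples                         # silent file: nothing to scale
    target_linear = 10 ** (target_dbfs / 20)   # convert dBFS to a linear amplitude
    return samples * (target_linear / peak)    # one constant gain for the entire file
```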



What is the difference between the average and peak levels?

Shure drew a memorable analogy: the average height of the Himalayas is 5,400 m, but the peak of Everest is 8,848 m. Volume meters show an average signal level, so short percussive sounds often fail to register a high meter reading even when the signal is actually distorting.

Our ears judge loudness mostly by average level, not by peaks. A percussive hit with a fast decay, for example, can have high peaks but a low average level, so overall it does not sound that loud. A distorted guitar chord that sustains at high volume, on the other hand, may have the same peak level as the percussive hit, but because of its higher average level it will sound much louder. Many DAWs offer meters that show both peak and average levels.
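The difference is easy to see numerically. The snippet below builds two synthetic stand-ins, a fast-decaying hit and a sustained square-wave “chord”, and prints their peak and RMS (average) levels; the signals are invented purely for illustration.

```python
# Toy comparison: two signals with nearly identical peaks but very different
# average (RMS) levels. Both waveforms are synthetic stand-ins, not real recordings.
import numpy as np

def peak_db(x):
    return 20 * np.log10(np.max(np.abs(x)))

def rms_db(x):
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

sr = 44100
t = np.linspace(0, 1, sr, endpoint=False)
hit = 0.5 * np.sin(2 * np.pi * 200 * t) * np.exp(-40 * t)   # fast decay: high peak, low average
chord = 0.5 * np.sign(np.sin(2 * np.pi * 110 * t))          # sustained square wave "chord"

print(f"hit:   peak {peak_db(hit):.1f} dB, rms {rms_db(hit):.1f} dB")
print(f"chord: peak {peak_db(chord):.1f} dB, rms {rms_db(chord):.1f} dB")
# The peaks come out almost identical, while the RMS levels differ by roughly 20 dB.
```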

— See also: The best sequencers for phones and tablets —

Does normalization degrade sound?

Can normalization really degrade the sound, as some people claim? Yes, it can, if you build a time machine, travel back to the mid-1980s, and run Sound Designer on a Mac Plus. In the era of 16-bit audio engines, almost any processing could theoretically cause degradation, because many operations rounded the digital numbers that represent the sound. Stack several such processes, each adding its own rounding error, and the accumulated errors could become audible. With modern high-resolution audio engines (such as 32-bit floating point), this is simply not a problem: the math handles the numbers with far more precision than the audio needs.
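As a rough illustration of the rounding argument (a toy model, not a simulation of any real DAW), the sketch below applies fifty small gain changes to the same material stored as 16-bit integers versus 32-bit floats and compares how far each path drifts from the original.

```python
# Toy experiment: repeated gain changes with integer rounding after every step
# (old 16-bit style) versus 32-bit float processing. The gains cancel out overall,
# so any remaining difference from the original is accumulated processing error.
import numpy as np

rng = np.random.default_rng(0)
signal = rng.uniform(-0.5, 0.5, 44100)

def process_16bit(x, steps=50):
    y = np.round(x * 32767).astype(np.int16)             # quantize to 16-bit
    for gain in [0.9, 1 / 0.9] * (steps // 2):
        y = np.round(y.astype(np.float64) * gain)        # re-round after every step
        y = np.clip(y, -32768, 32767).astype(np.int16)
    return y.astype(np.float64) / 32767

def process_float(x, steps=50):
    y = x.astype(np.float32)
    for gain in [0.9, 1 / 0.9] * (steps // 2):
        y = y * np.float32(gain)                          # no integer rounding between steps
    return y.astype(np.float64)

print("16-bit path error:", np.max(np.abs(process_16bit(signal) - signal)))
print("float path error: ", np.max(np.abs(process_float(signal) - signal)))
# The integer path accumulates noticeably more error than the float path.
```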

Doesn’t normalization affect the signal-to-noise ratio of an audio file?

No. It is no different from turning up a volume control. If the noise floor of the audio file is at -88 dB and its peak is at -12 dB, normalizing the peak to -6 dB makes everything 6 dB louder: the noise floor moves to -82 dB and the peak to -6 dB. The signal-to-noise ratio does not change, although the absolute noise floor is now 6 dB higher.
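You can confirm the arithmetic of that example in a couple of lines; the variable names are ad hoc.

```python
# Raising everything by the same gain moves both the noise floor and the peak,
# so their difference (the signal-to-noise ratio) stays the same.
noise_floor_db, peak_level_db = -88.0, -12.0
gain_db = -6.0 - peak_level_db          # gain needed to bring the peak to -6 dBFS

snr_before = peak_level_db - noise_floor_db
snr_after = (peak_level_db + gain_db) - (noise_floor_db + gain_db)
print(gain_db, snr_before, snr_after)   # 6.0, 76.0, 76.0
```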

— See also: 10 best vocal tuning plugins worth a try —

Phrase-by-phrase vocal leveling – is it helpful?

You do not always have to normalize to 0 dB; you can normalize to any target level. In vocals, some phrases are often sung quieter or louder than the rest. By isolating those phrases and normalizing each one to a suitable level, you can create a more consistent, even vocal part (and reduce, or even eliminate, the need for a compressor). Of course, vocals have natural dynamics that you don’t want to sterilize, so use your ears and adjust levels only where they actually need adjustment.
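A rough NumPy sketch of the idea is shown below. The phrase boundaries are assumed to come from somewhere else (manual markers or a silence detector), and the -18 dBFS RMS target is just a placeholder; in practice you would also check that no phrase ends up clipping after the gain is applied.

```python
# A sketch of leveling vocal phrases one by one, assuming "phrases" is a list of
# (start, end) sample indices supplied by hand or by a silence detector, and
# "target_dbfs" is a hypothetical per-phrase RMS target.
import numpy as np

def level_phrases(vocal: np.ndarray, phrases: list[tuple[int, int]],
                  target_dbfs: float = -18.0) -> np.ndarray:
    out = vocal.astype(np.float64).copy()
    target = 10 ** (target_dbfs / 20)
    for start, end in phrases:
        segment = out[start:end]
        rms = np.sqrt(np.mean(segment ** 2))
        if rms > 0:
            out[start:end] = segment * (target / rms)   # one constant gain per phrase
    return out
```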

It is important to remember that normalization is a tool, and like any tool, the results depend on who uses it and how. Do not be afraid of normalization, but do not hit the “normalize” button on every source either. As mentioned at the beginning, both sides are partly right: choose the right tool for the job and you will be fine.


About the author: mix-master
