Latency In Sound System Design

Keeping latency to a minimum can help you avoid common pitfalls.

With more and more parts of the audio chain going digital, it is important to consider the issue of latency in any given signal path. Any time there is sound reinforcement, latency can cause problems that range from presenters hearing an echo from nearby loudspeakers to stage performers hearing a tonal shift due to comb filtering between the sound in their own head and the signal coming from their cueing feed. Now, with digital wireless microphones becoming more common and digital in-ear monitoring (IEM) systems on the horizon, it’s not just the mixers, routers and processors we must contend with; we must look at nearly the entire signal chain when considering the roundtrip latency.

Issues of latency are often misunderstood. (We saw this when our Digital Hybrid Wireless system was introduced for film and TV production nearly 15 years ago.) Because there’s been a flareup of discussion about this subject in recent years, let’s break it down to the different areas of concern.

Overall system latency, or delay, from the microphones through to the loudspeakers. This is primarily a concern when “timing out” a system to favor time-of-arrival from the acoustic source, thus delaying most of the loudspeakers in the venue.
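As a back-of-the-envelope illustration of that "timing out" arithmetic (the 343 m/s speed of sound assumes air at roughly 20°C; the distances are hypothetical):

```python
SPEED_OF_SOUND_M_S = 343.0  # dry air at roughly 20 degrees C

def alignment_delay_ms(listener_to_source_m: float, listener_to_speaker_m: float) -> float:
    """Delay to add to a fill loudspeaker so its output arrives at the listener
    no earlier than the sound traveling directly from the acoustic source."""
    extra_path_m = listener_to_source_m - listener_to_speaker_m
    return max(0.0, extra_path_m / SPEED_OF_SOUND_M_S * 1000.0)

# A listener 20 m from the stage but only 2 m from an under-balcony fill:
print(f"{alignment_delay_ms(20.0, 2.0):.1f} ms")  # → 52.5 ms
```

Note that any electronic latency in the chain feeding that loudspeaker eats into (or overshoots) this acoustic delay figure, which is why the per-device numbers discussed below matter.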

Performers who use IEM systems. Here, “roundtrip” latency figures, involving the entire sound system, play a crucial role.

“Pure” digital wireless mic and IEM systems are a major emerging technology today. These units convert the audio to digital format in the transmitter, and then modulate the RF via one of the various digital methods (e.g., QPSK, 8PSK, 16-QAM). The main advantages of that approach are, first, to prevent the audio distortions found in analog systems (companding and pre- and de-emphasis distortion) and, then, with advanced modulation methods, to offer a number of potential tradeoffs in terms of audio resolution, range and channel density. Several systems currently on the market have those tradeoff selections right in the menu.

By operating at very low power with spectrally efficient modulation, some of this technology achieves channel counts per unit of available spectrum beyond what a typical analog wireless system can deliver. However, the tradeoffs are reduced range (a function of the low power) and reduced audio quality compared with the best hybrid or high-definition digital settings.

The digital wireless systems currently on the market exhibit latency figures from about 1.4ms to 4ms or more, which must then be figured into the overall delay calculations when timing zones in a distributed system. The main contributor to latency in any such system is the sampling process, usually in the A/D or D/A stage. In other words, delay is incurred when the signal is converted to digital in the first place; then, it is incurred again when it is converted back to analog.

What often is missed in the overall addition of latency is the resampling process. Any time two digital units are connected, and the sample rate is not the same between them, there is resampling involved. That can incur additional latency—from very small amounts (say, less than 1ms) to significant amounts (2ms to 3ms or more). Considering how many devices are often in audio systems today, it can add up to a significant problem.
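A simple latency budget keeps this in view during system design. The sketch below uses invented per-device figures purely for illustration; real numbers come from each manufacturer's datasheet and vary with sample rate and buffer settings:

```python
# Hypothetical per-device latencies in milliseconds (illustrative only;
# consult the actual datasheets for any real design).
chain_ms = {
    "digital wireless mic (A/D + RF link + D/A)": 2.9,
    "console input A/D": 0.5,
    "console processing": 1.0,
    "drive rack processing": 1.5,
    "digital IEM transmitter": 2.5,
}
mismatched_boundaries = 2   # device-to-device hops running at different sample rates
resample_penalty_ms = 1.5   # assumed cost per mismatch; real values vary widely

total_ms = sum(chain_ms.values()) + mismatched_boundaries * resample_penalty_ms
print(f"Estimated round-trip latency: {total_ms:.1f} ms")  # → 11.4 ms
```

The point of the exercise is the second term: two sample-rate mismatches add as much delay here as a whole extra processor, and they are easy to overlook because nothing in the signal path visibly changes.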

Let’s take, for example, a house of worship that is installing a new sound system with digital wireless microphones, a digital mixing console, a digital “drive rack” and a digital IEM system, with the whole system connected via Dante networked audio. If the units in the chain are not all running at the same sample rate, the accumulated latency could reach 10ms or more! Certainly, some performers who use IEMs are likely to notice a tonal shift due to the comb filtering between their own voice (heard via bone conduction) and the delayed signal through the system. With delays of that magnitude, some might even sense that transients are off in time, such as vocal consonants or fast passages on a guitar.
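That comb filtering is easy to quantify: summing a signal with a copy of itself delayed by τ seconds produces cancellation notches at odd multiples of 1/(2τ). A minimal sketch (the 10ms figure mirrors the worst case above):

```python
def comb_notches_hz(delay_ms: float, max_hz: float) -> list[float]:
    """Notch frequencies produced when a signal sums with a copy
    of itself delayed by delay_ms milliseconds."""
    tau_s = delay_ms / 1000.0
    notches, k = [], 0
    while (f := (2 * k + 1) / (2 * tau_s)) <= max_hz:
        notches.append(f)
        k += 1
    return notches

# A 10 ms delay puts the first notch at 50 Hz, with further notches
# every 100 Hz after that, right through the vocal range:
print(comb_notches_hz(10.0, 500.0))
```

With notches spaced only 100Hz apart, the effect lands squarely in the fundamental range of the voice, which is why IEM users tend to notice it before anyone else does.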

Anyone who has spoken at a live event is certainly aware that sound from the main speakers returns from the room as an echo. However, none of us wants to have a delayed signal in our hot spot or other nearby speaker, and certainly not in an IFB or another cueing system.

In order to avoid such problems, adhere to the following general rules:

1. Whenever possible, choose devices with the lowest possible latency. Check the figures at the different sample rates the system offers.

2. Run all devices at the same sample rate, ideally the highest rate that every device in the chain supports, to minimize any potential for resampling.

3. If possible, use a central clock for all devices, assuming they have a clock input.

4. Avoid any unnecessary processing in the signal chain. Some plugins add latency, even if bypassed.

5. Finally, test the signal chain with audio analysis equipment or software to determine if your calculated latency actually matches the real-world system. If not, recheck some of the above factors, or contact the manufacturers of the equipment for assistance.
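For step 5, one common measurement approach is a loopback test: play a known signal through the chain, capture the return, and locate the peak of the cross-correlation between the two. A minimal sketch using NumPy, where a synthetic 480-sample offset stands in for a real captured return:

```python
import numpy as np

def measure_latency_ms(reference: np.ndarray, captured: np.ndarray, sample_rate: int) -> float:
    """Estimate the delay of `captured` relative to `reference` by finding
    the lag at which their cross-correlation peaks."""
    corr = np.correlate(captured, reference, mode="full")
    lag_samples = int(np.argmax(corr)) - (len(reference) - 1)
    return lag_samples / sample_rate * 1000.0

# Synthetic check: delay a noise burst by 480 samples at 48 kHz (= 10 ms).
rng = np.random.default_rng(0)
reference = rng.standard_normal(4800)
captured = np.concatenate([np.zeros(480), reference])
print(f"{measure_latency_ms(reference, captured, 48000):.1f} ms")  # → 10.0 ms
```

In a real measurement, the captured signal would come from an audio interface looped through the system under test, and the interface's own round-trip latency must be measured separately and subtracted.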

Latency should not be a problem in and of itself, as long as you have considered it in your system plan for zone timing and other factors. If you keep it to a minimum by using the above steps, you will avoid some of the pitfalls that can arise in today’s sophisticated digital audio systems.
