As I have often said in this column, you cannot necessarily rely on your own hearing when making decisions about sound quality and performance. That having been said, that’s exactly what we have to do most of the time in order to get the job done! Can you imagine a mixing console being simultaneously operated by a “committee” of three mix engineers at a live event, so as to even out individual proclivities? So, although we do, indeed, have to rely on our own hearing, we must be mindful that not everyone will hear what we hear, nor will every person react in the same way. Let me explain a little more about what I mean, as well as explore some potential implications.
Let’s start with intelligibility and required performance standards. Almost universally around the globe, it’s now agreed that a sound system used for voice alarm or emergency communication purposes should achieve at least a 0.50 STI value. Although there are variations on how that is defined, such as average versus minimum values, the target is effectively the same: a value agreed to provide adequate intelligibility for the average listener. But what about the non-average listener? Consider the following.
First, there are those who have noticeable hearing loss, a group that makes up around 12 to 14 percent of the population.
Primary-school students and, indeed, those up to about age 14 require far higher intelligibility in order to achieve the same level of speech understanding as compared to adults. For that group, the STI must be ≥0.60 to be equivalent to the 0.50 standard.
People whose first language is not that of the broadcast announcement also need a higher STI in order to understand an announcement or broadcast speech adequately. Again, typically, a value of ≥0.60 is required to be equivalent to the target 0.50 STI. Therefore, if we are designing or setting up a sound system where it’s known that such a group might make up a significant portion of the potential listeners (think of an international airport, a church with an elderly congregation or a school), then, surely, we must take this into account. None of the emergency sound system standards, as far as I am aware, caters to those minorities. (Please get in touch if you know otherwise.)
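The logic above can be sketched as a simple design aid. This is purely illustrative: the 0.50 baseline and the ≥0.60 equivalent for children and non-native listeners come from the discussion above, but the function name, parameters and the decision to apply the same elevated target to hearing-impaired listeners are my own assumptions, not taken from any standard.

```python
BASELINE_STI = 0.50  # common voice-alarm target for average adult listeners
ELEVATED_STI = 0.60  # roughly equivalent intelligibility for children (up to
                     # about age 14) and non-native listeners

def design_sti_target(has_children: bool = False,
                      has_non_native_listeners: bool = False,
                      has_hearing_impaired: bool = False) -> float:
    """Return a suggested minimum design STI for the expected listener mix.

    Hypothetical helper: if any 'non-average' listener group is expected
    to form a significant portion of the audience, aim higher than the
    standard 0.50 target.
    """
    if has_children or has_non_native_listeners or has_hearing_impaired:
        return ELEVATED_STI
    return BASELINE_STI

# An international airport, where many listeners are non-native speakers:
print(design_sti_target(has_non_native_listeners=True))  # 0.6
```

In practice, of course, the venue, the ambient noise and the system design all matter; the point of the sketch is simply that the design target should follow the audience, not the average.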
While on this particular topic, and while considering children in particular, I find myself wondering whether any engineer or legislator ever considered the effect that loud sounds (speech, perhaps, but particularly alarm tones) can have on autistic children. Fire alarms (bells, sirens, horns) are designed to be loud, and they can effectively paralyze autistic people, many of whom are acutely distressed by loud sounds. Thus, instead of getting out of the building or danger zone, they’ll often freeze from panic and remain where they are. That means teachers and care personnel must put themselves in danger by staying with the panicked autistic person and trying to evacuate him or her.
So, do these devices have to be so loud? The simple answer is no, they do not. It’s just that the designer’s brief is to make them loud. I have been in situations in which fire alarms have gone off and the noise level was such that, honestly, I couldn’t think straight. You’re not even able to make a rational decision about which way to go. You just want to get away from the noise, leading you, perhaps, straight into the path of the fire or danger.
While on the topic of fire and smoke alarms, has their effectiveness ever been tested specifically with children? The assumption seems to have been that, if the design and test engineers could hear them, children would, too. Well, had the research been done earlier, the lives of several children would probably have been saved. An investigation after a domestic fire in the UK that claimed six children (ages five to 13) found that standard alarms were ineffective. In a pilot study, more than 80 percent of the 34 children tested did not respond to the standard smoke-detector alarms. In the study, only two children woke up every time the alarms were sounded, and none of the 14 boys awoke at all. Interestingly, replacing the alarm signal with the voice of a parent had a 90-percent success rate. So, let me say it again: Just because you can hear something doesn’t mean that the intended audience can or will.
Distortion is another sound system parameter whose audibility appears to diverge hugely from listener to listener. I don’t know if I am particularly sensitive to distortion (here, I mean harmonic distortion, or THD, not spectral or temporal distortion), but I have often had to point out that systems were unacceptable because of it. You might remember that, last month, I was discussing a church sound system; among a number of issues, it distorted badly when the radio microphones were used, but no one seemed to be aware of it. I also recall an occasion when I listened to a particular loudspeaker with its designer. I had to establish if the unit could produce the SPL required for a project, so we cranked it up. However, several decibels short of what I needed (and what was being claimed), the sound began to distort. I diplomatically said it was disgusting and totally unacceptable.
The designer looked quite offended by my opinion. I couldn’t understand that because, clearly, the speaker was doing a very good impression of being a square wave generator. Only some time later, when formal measurements were made under anechoic conditions, was it shown that the acoustic output was around 6dB lower than claimed. At anything higher, the sound was measured to be grossly distorted. To me, the interesting issue was that he was a very competent loudspeaker design engineer, many of whose other designs I liked very much. So, I couldn’t understand why, apparently, he was happy with the gross distortion that I was hearing. I can only assume that he simply didn’t hear it.
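For reference, the THD figure that an analyzer reports is the RMS sum of the harmonic amplitudes relative to the fundamental. The amplitudes in the example below are invented for illustration; only the formula itself is standard.

```python
import math

def thd(fundamental: float, harmonics: list[float]) -> float:
    """Total harmonic distortion as a ratio: the square root of the sum
    of the squared harmonic amplitudes, divided by the amplitude of the
    fundamental. Amplitudes can be in any consistent unit (volts, Pa)."""
    return math.sqrt(sum(h * h for h in harmonics)) / fundamental

# Hypothetical measurement: second and third harmonics at 10 percent
# and 5 percent of the fundamental's amplitude.
ratio = thd(1.0, [0.10, 0.05])
print(f"{ratio * 100:.1f}% THD")  # prints "11.2% THD"
```

Note that a single percentage hides a lot: low-order harmonics at a few percent can be fairly benign, whereas the hard clipping of a driver at its limit (my "square wave generator") is both high in level and rich in objectionable high-order harmonics, which is partly why two listeners can disagree so strongly about the same number.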
I also find it interesting to note the difference of opinion that exists as to the correct synchronization delay that should be used when setting up a system. (By that, I mean the synchronization of the sound arrivals from different loudspeakers at a listening position.) For example, Haas (of “Haas Effect” fame) found that, under a given set of conditions, 10 percent of the listeners were disturbed by a delay of 42ms. By contrast, the average delay time for disturbance (i.e., 50 percent) was 68ms, and it wasn’t until 90ms that the delay was disturbing to 90 percent of listeners. In a different experiment, he found that a delay of 60ms was disturbing to 10 percent of the listeners, but 50 percent of listeners were not disturbed by the echo until it was increased to 110ms. Those are huge differences. Such experiments, however, help to explain why I find a given mis-synchronization annoying, even as those around me don’t hear the problem.
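The basic arithmetic of time-aligning a delay loudspeaker to a main system is simple enough to sketch. A minimal example, assuming a speed of sound of 343m/s (air at about 20°C): the optional extra offset, a few milliseconds sometimes added so that the main system arrives first and localization stays forward, is a design choice whose exact value engineers argue about, as the Haas figures above suggest.

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def alignment_delay_ms(main_distance_m: float,
                       fill_distance_m: float,
                       offset_ms: float = 0.0) -> float:
    """Delay (in ms) to apply to a fill/delay loudspeaker so its sound
    arrives at the listening position no earlier than the main system's.
    offset_ms is an optional extra few milliseconds, a precedence-based
    design choice rather than a fixed rule."""
    path_difference_m = main_distance_m - fill_distance_m
    return (path_difference_m / SPEED_OF_SOUND) * 1000.0 + offset_ms

# Hypothetical listener 40 m from the main array and 6 m from the
# delay speaker: the fill must wait for the main system's wavefront.
print(round(alignment_delay_ms(40.0, 6.0), 1))  # prints 99.1
```

The calculation is trivial; the contentious part, as Haas's spread of results shows, is how much additional offset (if any) a given listener will tolerate before the second arrival reads as a disturbing echo rather than reinforcement.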
When it comes to setting and optimizing (tuning) the frequency balance of a system, all bets are off, and that’s a discussion for another time. But, until then, just remember that what you are hearing is probably not the same as what everyone else is. So, stop to consider the significant minority populations who might also be listening to your work.