The other day, I was carrying out some routine tests and wanted to check the level and frequency of the test signal I was using. For various reasons, I had several meters to hand, including two old but trusty analog units. I thought I would compare the results and ease of measurement with their modern digital counterparts. In these days of digital meters and readouts, we have become all too accustomed to, and mesmerised by, the apparent, but often completely unnecessary, accuracy of meter displays indicating values to four or five decimal places.
But are the measurements sufficiently accurate to justify such precision? Note here the difference between “accuracy” and “precision.” (It is quite possible to make a precise measurement inaccurately, just as it is possible to make an accurate measurement imprecisely.) To these measurement descriptors, I might also add the concept of measurement resolution, which is often confused with precision but is actually quite different: it refers to the resolving power of the measurement system, e.g., 0.1V.
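The distinction is easy to see in a quick simulation. The sketch below (my own illustration, not from any meter's specification) models a meter reading as the true value plus a systematic bias (which degrades accuracy) plus random noise (which degrades precision), quantized to a display resolution. The bias and noise figures are arbitrary values chosen for the demonstration:

```python
import random
import statistics

def simulate_readings(true_value, bias, noise_sd, resolution, n=1000):
    """Simulate n meter readings of a signal at true_value (mV).

    bias       -> systematic error: hurts accuracy
    noise_sd   -> random scatter:   hurts precision
    resolution -> smallest displayable step, e.g. 0.01 mV
    """
    readings = []
    for _ in range(n):
        raw = true_value + bias + random.gauss(0, noise_sd)
        # Quantize to the display resolution
        readings.append(round(raw / resolution) * resolution)
    return readings

random.seed(1)
# Precise but inaccurate: tight scatter around the wrong value
precise_inaccurate = simulate_readings(100.0, bias=0.5, noise_sd=0.02, resolution=0.01)
# Accurate but imprecise: wide scatter centred on the right value
accurate_imprecise = simulate_readings(100.0, bias=0.0, noise_sd=0.5, resolution=0.01)

print(statistics.mean(precise_inaccurate), statistics.stdev(precise_inaccurate))
print(statistics.mean(accurate_imprecise), statistics.stdev(accurate_imprecise))
```

The first meter repeats almost the same wrong answer every time; the second scatters widely around the right one. Averaged over enough readings, the imprecise-but-accurate meter actually lands closer to the truth.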
Further parameters to consider are those of measurement tolerance and measurement uncertainty. The measurement tolerance usually directly relates to accuracy while the uncertainty depends also on the measurement conditions and variability.
It seems to me that we are often using and accepting apparently precision data that is actually of dubious accuracy! For example, all I needed to know was the approximate frequency and voltage of the test signal. For various reasons, I was also using an old, but very accurate, analog audio millivolt meter, and was intrigued by the differences between this and the digital meters. In one test, the signal was quite dynamic, and it was almost impossible to obtain a sensible reading with the digital meter. (I will leave you to figure out where “sensible” fits in with precision, accuracy, and resolution!)
However, the analog meter, with its needle display, worked just fine, particularly as I could slow the integration time of the display and obtain an averaged result, something you can only do on really fancy and expensive digital meters. Out of curiosity, I compared the results of the tests where either the digital or the analog meter could be used. This then produced further uncertainty, as I now had two slightly different readings (one analog and one digital), so which one was right? Time to call on the services of a third meter to arbitrate. In the end, I went for a majority decision as shown in Table 1, using four digital meters and two analog ones.
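The needle's mechanical damping amounts to a low-pass filter on the signal level. A rough software analogue is an exponential moving average, sketched below with a made-up "dynamic" signal that jumps between 100mV and 140mV (the values and the `alpha` setting are assumptions for illustration only):

```python
def smooth(samples, alpha=0.05):
    """Exponential moving average: a crude stand-in for the
    mechanical damping (integration time) of an analog needle.
    Smaller alpha -> longer integration time, steadier reading."""
    avg = samples[0]
    out = []
    for s in samples:
        avg = alpha * s + (1 - alpha) * avg
        out.append(avg)
    return out

# A "dynamic" signal: level alternating between 100 mV and 140 mV
signal = [100.0, 140.0] * 50
steady = smooth(signal)
print(round(steady[-1], 1))  # settles near the 120 mV average
```

A fast digital sampler would flash between the two extremes, while the damped "needle" settles on the average level, which is often what you actually want to know.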
While the digital displays were much easier to read, these readouts can lull you into a false sense of security. Take, for example, the readings produced by meters 1 and 3. Meter 1 gave a reading of 99.82mV, while Meter 3 showed 100.06mV. Although, to some extent, the difference is academic, it is interesting to note that there was a difference, albeit only 0.24mV. The point is that both meters are capable of making a reading with 0.01mV resolution (an order of magnitude greater precision) but, clearly, with a difference in accuracy. Meter 1 was the more expensive of the two, although that is not necessarily any measure of potential accuracy because Meter 1 has a lot more built-in functions.
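To put that 0.24mV disagreement in perspective, a quick calculation (my own sketch, using the two readings quoted above) expresses it as a voltage ratio in decibels:

```python
import math

def db_ratio(v1_mv, v2_mv):
    """Voltage ratio expressed in decibels: 20*log10(V1/V2)."""
    return 20.0 * math.log10(v1_mv / v2_mv)

# The 0.24 mV disagreement between Meter 3 and Meter 1
print(round(db_ratio(100.06, 99.82), 3))  # about 0.02 dB
```

In audio terms, 0.02dB is far below anything audible, which is one reason the difference is, as noted, largely academic.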
However, reading the specifications for each of the meters sheds some further light on the measurements (uncertainty actually!). Meter 3 has a specified accuracy of 0.4%. In other words, the reading should be within 0.4% of the correct value. Therefore, for a 100mV signal, the meter could read between 99.6mV and 100.4mV. The fact that the display read 100.06mV is, therefore, potentially misleading. Meter 1 has a specified accuracy of 0.5% and so could read between 99.5mV and 100.5mV.
Interestingly, the signal generator output has a specified accuracy of ±2.5%, so it could be outputting a signal between 97.5mV and 102.5mV. So the probability is that the signal was 100mV, but we don’t know exactly; that is the uncertainty! Meter 2, with a reading of 99.3mV, had the lowest reading, but its accuracy is 1%. This is still very good, but the other two meters were twice as accurate. The reading of 99.3mV was, therefore, within its accuracy and uncertainty.
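The arithmetic above is simple enough to automate. The sketch below turns each reading and its specified accuracy into a window of plausible true values, using the figures quoted in the text. (It applies the percentage to the reading itself, a simplification; real meter specs are usually quoted against the true value and often add a "plus N digits" term.)

```python
def accuracy_window(reading_mv, accuracy_pct):
    """Range of true values consistent with a reading, given the
    meter's specified accuracy as a percentage (simplified: no
    'plus N digits' term, percentage applied to the reading)."""
    delta = reading_mv * accuracy_pct / 100.0
    return reading_mv - delta, reading_mv + delta

# Readings and spec accuracies quoted in the text
meters = {
    "Meter 1": (99.82, 0.5),
    "Meter 2": (99.3, 1.0),
    "Meter 3": (100.06, 0.4),
}
for name, (reading, acc) in meters.items():
    lo, hi = accuracy_window(reading, acc)
    print(f"{name}: {lo:.2f} mV to {hi:.2f} mV")
```

Run it and you will find every window comfortably contains 100mV: despite the differing displays, all three meters are telling a consistent story once their uncertainties are taken into account.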
Interestingly, Meter 4, although a good meter, was considerably less expensive, being about half the price of Meter 2, which was half the price of Meter 3 that, in turn, was about half the price of Meter 1!
The analog Meter 5 cost more than all of the digital meters put together but only has an accuracy of 6%. This is actually very good for an analog meter and an almost vintage piece of kit. Therefore, it could be expected to read between 94mV and 106mV, so a reading of 97mV is pretty good. Meter 6 is a vintage unit and I would expect it to have an accuracy of 5% to 6%.
OK. So much for a 1kHz signal. Now what happened when I measured one at 3.6kHz? All I needed to know was that the test frequency was around 3.6kHz at a level of about 800mV. Table 2 sets out the readings I obtained. All four of the digital meters concluded that the frequency was 3.65kHz and I could probably safely say it was 3.652kHz, particularly because the accuracy of the top two meters was 0.003% to 0.005%.
But what of the level? Here, I found a significant discrepancy between the units. Clearly, the signal was notionally 800mV, although in all probability, 795mV. Clearly, Meters 2 and 4 were way off the mark, indicating the signal voltage to be 700mV or less.
The point of all this is that no measurement we make is exactly precise, but is it accurate and precise enough? Unless you know the accuracy of your test equipment, you won’t know what you have actually measured! Luckily, the dB scale can hide a multitude of sins…but more about that next month. In the meantime, I am off to do some Christmas shopping. I wonder if Santa will give me a new meter for Christmas. Oh, the uncertainty of it all!