Measurement Methods Explained (RTA? FFT? TDS? WTF?)

This is a basic explanation of the common measurement methods. This is brand-agnostic, with specific products mentioned only for historical context – we are concerned here with what’s happening under the hood – how the data is obtained and displayed, and what that means for our work.

It’s also not exhaustive – there are decades of published work about each of these methods, which I am happy to recommend if you are ever having trouble sleeping. There are also other measurement methods that aren’t super common these days, which I won’t cover. Sorry, MLS fans. But it should be enough to get you equipped with an understanding of the basics of each method and the major differences between them – which helps you decide which method to use in a given situation for best results.

RTA

The Real-Time Analyzer is the oldest and, from a technical perspective, the most primitive measurement method in terms of what it can tell us. It works very simply: it looks at a signal and displays the frequency content of that signal. Easy peasy. It looks like this.

In the analog world this is basically a bunch of bandpass filters with level meters. Digitally the data comes from an FFT (a mathematical operation that breaks a signal down into component frequencies) and is then banded into octaves or fractional octaves.

Traditionally the data is banded into 1/3 octave bands. Most modern RTAs will offer options for higher resolution – 1/6, 1/12, 1/24, or 1/48 octave. There is also octave banding, a common way to view data in acoustics-related work. If we band the entire spectrum together, we have a regular signal level meter. In fact, that’s a good way to think of the RTA – as a bunch of level meters for different frequency ranges.
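
If you’re curious what that banding looks like in code, here’s a minimal numpy sketch. This isn’t any particular analyzer’s algorithm – band edges, weighting, and summing details vary between products – just the basic idea of summing FFT bin power into fractional-octave bands:

```python
import numpy as np

def band_rta(signal, fs, fraction=3):
    """Band an FFT power spectrum into fractional-octave bands and
    return per-band levels in dB (a simplified sketch of an RTA)."""
    n = len(signal)
    power = np.abs(np.fft.rfft(signal)) ** 2          # power per FFT bin
    freqs = np.fft.rfftfreq(n, d=1 / fs)

    # Band centers referenced to 1 kHz, spaced 2**(1/fraction) apart
    centers = 1000.0 * 2.0 ** (np.arange(-6 * fraction, 4 * fraction + 1) / fraction)
    lo = centers * 2.0 ** (-0.5 / fraction)           # lower band edges
    hi = centers * 2.0 ** (+0.5 / fraction)           # upper band edges

    levels = []
    for f_lo, f_hi in zip(lo, hi):
        mask = (freqs >= f_lo) & (freqs < f_hi)
        levels.append(10 * np.log10(power[mask].sum() + 1e-20))
    return centers, np.array(levels)
```

Feed it a 1 kHz sine and the band centered at 1 kHz lights up, just as you’d expect from a bank of bandpassed level meters.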

Different RTAs have different banding algorithms, and may use different FFT parameters (more on that below) to calculate the raw data, so different RTAs might look or “feel” a bit different.

We can also change the averaging (or more technically, integration) time of the RTA. The more averaging we use, the less “jumpy” the meters get and we get a better picture of the signal’s tonal balance over time.
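
Conceptually, that averaging is just a leaky integrator running on each band’s level. A toy sketch (the time constant and exact scheme are analyzer-specific assumptions here):

```python
import numpy as np

def smooth_rta(frames, time_constant_frames=8):
    """Run a leaky integrator over successive RTA frames.
    Longer time constants mean steadier meters but slower response."""
    alpha = 1.0 / time_constant_frames
    avg = np.zeros_like(frames[0], dtype=float)
    history = []
    for frame in frames:
        avg = (1 - alpha) * avg + alpha * frame   # per-band running average
        history.append(avg.copy())
    return history
```

A bigger `time_constant_frames` makes the display settle toward the long-term tonal balance instead of twitching with every snare hit.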

The RTA can only tell us about the signal it sees. It has no idea what happened to it on the way, or how long it took to arrive. Since it studies the signal itself, it tends to correspond pretty well with our hearing, and that’s why it’s a very popular choice for mix engineers to get some visual confirmation on the tonal balance of the mix.

Spectrograph

If we view a series of RTA measurements taken back to back in a scrolling view, we can get better context on how the signal levels change over time. Here’s a split view showing an RTA below and a spectrograph above. Brighter colors correspond to higher levels. This dual view is a big help when mixing. It’s also the easiest way to spot feedback and ringing. Spotting those can be hard on a simple RTA, because once an obvious peak forms, it’s probably already very loud in the room. The spectrograph can reveal a trend of ringing over time as a bright vertical line.

System Optimization with RTA

The RTA has two major failings when it comes to system optimization. The first is that it measures response of a signal itself, not the system through which the signal is passing. When we’re tuning a system, we are interested in what the system does to the signal passing through it, not the signal itself (that’s up to the mix engineer). We can’t tell from looking at the RTA what parts of what we’re seeing are the system response and what parts are the signal (are the subs set too loud, or is this just a bassy track?). We can try to cheat a little bit by using a known input signal called pink noise. It doesn’t have a flat spectrum, but it will appear flat(ish) on the RTA’s banded display.

(Why? An octave is a doubling of frequency, so bands centered at higher frequencies include a wider range of frequencies than lower bands. Pink noise has less and less energy per frequency in the HF. The roll-off is offset by the RTA’s banding. The dark red line here shows the raw data of a pink noise signal before it goes into the banding. The 3 dB / octave rolloff is clearly visible. Higher bands contain a wider frequency range and the result is a flat response.)
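
You can check this numerically: shape white noise to a pink (-3 dB/octave) spectrum, sum the power in octave bands, and every octave comes out at roughly the same level. A minimal numpy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
fs, n = 48000, 1 << 18

# Shape white noise to pink: amplitude ~ 1/sqrt(f), so power ~ 1/f (-3 dB/oct)
spectrum = np.fft.rfft(rng.standard_normal(n))
freqs = np.fft.rfftfreq(n, 1 / fs)
spectrum[1:] = spectrum[1:] / np.sqrt(freqs[1:])   # leave the DC bin alone
pink_power = np.abs(spectrum) ** 2

# Sum the power in octave bands: 62.5-125 Hz, 125-250 Hz, ... 8-16 kHz
edges = 62.5 * 2.0 ** np.arange(9)
band_db = [10 * np.log10(pink_power[(freqs >= lo) & (freqs < hi)].sum())
           for lo, hi in zip(edges[:-1], edges[1:])]

# Every octave holds roughly the same energy, so the banded display is flat
print(np.round(band_db, 1))
```

The per-bin spectrum falls at 3 dB per octave, but each higher band sums over twice as many bins, and the two effects cancel.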

So now that we have a known-flat source, we know that any deviations from flat were caused by the system, right? Well, no, but that was the working assumption for a few decades at least. The truth is more unfortunate: we still have no time information, so we can’t tell what came straight out of the speaker apart from what’s bouncing off the floor, for example, or arriving later from another loudspeaker in a problematic way. Additionally, although we know what the source signal was, the analyzer still doesn’t, so it can’t tell the difference between stuff that’s supposed to be there and stuff that’s not. You can cut 60 Hz all day long, but it’s not going to go away if it’s caused by the HVAC system.

The inherent mismatch here is we’re trying to study system response by studying signal response. Wrong tool for the job.

TDS and Swept Measurements

Time Delay Spectrometry was brought to the audio world in the late sixties by Richard C. Heyser, who was probably one of the most brilliant audio engineers in the history of the field. The techniques were likely being used already in military radar applications, but Heyser is credited with adapting the concept to audio system measurement (in the groundbreaking TEF analyzer). Here is a massive PDF containing an anthology of Heyser’s TDS work. TDS and the sweep-based techniques that followed are where many folks first gain exposure to system measurement. (The freeware analyzer Room EQ Wizard is a log sweep system.)

TDS works by playing a swept sine wave signal through the system, starting low in frequency and rising over time (classic TDS sweeps linearly in frequency; most modern sweep systems, like REW, use a logarithmic sweep). You’ll also hear this called a pink sweep, swept measurement, or chirp.

Since the sweep method means that we’re only sending a single frequency through the system at a given time, we can easily measure harmonic distortion (if we’re putting in 200 Hz and we’re seeing some 400 Hz and 600 Hz coming out, we now have information about the harmonic distortion at 200 Hz). TDS can create a plot of harmonic distortion over frequency, which is very useful for finding problems with loudspeakers.
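
Here’s a rough sketch of that idea at a single frequency, using a tanh soft-clipper as a made-up stand-in for a stressed driver: drive it with a 200 Hz tone, read the output spectrum at the harmonic frequencies, and compute a THD figure.

```python
import numpy as np

fs, f0, n = 48000, 200, 48000          # one second of audio, 1 Hz bin spacing
t = np.arange(n) / fs
tone = np.sin(2 * np.pi * f0 * t)

# Hypothetical "system": a tanh soft-clipper standing in for a stressed driver
output = np.tanh(2.0 * tone) / np.tanh(2.0)

spectrum = np.abs(np.fft.rfft(output * np.hanning(n)))
fundamental = spectrum[f0]                            # bin index == frequency here
harmonics = [spectrum[k * f0] for k in range(2, 6)]   # 400, 600, 800, 1000 Hz

thd = np.sqrt(sum(h ** 2 for h in harmonics)) / fundamental
print(f"THD at {f0} Hz: {100 * thd:.1f}%")
```

A sweep-based analyzer repeats this measurement at every frequency as the sweep passes through, building the distortion-over-frequency plot.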

The brilliance of TDS is that the analyzer uses a swept filter on the measurement signal that sweeps up in frequency in time with the sine sweep source (technically a bit delayed, because of the propagation time through the system). If the sweep moves fast enough, we can actually “window” out reflections in the environment, because they arrive later than the direct sound. By the time they show up, the system has already moved up in frequency and the reflections are ignored. The trade-off is that short sweeps limit both the frequency resolution of the measurement and the lowest frequency we can measure. If you want higher-resolution data or want to study the sub range, you’ll have to use a longer time window and a slower sweep, which means reflections get included in the measurement again.
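
The resolution side of that trade-off is easy to quantify: the frequency resolution of a measurement is roughly the reciprocal of its time window. A tiny illustration:

```python
# Frequency resolution of a windowed measurement is roughly 1 / window length
def lowest_usable_freq(window_seconds: float) -> float:
    """Approximate lowest frequency resolvable within a given time window."""
    return 1.0 / window_seconds

# A 10 ms window (rejecting reflections that arrive >10 ms late) can't resolve
# much below 100 Hz; reaching down to 25 Hz needs a 40 ms window, which lets
# 40 ms worth of reflections back into the measurement.
print(lowest_usable_freq(0.010), lowest_usable_freq(0.040))
```

That reciprocal relationship is why “reject the room” and “see the subs” pull in opposite directions no matter which measurement method you use.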

Another extremely important development was that we now had a known source signal, which we can compare against the system’s output to get time and phase information. This opens up a whole new world for system optimization, because time data is the key to understanding what happened to the signal in the meantime. We view this data as two traces, one for magnitude and one for phase. This type of plot is known as a Bode plot. Time-domain problems (misaligned crossovers, reflections) can cause frequency response deviations, but they can’t be fixed with EQ. TDS allowed us to spot those and avoid trying to EQ something we shouldn’t. (This is the fundamental flaw in Auto-EQ algorithms. Without time information, we will end up using EQ to “fix” things that EQ won’t fix.)

Here is where the waters get a bit choppy. Being able to get a near-anechoic measurement in a room is clearly a valuable ability, especially for loudspeaker design and testing, but is it the best choice for a sound system that’s going to be used in a room? There have been some major clashes amongst some of the top system measurement gurus about this topic, (Bob McCarthy has this to say) and I have no intention of jumping into that fray.

My considerations are more practical: I am not often in a professional situation where I can ask everyone to be quiet while I run swept measurements over and over, and I often have to measure while other folks are doing stuff (background noise in the measurement). We can run multiple sweeps and average them to lower the noise floor, but that takes even longer and you get into diminishing returns (twice the sweeps will usually drop the noise floor by about 3 dB). However, I do have a friend who does all his optimization work with REW and achieves great success (assuming he is given the available time and isolation to do the work).
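
The diminishing returns are easy to demonstrate: averaging uncorrelated noise captures drops the noise floor by about 3 dB per doubling (a 1/sqrt(N) relationship). A quick numpy check:

```python
import numpy as np

rng = np.random.default_rng(1)

def noise_floor_db(n_averages, n_points=100_000):
    """RMS level (in dB) of the average of n uncorrelated noise captures."""
    captures = rng.standard_normal((n_averages, n_points))
    return 20 * np.log10(captures.mean(axis=0).std())

one, two, four = noise_floor_db(1), noise_floor_db(2), noise_floor_db(4)
# Each doubling of the average count buys roughly 3 dB of noise rejection
print(round(two - one, 2), round(four - two, 2))
```

So going from 2 to 4 sweeps buys you the same 3 dB that going from 1 to 2 did, but costs twice as much time.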

Dual-Channel FFT

FFT (Fast Fourier Transform) is a mathematically efficient way of breaking a signal down into its component frequencies. If that sounds familiar, that’s because an FFT is what’s under the hood of a modern RTA. The distinction here is the “dual-channel” bit: it compares what’s going into the system with what’s coming out. The basic difference compared to the above methods is that a dual-channel FFT analyzer can use any signal source, as long as we give the analyzer a copy. That’s where the term dual-channel comes from: the Reference signal is what’s going in, and the Measurement signal is what came out. The analyzer shows us the difference between the two.
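
In sketch form, the analyzer’s job looks something like this. The filter here is a made-up stand-in for the system, and real analyzers differ in their windowing and averaging details; this just shows the classic averaged cross-spectrum (H1) estimate using scipy’s spectral tools:

```python
import numpy as np
from scipy.signal import csd, welch, lfilter

fs, n = 48000, 1 << 17
rng = np.random.default_rng(2)
reference = rng.standard_normal(n)               # the copy we hand the analyzer

# Hypothetical "system": a one-pole low-pass filter standing in for the rig
measurement = lfilter([0.3], [1.0, -0.7], reference)

# H1 transfer-function estimate: averaged cross-spectrum over auto-spectrum
freqs, Pxy = csd(reference, measurement, fs=fs, nperseg=4096)
_, Pxx = welch(reference, fs=fs, nperseg=4096)
H = Pxy / Pxx

magnitude_db = 20 * np.log10(np.abs(H))          # one trace of the Bode plot
phase_deg = np.angle(H, deg=True)                # the other
```

Note that `reference` is just random noise here; it could be music or the board mix and the estimate of H would come out the same. That’s the source independence.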

The term “transfer function” describes this concept of what happened between the input and output of a system. A TDS measurement is also a transfer function measurement, it’s just obtained differently. The end result is still a Bode plot. (Here, phase is on top and magnitude is below. Some people, myself included, prefer to view the data the other way around. It’s a matter of preference.)

Meyer Sound got the dual-channel measurement ball rolling with the SIM (Source-Independent Measurement) analyzer. Most (but not all) of the industry-standard analyzer platforms are dual-channel FFT systems. Source independence means we can use anything we want as a test signal – pink noise, music, sine sweeps, the board mix, even Ed Sheeran tracks in an emergency.

The measurement happens in real time, so you can make measurements as quickly as you can press a button. In fact, the slowest part of the process is usually moving the mic around. Real-time measurements mean that we can get a high number of averages – better noise immunity – in a couple seconds. A feature called Coherence compares the results of successive measurements to the source signal and indicates how well everything matches up. If successive measurements are similar, and similar to the source signal, coherence is high. If the data is changing quickly or doesn’t match the source signal, coherence will drop. This is a big help for spotting stuff like noise, reverberant energy, and reflections.
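
Scipy ships a textbook version of this calculation, which makes the behavior easy to demonstrate (the filter and noise below are invented for illustration, not any analyzer’s internals):

```python
import numpy as np
from scipy.signal import coherence, lfilter

fs, n = 48000, 1 << 17
rng = np.random.default_rng(3)
reference = rng.standard_normal(n)

clean = lfilter([0.5, 0.5], [1.0], reference)     # system output, no noise
noisy = clean + 2.0 * rng.standard_normal(n)      # same output + loud noise

f, coh_clean = coherence(reference, clean, fs=fs, nperseg=4096)
f, coh_noisy = coherence(reference, noisy, fs=fs, nperseg=4096)

# Coherence stays near 1 while the output tracks the source, and collapses
# when uncorrelated junk (noise, late reverberant energy) contaminates it
print(round(float(coh_clean.mean()), 2), round(float(coh_noisy.mean()), 2))
```

In practice that means the coherence trace tells you how much to trust each part of your magnitude and phase traces before you reach for an EQ.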

The end result is that we can measure more quickly and at a lower level using our choice of program material, so a dual-channel system is probably the friendliest choice for working on a system when other stuff is happening around us (both for us and for them).

Without getting too mathy, 2FFT has one cool trick up its sleeve – remember that nice feature of TDS where we can window out the late-arriving reflection energy? You can get the same result out of a dual-channel FFT system simply by using trace smoothing, with the added benefit that you can retain LF data that would have been windowed out by TDS.

The dual-channel platform is not the best choice for every job. For example, it can’t separate out stuff like harmonic distortion. There’s an argument that we don’t want to, because it’s part of how the system sounds. I agree with that; however, sometimes we need to measure distortion for bench tests and the like, and for that, a log sweep is the ticket.

Just as the RTA is a bad choice for measuring the response of a system, a dual-channel analyzer isn’t helpful for measuring the response of a signal, only the change in it. A 2FFT analyzer won’t work for spotting feedback – it’s coming out of the PA and it’s coming out of the console, so it’s in both signals and won’t show up in the measurement.

There are a lot of mathy options under the hood of an FFT analyzer, but they’re far beyond the scope of this basic overview (and the good news is most users have no need to adjust them).

Which one should I use?

My view is that all of these methods have strengths and weaknesses and by understanding them we can pick the best one for a given task. The measurement’s function is ultimately to give me more information on which to base my decision, and so it follows that I should use whichever method gives me the most helpful data for what I’m hoping to learn. My day to day work involves all of the measurement methods described above. I know we have some measurement ninjas here, so feel free to jump in with thoughts and comments as well.

A quick note on the “which platform should I buy” question: Most measurement platforms (with the exception of SIM and TEF, which require special hardware) offer free demos, so you can download them, try working with them, and make up your own mind. There are also many books, classes, and articles, and most of the knowledge you’ll gain is platform-agnostic, so it’s all very helpful.

I will close with a quote from Adam Black, one of the original developers of the Smaart analyzer platform (disclosure: I am an independent contractor for Rational Acoustics, developers of Smaart), discussing a comparison between Smaart and SIM:

I’m also a proponent of having a large array of tools at my disposal. And a devout believer in using the right tool for the task at hand, whatever it may be.

This may seem an odd sentiment coming from someone who is financially bound to the success of Smaart. But this parity doesn’t change my belief. I see threads of this comparative nature somewhat frequently and they always leave me bemused as they often turn negative. Why must choice of measurement tool be limited to only one? Why must using one tool (seemingly) disparage the use of another? They are just tools, use what works best for the task at hand.
