Listener satisfaction survey

Case study 14

In most other industries, this would be called a "customer satisfaction survey". But listeners to broadcast media aren't customers - they don't actually buy anything. Even so, they're just as important. Without listeners, a broadcaster would be nowhere.

This is a story about a survey I organized, which seemed reasonable at the time, but turned out to be a mistake.

The broadcasting organization wanted to find out how well its planned goals were being achieved. So a colleague and I looked at the mission statements of the broadcaster's several networks - each network had its own. From each mission statement, we identified a number of key goals. For example, one radio network wanted to be Australia's finest broadcaster of classical and serious music. So we built a survey questionnaire from a number of statements about how well each network had achieved its goals, and asked listeners to that network for their opinion of each statement. For example: "Radio MC satisfies its listeners' needs for classical and serious music extremely well."

Each such statement was strongly worded, and we invited listeners to agree or disagree with it, either mildly or strongly. For each statement, the possible responses were: agree strongly, agree mildly, disagree mildly, and disagree strongly. We also had a "can't say / don't know" category.

We then did the survey, and produced a report including statements such as "88% of listeners of Radio MC agreed (strongly or mildly) that the network satisfies its listeners' needs for classical and serious music extremely well."

What could be wrong with that? you might wonder. The approach seems, on the face of it, unbiased and innocuous.

Answer: several things are wrong with that approach. But you have to think ahead to find out what's wrong.

(1) If 88% of listeners agree with such a statement, the network is going to pat itself on the back, bask in perceived adulation, and make no changes.

(2) Other findings, relating survey results to organizational performance, show that such an approach is a recipe for disaster.

(3) If 88% "strongly or mildly" agree, you'll usually find that maybe 30% "strongly agree" and the other 58% or so (the great majority) only "mildly agree." (A short numerical sketch follows this list.)

(4) In places where a competing network existed, you'd probably find that those who "mildly agreed" had the same opinion (or a better one) of the competing network.

(5) But perhaps the biggest problem is not the large conceptual difference between mild and strong agreement: it's the idea that the network dictates the terms. The inherent attitude is "we know what we're trying to do, and we don't care what our listeners' priorities are." The goals of the network were not developed by consulting the listeners, or even the potential listeners. They were developed by network managers, in total isolation from what the listeners might have wanted. (The listeners weren't asked what they wanted - they were only asked to agree or disagree with management-originated statements.)
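To make the arithmetic in point (3) concrete, here is a minimal sketch in Python. The response counts are invented - they simply reproduce the 88% example above - but they show how a combined "strongly or mildly" figure can hide the fact that only a minority agree strongly.

```python
# Hypothetical response counts for one statement, out of 1,000 listeners.
# The numbers are invented to reproduce the 88% example above.
responses = {
    "agree strongly": 300,
    "agree mildly": 580,
    "disagree mildly": 70,
    "disagree strongly": 30,
    "can't say / don't know": 20,
}

total = sum(responses.values())  # 1,000

# The figure the original survey reported: strong and mild agreement combined.
agree_any = responses["agree strongly"] + responses["agree mildly"]
print(f"Agreed (strongly or mildly): {agree_any / total:.0%}")  # 88%

# The figure the revised approach would report: strong agreement only.
print(f"Agreed strongly:             {responses['agree strongly'] / total:.0%}")  # 30%
```

Reported as a single 88% figure, the minority of genuinely enthusiastic listeners is indistinguishable from the lukewarm majority.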

We repeated this survey every year, for about five years, and each year some key measures of satisfaction fell. We told ourselves "this is affecting all industries - people are becoming more choosy - customer satisfaction levels with everything are dropping - so this network isn't doing too badly at all."

With the benefit of hindsight - and a more detailed study of customer satisfaction research - I'd do it very differently now. This is how I'd approach it.

(1) Survey the listeners (and potential listeners) to see which measures were most important to them.

(2) Get estimates of how well the network was doing on each of these listener-defined variables. (This would probably have to be in a second survey.)

(3) Only use the highest point on each scale - i.e. strong agreement. "Mild agreement" is almost irrelevant - the only reason for using it is to make the figures look better, to reassure the managers that all is well, even when it isn't. (The sketch after this list combines these steps.)

(4) Set up (outside the research program) an improvement process - so that for measures where the network received a less than ideal score, concrete steps could be taken to improve it.
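As an illustration of how the revised approach might be reported, here is a small sketch with invented measures and figures. It assumes each measure came from the listeners themselves, with two numbers per measure: the proportion rating it very important, and the proportion agreeing strongly that the network delivers it. Measures with a large gap between the two would feed the improvement process in step (4).

```python
# Invented example data: each measure comes from the listeners (step 1).
# For each measure, two proportions: listeners rating it "very important",
# and listeners agreeing strongly that the network delivers it (steps 2 and 3).
measures = {
    "classical and serious music": {"important": 0.75, "strong_agree": 0.30},
    "knowledgeable presenters":    {"important": 0.60, "strong_agree": 0.55},
    "few interruptions":           {"important": 0.50, "strong_agree": 0.20},
    "clear reception":             {"important": 0.40, "strong_agree": 0.65},
}

# Rank measures by the gap between importance and strong agreement;
# the largest gaps are the priorities for the improvement process (step 4).
gaps = sorted(
    ((m["important"] - m["strong_agree"], name) for name, m in measures.items()),
    reverse=True,
)

print("Priorities for improvement (largest gap first):")
for gap, name in gaps:
    print(f"  {name}: gap {gap:+.0%}")
```

The point of the ranking is not the exact numbers but the change of direction: the agenda is set by what listeners say matters, rather than by how well the network scores against its own mission statement.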

In retrospect, the solution seems simple. But it isn't - because it involves convincing managers that listeners' (or customers') views are more important than their mission statements. If the goal of the organization is to make as much profit as possible, managers may be willing to accept this. But in a nonprofit organization or public broadcaster, many managers will not accept that listeners (as a group) are better informed than the managers (as a group). Some managers fear that by accepting the values of listeners, they would be pandering to the "lowest common denominator", and program standards would decline.

- Dennis List