Qualitative mail survey

Case study 2

During the late 1980s and early 1990s, I managed listeners' panels for two of the Australian Broadcasting Corporation's radio networks: one playing mostly classical music, the other carrying a wide mixture of mostly spoken programs aimed at people with above-average education. Both stations had minority audiences: around 6% of the population in the areas they served listened at least once a week, and around 2% listened to the station on most days.

When you're surveying reactions to specific programs on stations with such small audiences, surveys can be very expensive. It always seems unfair, but (other things being equal) the smaller a station's audience, the more it costs to survey. This is compounded by the fact that (other things being equal) the smaller a station's audience, the less money it has available for research.

For example, suppose the minimum sample size you need to comment on a specific program is 20 people (and that's cutting it fine). For the most popular station imaginable, one that everybody listens to all the time, you would only need to survey 20 people in total. But for a station listened to by only 2% of the population, you need to question 1000 people, of whom only 2% (i.e. 20) will be able to answer questions about the station.
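To make that arithmetic explicit, here is a minimal sketch in Python (the function name and figures are illustrative, not part of the original study): the total sample you need is the number of listeners you want to question, divided by the proportion of the population who listen to the station.

import math

def total_sample_needed(min_listeners, reach):
    # People to survey so that, on average, at least `min_listeners`
    # of them turn out to be listeners of the station.
    # `reach` is the proportion of the population who listen (e.g. 0.02 for 2%).
    return math.ceil(min_listeners / reach)

print(total_sample_needed(20, 1.00))  # everybody listens: survey 20 people
print(total_sample_needed(20, 0.02))  # 2% listen: survey 1000 people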

Faced with such cost pressures, we realized that:
- for both networks, the audiences were made up of very keen listeners;
- given those listeners' level of education, they would have no trouble completing mail questionnaires.

So we decided to set up a listener panel for each station: several thousand people around Australia, who'd be willing to complete questionnaires about programs on their favourite station. We advertised on air, deliberately choosing off-peak times. We knew from other surveys that these times were when the most regular listeners listened - many of the listeners at peak times listened only at peak times, and would not have been able to answer questions about programs broadcast outside peak times.

In Australia, as in most other countries, peak times for radio listening are usually meal times - e.g. from 6am to 8am, 12 noon to 2pm, and 4pm to 6pm.

So we kept advertising on air, scattering the advertisements for panel members carefully across all the off-peak times, until we had enough applications. The arrangement was that we'd mail out questionnaires every few months, and include a reply-paid envelope. It would cost the panel members nothing to belong to the panel; all we asked of them was to complete the questionnaires they were sent, and mail them back within a week of receiving them.

The system worked very well. We achieved excellent response rates, around 80% to 85% before the cutoff date. (Mail questionnaires always trickle in for months, and at some stage you have to say "Enough!")

The problem (as top management saw it) was that these listeners liked the stations too much. The management people, who were replaced about once a year on average, were always itchy for change, always thinking "What about the other 94% who don't listen to this station? What would it take to make them listen?"

The station staff, like the listeners, tended to favour the status quo. After all, the panel members liked the programs, and everybody (except top management, with their endless, fruitless quest for newer, better listeners) was happy. We tried to tell them that the listeners they already had were the best listeners they'd ever have, but this advice generally did not go down well.

There was one clear ground for attacking the sample: this was not a random sample of all listeners, but a sample of volunteers. So after a year or two, we changed the recruitment method: in all the other surveys we did, we asked about listening to these two networks. Every time we found a listener, we asked that person if they'd be willing to be a panel member. About half of them said Yes, and were signed up on the spot.

This recruitment method brought in a lot of people who listened to the networks only occasionally, and were not much use when it came to answering questions about specific programs. They also dropped out of the panel much more quickly.

In each survey, we compared the volunteers with the more typical listeners recruited in random surveys. They tended to differ a lot in behaviour (i.e. the "typical" listeners spent a lot less time with the station) but not much at all in their opinions of specific programs.

An interesting part of these surveys was their qualitative emphasis. Much of the space in each questionnaire (an A3 sheet, folded into a four-page A4 booklet) was taken up by dotted lines for answering open-ended questions. Typically, we'd ask questions in pairs, like this:

 

What's your opinion of Program X?
[0] Have not heard this program
[1] Excellent
[2] Very good
[3] Good
[4] Fair
[5] Poor

Comments on Program X: ........................................................
........................................................
........................................................

 

We could then report the results by tabulating the numbers of listeners with each response to each program, and relating the responses to the comments - grouping the comments for those who thought the program was excellent, those who thought it was poor, and so on.
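To illustrate that reporting format, here is a small sketch in Python (hypothetical data and names, not taken from the actual surveys): it counts how many listeners gave each rating to a program, then groups the open-ended comments by the rating each respondent gave.

from collections import Counter, defaultdict

# Each response: (program, rating, comment); rating 0 means "have not heard".
responses = [
    ("Program X", 1, "The best hour of my week."),
    ("Program X", 4, "Too much talk, not enough music."),
    ("Program X", 1, "Keep the current presenter."),
    ("Program X", 0, ""),
]

rating_labels = {0: "Have not heard", 1: "Excellent", 2: "Very good",
                 3: "Good", 4: "Fair", 5: "Poor"}

counts = Counter(rating for _, rating, _ in responses)

comments_by_rating = defaultdict(list)
for _, rating, comment in responses:
    if comment:
        comments_by_rating[rating].append(comment)

for rating, label in sorted(rating_labels.items()):
    print(f"{label}: {counts.get(rating, 0)}")
    for comment in comments_by_rating.get(rating, []):
        print(f"  - {comment}")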

Everybody seemed to like this reporting format. Like broadcasters in most other countries, ours were highly literate, but usually not very comfortable with numbers. Many were ex-journalists; very few were ex-accountants.

However, after about five years, the panels were no longer funded. Management's perception was that if all the listeners seemed to like all the programs, why bother to keep doing surveys? In fact, the audiences held up for a year or two (as shown by continuing surveys that estimated audience size), then tended to fall away.

If you enjoyed this article, why not explore our uniquely comprehensive and practical book on how to do audience research, Know your Audience - A Practical Guide to Media Research?