Following huge cuts to the Australian Broadcasting Corporation's government funding in 1997, we were constantly looking for new, low-cost ways of surveying audiences to provide useful information to program-makers. I came up with the concept of a phone-in panel, which we tried out with an Adelaide radio station.
The panel members were recruited from an earlier random survey, which had found hundreds of listeners to the radio station. We recontacted these listeners and invited them to join our new panel. Most agreed.
For each listener we collected some basic information to help interpret their radio listening: gender, age group, home area, and occupation.
Each one of them was given a 3-digit serial number. Our instructions to them were:
Whenever you're listening to this station, and you hear something that you really like, dislike, or just feel like commenting on, dial our free-call number, quote your serial number (or your name, if you can't remember the serial number), and record a message on our answering machine. Every time you leave a message, you'll be in a draw for a small prize, such as a T-shirt, or a coffee mug with the station logo.
And that's about all there was to it. Every morning my assistant would check the answering machine and transcribe the comments. It was a fairly elaborate answering machine, which recorded the date and time of each message in a computerized voice. We were also able to limit the length of a recorded message, and we set this at one minute, to encourage panellists to keep their comments to the point. (If they wanted to speak for longer, we told them to ring back and continue. Few did.)
So my assistant entered the person's serial number, the date and time, and the comment. Comments were categorized as positive, negative, or "other".
I wrote a simple computer program that replaced the serial numbers with the personal details. A typical comment looked like this:
Man 55+ Northern suburbs Professional
@@@ Weekday-breakfast Weekday-morning Weekend-morning NOS
Tue 15 March 7:55am + I thought Rex did an amazing interview this morning, worming the answers out of that politician the way he did.
The first line describes the listener; the second shows his listening habits. We defined the @ symbols as ears: a 3-ear listener spent an above-average amount of time listening to the station, mainly in the time zones shown. The time zones are self-explanatory; NOS means No Other Station listened to regularly. The third line is about the comment itself: the date and time, and a symbol showing whether the comment was categorized as positive, negative, or other.
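The serial-number replacement described above can be sketched as a simple lookup-and-format step. This is only an illustration, not the original program: the dictionary contents, the field names, and the serial number "312" are all invented for the example.

```python
# Sketch of replacing a panellist's serial number with personal details.
# The panellist record and field names are assumptions for illustration.

panellists = {
    "312": {
        "profile": "Man 55+ Northern suburbs Professional",
        "ears": "@@@",  # number of "ears" = relative time spent listening
        "habits": "Weekday-breakfast Weekday-morning Weekend-morning NOS",
    },
}

def format_comment(serial, when, category, text):
    """Lay out the three-line record: listener details,
    listening habits, then the date/time, category symbol, and comment."""
    p = panellists[serial]
    return "\n".join([
        p["profile"],
        f'{p["ears"]} {p["habits"]}',
        f"{when} {category} {text}",
    ])

print(format_comment("312", "Tue 15 March 7:55am", "+",
                     "I thought Rex did an amazing interview this morning."))
```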
When the day's comments had been transcribed, they were emailed to the station manager and producers. Every two weeks we produced a summary of comments, showing how many positive, negative, and "other" comments had been made for each program.
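The fortnightly summary amounts to a simple tally of comment categories per program. A minimal sketch, with invented program names and comment counts:

```python
from collections import Counter

# Hypothetical fortnight of categorized comments: (program, category) pairs.
# The programs and counts are made up for illustration.
comments = [
    ("Breakfast", "+"), ("Breakfast", "+"), ("Breakfast", "-"),
    ("Morning", "other"), ("Morning", "+"),
]

summary = Counter(comments)
for program in sorted({p for p, _ in comments}):
    pos = summary[(program, "+")]
    neg = summary[(program, "-")]
    other = summary[(program, "other")]
    print(f"{program}: {pos} positive, {neg} negative, {other} other")
```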
The panel was very cheap to run, and the station management found it helpful. The only problem was that our sample was probably too small: we simply didn't get enough comments in. Perhaps we should have offered better incentives to make comments, and/or we should have recruited a much larger panel.
Why is this better than simply letting people contact the station, without having a panel?
Mainly because people who spontaneously contact a station are usually in some special position: perhaps they know somebody on the station staff, perhaps they have some axe to grind, perhaps they are just bored. Whatever the reason, it's not safe to regard them as typical listeners. The important element of this panel was the random selection. Because the participants had been originally selected at random from the whole population, we knew that (in aggregate) they had to be typical.
I can recommend this method to any station wanting to monitor its audience's reactions to its programs. It would be difficult to imagine a cheaper or simpler research method than this.