Know Your Audience: chapter 12
Monitoring and response cultivation

It's strange that media organizations, whose business is communication, often have so little knowledge of their audiences. Sometimes it springs from an arrogant "We know best" attitude, but for very small organizations, it's often because they can't afford to organize regular surveys.

After considering this problem for years, I eventually came up with a new type of solution. It's not quite research, not quite monitoring in the usual sense, though similar. I call it "audience response cultivation".

Suppose you're running a radio station. How do you know how many listeners you have? How do you know what their tastes and opinions are? How do you know what sorts of programming they prefer?

If you never do audience research, you probably have some hazy idea of what your listeners are like, based on those you happen to meet. But people who know the station staff are usually far from typical, and (if there are several radio stations serving the area) they're likely to be much more frequent users of your station than the listeners you never meet.

What you need is some systematic way of finding out more about your listeners. The best way is to commission high-quality research. But what if you can't afford that?

This is where response cultivation can be used. The principle is simple: basic data collection, if done systematically, can be more useful than elaborate research done very occasionally.

Data is arriving constantly. If it can be filtered, organized, and viewed appropriately, spontaneous data can be a reasonable substitute for a survey.

Response cultivation in a nutshell

Response cultivation has three main activities:

(1) Encouraging audience contact, in a wide variety of ways;

(2) Keeping records of that contact;

(3) Regularly reviewing the records, so that you can work out the relationship between changes in your programming and changes in your response patterns.

1. Encouraging audience contact

The following examples are specially directed at radio stations, because it is for these that the problem of audience contact is most acute. Radio stations often operate with very small budgets, and in the normal course of events receive very little listener feedback. And because so little feedback is received, it's almost inevitable that the listeners who do contact the station will not be typical of all listeners.

Audience response techniques are used to create a flow of inquiries large enough that the inquirers can be regarded (almost) as typical listeners.

The main purpose of encouraging listener contact (from the researcher's point of view, at least) is to create a steady flow of feedback from listeners. But there are usually other benefits too, to offset the increased time spent dealing with listeners. Other things being equal (as demonstrated below), more contacts generally lead to more listeners.

Stations which don't do surveys usually rely on two sources of feedback: people who spontaneously contact the station, and the friends and relatives of station staff.

Neither source is a good indicator of what the bulk of listeners are thinking. People who spontaneously contact a station often do so because they are upset about something, or want a favour. And the friends and relatives of station staff (though staff often refuse to believe this) are usually not a cross-section of society.

Here's an exercise that I've found helpful when dealing with staff who think their friends are representative of all listeners. Ask them to name 10 people who've spoken to them about the programs. Ask how many are men, and how many are women. Ask how many are in each broad age group (e.g. under 20, 20 to 40, 40 to 60, over 60). Ask about their education and their occupations. Having obtained this information for 10 people, compare it with Census data (or, better still, a survey of the station's listeners). Note the large differences between these 10 "representative" people and the local population (or the station's listeners). Case proven.

The solution to the problem of unrepresentative feedback is to encourage more feedback: give listeners reasons to contact the station, apart from making a complaint. Some steps you can take are described below.

The most effective ways of increasing audience contact are those which are done in the course of programming. For radio stations, two of the most common, and most effective, are musical request sessions and talkback (phone-in) programs. Both types of program can be extremely popular, attracting much larger audiences than other programs.

A few years ago I was in Papua New Guinea, examining the possibilities for audience research for the National Broadcasting Corporation's radio stations. Everything seemed to be against the possibility of audience research: the budget was low, costs were high, the literacy rate was low, roads in many provinces were nonexistent, less than 1% of people had a telephone - and so on.

But in one province, with a population of 30,000, the local radio station received in the mail an average of 1,200 musical requests a month - equivalent to half the population over a year. Though of course some listeners would have sent in a number of requests, many letters had multiple signatures, often as many as six.

By taking a sample of the letters, plotting their origins on a map, and comparing this with census data, we were able to find out where the listeners were most concentrated. In the absence of a systematic survey, the resulting information seemed quite credible, although we had nothing (except Census data) to cross-check it against.

When you're doing an exercise like this, making assumptions from imperfect data, you always need to be alert to possible reasons for the data being wrong or misleading.

In this case, we had to consider whether there might be a reason why listeners in some districts would be more likely than listeners in other districts to send in requests. We also had to consider whether the addresses given on the letters were actually where the people lived, or whether they were mailing addresses in towns some distance from their villages.

But you have to do this with surveys, too. Even a rigorous survey can fail to ask a question which in hindsight was essential, and we are left to make whatever we can of imperfect information. All survey information is imperfect: the only advantage of random sampling over response cultivation is that we can calculate how imperfect the results are likely to be (not that that's much help, very often).

The principle of response cultivation is to make the most of imperfect information, by encouraging its flow from a wide variety of sources. In research terms, this is known as triangulation.

This is an analogy taken from land surveying. Think of it, in this context, as gathering information in three different ways and plotting the three results on a graph. If you draw a triangle connecting the three, the truth lies somewhere inside that triangle. Of course, you're not limited to three types of approach: the more, the better.

How many responses do you need?

In surveys, a large sample size doesn't guarantee accuracy. There's a famous example: an American magazine, the Literary Digest, tried to predict the winner of the 1936 US presidential election. More than two million responses came back - but the prediction was wrong, because the magazine's readership was biased toward Republican voters.

It's the same with response cultivation. You don't need to collect a vast volume of data. But you do need to ensure that you use a multiplicity of sources. If you have fewer than about 20 contacts from any one source, there's a fairly high risk that atypical responses may unbalance the results. If you decide to summarize responses every month, and you have fewer than 20 per month, you're likely to find large apparent variations from one month to the next. If the programs haven't substantially changed, most of these variations are likely to be random noise, or sampling error. If you can manage to get a minimum of, say, 100 responses a month, you'll find that many of these unexplained variations will disappear.

Above 1000 or so responses in each reporting period (monthly, or whatever), sampling fluctuation becomes relatively small. Thus samples of more than 1000 are usually not worth pursuing. (But if something is being automatically counted anyway, with no additional effort, it may be easier to accept the existing numbers rather than work out a way of taking a sub-sample.)
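To see why these thresholds matter, here's a minimal simulation sketch in Python, using hypothetical figures (a true preference level of 40% among the people who respond). It simply shows how far an observed monthly percentage can wander by chance alone at different response volumes.

```python
import random

# Hypothetical assumption: 40% of the people who contact the station each
# month actually prefer the breakfast program. We simulate 12 months of
# responses at several monthly volumes, to see how much the observed
# percentage bounces around purely by chance.
TRUE_PROPORTION = 0.40

def simulate_year(monthly_responses, months=12):
    """Return the observed percentage for each simulated month."""
    results = []
    for _ in range(months):
        hits = sum(random.random() < TRUE_PROPORTION
                   for _ in range(monthly_responses))
        results.append(100.0 * hits / monthly_responses)
    return results

for n in (20, 100, 1000):
    observed = simulate_year(n)
    print(f"{n:5d} responses/month: "
          f"observed range {min(observed):.0f}% to {max(observed):.0f}%")
```

With 20 responses a month the observed figure typically swings by 20 points or more; with 1000 it barely moves, which is why chasing larger samples adds little.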

2. Keeping records of audience contact

There seems to be a strong human tendency to think you know something, without actually keeping a record of it. People can be very surprised at how wrong their assumptions are - and often won't admit it, unless shown records that can't be disputed. Memories are short, too. The solution is to record audience-related figures.

This task need not be onerous, and need not be continuous. One week in every three months may be enough for most radio stations. Anything more frequent would be a lot of work, and would probably only show that audiences were much the same each week.

For TV stations, depending on the regularity of the programs, audiences could vary much more. But TV stations usually have much larger staffs than radio stations, and can more easily afford for somebody to track viewer contacts.

An example:

The person who answers the telephone can keep a one-page chart of telephone calls. The sheet of paper could have a weekly box-chart, with one line per hour, and one column per day of the week. It could look like this (assuming the phone is answered between 6am and 6pm):

Hour from... Mon Tue Wed Thu Fri Sat Sun
6 am              
7 am              
8 am              
9 am              
10 am              
11 am              
12 noon              
1 pm              
2 pm              
3 pm              
4 pm              
5 pm              

Every time somebody calls, the phone-answerer asks "Were you listening to our station just now?" If the answer is Yes, a mark is placed in the space for that hour, that day. At the end of the week, the number of marks in each space gives an indication of how the audience size varies through the week. If the same pattern appeared for several weeks in a row, and was consistent across days and across hours, you'd begin to have some confidence that this was the true pattern of listening - for people who ring the station for some reason.
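If the chart is kept on a computer rather than on paper, the same tally can be held in a small script. The sketch below uses hypothetical names (it is not part of the method as originally described): it records one mark per "Yes" answer and prints the week's chart.

```python
from collections import defaultdict

# One mark per "Yes" answer to "Were you listening to our station just now?"
tally = defaultdict(int)   # keys are (day, hour), values are counts of marks

def record_call(day, hour, was_listening):
    """Record one incoming call; only 'Yes' answers get a mark."""
    if was_listening:
        tally[(day, hour)] += 1

# Example of use during the week:
record_call("Mon", 7, True)
record_call("Mon", 7, True)
record_call("Tue", 12, False)   # caller wasn't listening: no mark

# At the end of the week, print the chart: one row per hour, one column per day.
days = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]
print("Hour  " + "  ".join(days))
for hour in range(6, 18):        # phone answered 6am to 6pm
    row = [f"{tally[(day, hour)]:3d}" for day in days]
    print(f"{hour:4d}  " + "  ".join(row))
```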

But if only a tiny proportion of listeners ring the station, they may not be representative of all listeners. So, to be able to make a stronger conclusion, supplement this source of information (people who spontaneously telephone) with others. For example, people who visit the station could be asked "In which hours yesterday did you listen to this station?" This information can be recorded on the same type of chart.

The key to interpretation of audience response data is:

Never rely on a single data source: it's probably biased in some way.

To be safe, use at least three separate data sources. If all three agree (or even if two do) you can begin to have confidence that they are representative.

To continue this example, we were using:

(1) people who telephone the station

(2) people who visit the station.

So to tap a different group of listeners (those who would not normally contact the station), you could add:

(3) each day, telephone 20 people on a list of known listeners, and ask them which hours they normally listen to the station on each day of the week.

Plot the three sets of figures on a single graph, using different coloured lines. If the three lines practically overlap, you'd be reasonably safe in concluding that this is the station's pattern of listening. If one line followed a very different path from the other two, you'd want to consider why this could be.
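As one way of drawing that graph, here is a short sketch using Python and matplotlib. All the figures are invented purely for illustration; the point is simply to overlay the three sources so that agreement or divergence is visible at a glance.

```python
import matplotlib.pyplot as plt

# Hypothetical counts of people reporting they were listening, by hour.
hours = list(range(6, 18))                                  # 6am to 5pm
phone_callers    = [2, 5, 9, 7, 6, 4, 5, 3, 2, 2, 1, 1]
station_visitors = [1, 4, 8, 8, 5, 5, 4, 3, 3, 2, 2, 1]
phoned_listeners = [3, 6, 10, 8, 6, 5, 4, 3, 2, 2, 1, 1]

plt.plot(hours, phone_callers, color="red", label="Callers to the station")
plt.plot(hours, station_visitors, color="blue", label="Visitors to the station")
plt.plot(hours, phoned_listeners, color="green", label="Known listeners phoned")
plt.xlabel("Hour of day")
plt.ylabel("Number reporting they were listening")
plt.legend()
plt.show()
```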

For example, you might find that the second line (for the people who visited the station) peaked much later in the day than the other two. Could this be because you asked them the hour when they last listened? If all listeners tuned in from 6pm to 10pm daily, and the office was open until 6pm, everybody questioned would have last listened between 9pm and 10pm. This is why it's best to ask them for all the hours they listened on the previous day (or previous week, if most people don't listen much - and they have good memories).

There's no real need to repeat this exercise constantly, as long as you have no reason to believe the audience size fluctuates greatly from week to week. Radio audiences are usually quite stable (because radio listening becomes a habit), so to repeat this exercise for one week in each season would usually be quite enough.

Another secret of monitoring is to plot effects (i.e. audience size or appreciation level) at the same time as their potential causes. When I worked for the Australian Broadcasting Corporation, we kept radio and TV audience survey reports for many years. We could have used these old reports to relate changes in programming with changes in audience. The problem was that nobody kept detailed records of the changes in programming. Radio and TV stations - perhaps because they put out such a vast volume of programming, or perhaps because they are so focused on the present time - seem not to keep accurate records of their output.

A more successful analysis was one I did for an orchestra, relating ticket sales for each concert to the soloists and the music for that concert. Some tickets were bought by people who subscribed to all concerts, so I considered only non-subscription tickets, separating door sales from bookings. I took into account rainfall records (going back for 5 years or so) for a few hours before each concert, some indicator of the fame of the soloists, the concert venue, the music played, and various other matters.

After laboriously collecting the data, I did a statistical analysis. I seem to remember the results showed that narrower ranges of music led to higher ticket sales, but the relevant point is that all the information was already available (though the Weather Bureau hated me for putting them to so much trouble!). But if only somebody had timed the length of the applause for each concert, I might have been able to make better predictions still.
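For readers who want to try a similar analysis, here is a bare-bones sketch of the idea using ordinary least squares in Python. The figures and the three input variables are invented for illustration; the original analysis used many more variables and many more concerts.

```python
import numpy as np

# Invented data: one row per concert.
# Columns: rainfall before the concert (mm), a 0-10 rating of the soloist's
# fame, and whether the program was "narrow" (1) or mixed (0).
inputs = np.array([
    [0.0, 8, 1],
    [5.0, 3, 0],
    [1.0, 6, 1],
    [12.0, 7, 0],
    [0.0, 2, 1],
    [3.0, 9, 0],
])
ticket_sales = np.array([420, 180, 350, 260, 300, 310])   # non-subscription sales

# Ordinary least squares: add a constant column, then solve for coefficients.
X = np.column_stack([np.ones(len(inputs)), inputs])
coefficients, *_ = np.linalg.lstsq(X, ticket_sales, rcond=None)

for name, value in zip(["constant", "rainfall", "soloist fame", "narrow program"],
                       coefficients):
    print(f"{name:15s} {value:8.1f}")
```

The sign and size of each coefficient suggest which inputs are associated with higher or lower sales, once the others are held constant.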

In the case of concerts, which might be held only every few weeks, it could take years to collect a large enough sample for safe generalizations. Conclusion: if you're not already collecting such information, and keeping it together in a way enabling easy analysis - start now! If you don't, you could regret it in a few years' time.

The purpose is always to gather output information (e.g. audience size, ticket sales) and relate it to the inputs (e.g. programs broadcast, or music played).

Databases

With the increased availability of computers, there's a strong temptation for small organizations to decide to keep a database of all their listeners, users, customers, contacts, or whatever. It seems like a good idea, doesn't it, to record the details of everybody who contacts you in a database - whether on a computer, on index cards, or in an exercise book. But consider the drawbacks.

I've organized surveys of radio listeners using station-supplied databases (e.g. phone numbers of people who entered competitions) and have usually found so many problems - e.g. with out of date information, duplications, non-listeners, and just plain mistakes - that it would have been cheaper and more accurate to do a random survey of the local population.

Databases are most useful when:

(A) the people form a small and/or hard to find segment of the population, and

(B) they continue to do the activity (e.g. listen to the same radio station) for at least several years, and

(C) there's regular contact with the people on the database, so that it can be kept up to date.

An example of a useful database is one of customers of an art gallery who are interested in buying the works of particular artists. Profit margins are high enough and the number of customers is small enough to defray the costs of regular mail-outs, which in turn prompt people to keep their details up to date. The customers tend to be fairly stable, and remain on the mailing list for more than five years, so there's no great burden of updating work. And the total number on the database, at about 1,000, is manageable with a normal PC.

The cost of managing a database rises more than proportionally to its size: a database of 10,000 names seems to cost a lot more than twice as much as one with 5,000 names. You'd expect economies of scale, but my experience has been the opposite.

Databases tend to decay (if not updated) at about 25% a year. So after a year, 75% of entries are still correct. Another year later, 75% of 75% (about 56%) are correct. After the third year, 75% of 75% of 75% (about 42%) are right. And the people who disappear cannot be considered typical of the whole population.
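The arithmetic is simple compounding, as this tiny sketch shows (assuming a constant 25% of entries going out of date each year):

```python
# Compounding decay of a database, assuming 25% of entries go stale each year.
decay_rate = 0.25
correct = 1.0
for year in range(1, 6):
    correct *= (1 - decay_rate)
    print(f"After year {year}: about {correct:.0%} of entries still correct")
```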

Finally, I should point out that computer databases, even using modern Windows or Macintosh systems, can be surprisingly complex to create and maintain. All sorts of horrible things can go wrong if concentration is relaxed at the wrong time. At the worst, all the data can become unusable. You can end up with several different versions, and not know which is correct (usually, each is partially correct). If you're not an expert at using the database software, it can be frustratingly difficult to summarize the data in a usable form.

Have I put you off databases yet? My conclusion is that it's generally not a good idea for radio stations to try to keep databases of their listeners. But if the station serves a very small and specialized audience, and they're subscribers (i.e. a source of revenue) rather than listeners, the whole situation changes.

3. Using audience response records

I recommend the use of graphs. Most people can intuitively interpret a graph much more easily than a string of figures. A graph can be set up on a wall, in a place where many staff will see it. Each day, a new point (or set of points) can be plotted on the graph. There's nothing as good as regular measurement for improving almost any process.

One way to get staff interested is to hold a small lottery on the figures. When I was working with the New Zealand Broadcasting Corporation, the research staff used to place small bets on the weekly TV audience figures. We found that investing some money in your guesses has a great effect on improving their accuracy. Over a year or two, we all greatly improved our skills in estimating audience reactions to programs, based on the programs' content.

In most countries, radio listening is very habit-bound. People tend to listen at particular times of the day, or particular days of the week, and there's a clear cyclic pattern. The listeners at 6 a.m. and the listeners at noon may be two quite different sets of people, with hardly anybody listening at both times.

When the listening is habit-bound, it's often because the programs follow the same pattern.

Perhaps the main goal in using audience contact data is to relate your output to audience feedback. If you suddenly get a lot more feedback, does this mean your audience has increased?

For a radio or TV station that broadcasts a varied program, it can be very difficult to relate programs to feedback. It's probably hardest for a radio station, because TV output tends to be made up of specific individual programs, while radio stations usually have less clear distinctions between programs (in the audience's minds, if not the programmers').

How to reconcile disagreement

When equally large sections of the audience disagree about an issue, you can use consensus groups to find out exactly where the disagreements lie, and whether disagreements are real. Often, when people are disagreeing about an issue, the disagreement turns out to be spurious, because the opposing people are actually arguing for and against different things. Very often, it's possible for one solution to satisfy almost everybody - if it's properly chosen.

But occasionally that's not possible, because what half the audience likes is exactly the same thing that the other half dislikes. In that case, the best solution for a radio station may be to use dayparting - which means scheduling different kinds of program, at different times of the day. Eventually, listeners will learn that the programs vary, and the station's reach can be much larger than it would otherwise be. But for this to work, the schedule must be very simple, and you have to keep doing it for a long time before the audiences get used to it - probably at least a year, in markets with more than about 5 stations.

Experimenting with program changes

When you have a monitoring system working well, so that you are able to draw a graph showing the daily reaction of listeners to the programs, you can experiment. This works well with most types of audience - except radio audiences.

Most of this article has used radio stations as an example, but radio stations are not easily experimented with. This is mainly because many listeners don't pay a lot of attention to programs, and it may take them months to realize something has changed - then more months to change their listening habits. A one-year gap between a program change and a matching audience change is not at all unusual.

For other types of audience (print, TV, live performances, and the internet), reactions are much quicker.

When it's not safe to experiment

If your market share is small, and your service isn't unique, it's not a good idea to experiment with your market presence. For example, a radio station that broadcast on short wave to Cambodia changed its frequency without much warning. Listeners knew that the station was facing budget cuts, and assumed it had gone off air. In fact, it was still there, but few people could be bothered searching for it - there were plenty of other stations available.

The frequency change wasn't exactly an experiment - but it could have been. If the station had been the only one available, listeners would (eventually) have found it, but as there were many alternatives, nobody bothered - even though they lamented the loss of their former station.

Advantages and disadvantages of response cultivation

The main disadvantage of response cultivation is that it takes up time. Stations that encourage audience contact tend to have lots of visitors, and phones which ring all the time. So encouraging audience response can use up time that would otherwise be spent making programs. Obviously a balance must be struck - and the way to do it is to keep track of just how much staff time is spent on audience contact, how much purely on program making, and how much on both.

Some media outlets I've worked with are worried about having their audience contact them. "But we're understaffed," they say. "We don't have time to deal with our audience."

My experience is that you don't have time not to. An organization that ignores its audience will usually - very gradually - lose it.

The main advantage of encouraging audience response is that it tends to increase the audience.

An interesting example is the Australian Broadcasting Corporation (for which I worked for many years). It has about 50 local radio stations, all over Australia, all broadcasting much the same program. But the audiences in each region vary tremendously in size. Trying to find out the reasons for these variations, I used the statistical technique of multiple regression analysis. After controlling for factors such as population size and the number of radio stations available in the area, the major remaining factor seemed to be a station's closeness to its community, and the level of contact between the local station and the local people. So, when other things were equal, the more contact a station had with listeners, the larger its audience share.

Thus encouraging audience response has its rewards, in the form of larger audiences. To put it another way: when existing listeners are better known, new listeners turn up. This seems to happen even when there is no formal program for using the results of audience response.