Television audiences are measured in two main ways: using diaries, and using meters. With television (unlike radio), the survey unit has usually been the household, not the person. That's because most households have had only one TV set, and people have usually watched together. In western countries, that's beginning to change, but the mainstay of TV diary or meter results is still the "rating" - the percentage of all households that viewed a particular program.
A diary survey is done by choosing a random sample of households, and sending interviewers to visit those households. When a household agrees to co-operate in the survey (with co-operation rates ranging from about 30% of contacted households in rich western countries, up to about 95% in developing countries), the interviewer usually leaves one diary for each TV set in the household. This is different from radio surveys, which use one diary per person - because people usually watch TV in groups. A diary normally runs for one week or two weeks. Often there are several "practice days" at the beginning that are not used to generate statistics.
Television diaries are similar to radio diaries. The main difference is that, while there is one radio diary for each person in a household, there is usually one TV diary for each TV set. The idea is that the diary is placed on top of the TV set, stays there for a week, and whoever watches a program on that set fills in the diary to show what channels they watched, at what times.
Each double-page opening of the diary usually has a large table. The rows show all the quarter-hours of the day, while there is one column for each TV channel in the survey area. People indicate their viewing by ticking the box for the channel they watched, during each quarter hour. Such a diary doesn't show which people in the household were watching: the tick only means that somebody was watching. Another way of doing this is to enter in the box for the channel and quarter-hour not a tick but a number showing how many people were watching. A still more elaborate way is to write the initial of each viewer in the box. On the front page of the diary is recorded the fact that (say) person A is a man aged 35-44, B is a woman aged 25-34, and so on. Though this sounds simple enough, when I organized this type of diary survey the results were rather messy. People didn't try very hard to co-operate.
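The tick-per-quarter-hour grid described above can be tallied into household ratings roughly as follows. This is a minimal Python sketch: the channel names, the data layout, and the `household_ratings` helper are illustrative assumptions, not part of any real ratings system.

```python
def household_ratings(diaries, channel, quarter_hour):
    """Percentage of sample households whose set-diary has a tick for
    `channel` in `quarter_hour`. Each diary is a dict mapping
    (quarter_hour, channel) -> True for a ticked box."""
    ticked = sum(1 for d in diaries if d.get((quarter_hour, channel)))
    return 100.0 * ticked / len(diaries)

# Three sample households; only ticked boxes are recorded.
sample = [
    {("19:30", "TV1"): True},   # someone watched TV1 at 19:30
    {("19:30", "TV2"): True},   # someone watched TV2 at 19:30
    {},                         # set off all evening
]
print(round(household_ratings(sample, "TV1", "19:30"), 1))  # -> 33.3
```

Note that, as the text says, a tick only means *somebody* in the household was watching; this tally gives household ratings, not people ratings.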
You can get much more accurate data from telephone (or personal interview) diaries, but people can only remember their viewing for a few days. This is usually a much more expensive way to do a diary survey, but if distances are large and almost everybody has a telephone (as in Australia) it can be suitable for surveys of rural audiences.
Obviously it would be very convenient if the program titles could be printed in the diary, as well as the times. In New Zealand, when there were only 2 channels, we experimented with diaries showing programs as well as times. By comparing data from diaries that showed only times and data from diaries that also listed programs, it was clear that the program-based diary was more accurate. However, we didn't use the program-based diaries after the trial, because of problems of (a) finding out the program times far enough in advance to print and distribute the diaries, and (b) TV channels changing their advertised schedules.
Unlike a diary survey, where the respondents are different each week (or each two weeks, for a 2-week diary) meter surveys use panels of people for months at a time - anything from 6 months to 2 years. That's because of the expense of installing meters. When a household agrees to co-operate (usually for some reward, such as guaranteed maintenance for their TV set), a technician comes to the home and wires a meter to each TV set. In countries where most homes have a connected telephone, the meter is also connected to the phone line. The meter automatically records the channel the TV set is tuned to, minute by minute. In the early hours of the morning, the research company's computer automatically dials the meter, which sends that household's viewing data for the previous night. This is done in sets of 3 numbers:
channel number - starting time - stopping time
...repeated as many times as different channels were switched on.
After ringing all the households in the sample (often 300 to 400 per city) the computer has all the previous night's data, and software automatically calculates the percentage of homes watching each channel at each time. Buyers of the meter data - TV channel owners, advertising agencies, and large advertisers - are then sent the previous night's viewing figures by fax or email.
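The overnight calculation can be sketched from those (channel, starting time, stopping time) triples. In this Python sketch, times are minutes since midnight, and the exact data format is an assumption based on the description above.

```python
def minute_ratings(households, channel, minute):
    """Percentage of sample households tuned to `channel` at `minute`.
    Each household is a list of (channel, start, stop) triples, with
    times in minutes since midnight."""
    tuned = sum(
        1
        for triples in households
        for ch, start, stop in triples
        if ch == channel and start <= minute < stop
    )
    return 100.0 * tuned / len(households)

# Two metered households' overnight data:
sample = [
    [(2, 18 * 60, 20 * 60), (7, 20 * 60, 22 * 60)],  # ch 2, then ch 7
    [(2, 19 * 60, 21 * 60)],                         # ch 2 only
]
print(minute_ratings(sample, 2, 19 * 60 + 30))  # -> 100.0 (both on ch 2)
print(minute_ratings(sample, 7, 20 * 60 + 30))  # -> 50.0
```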
That's the simplest version, using "set-meters". But most countries now use "peoplemeters". As well as recording which channels each TV set was tuned to, at which times, a peoplemeter asks the people living in the household to indicate their presence while watching TV. Typically, the peoplemeter sits on top of the TV set. A common type has 8 lights on its front, numbered 1 to 8. The meter has its own remote control with 8 buttons: one for each person in the household, and the rest for their occasional guests. So when button 1 is pressed, that tells the meter that (say) a man aged between 35 and 44 is watching. Guests are prompted to enter their gender and age group.
When the TV set is switched on, all the lights start flashing. A new model flashes up the message "Who is present?" As this is annoying for the viewers, they are likely to press their personal buttons to stop the flashing. When the TV set is on, and nobody has pressed a button for about 45 minutes, all the lights start flashing again. If nobody then presses a personal button, the meter assumes they're all out of the room, and doesn't record any viewing. But if at least one person presses a button, the meter keeps recording that viewing.
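The prompting rule above amounts to a simple timeout. Here is a toy Python sketch of it: the 45-minute interval and the stop-recording behaviour come from the text, but the minute-by-minute representation, the assumption that viewing is credited only after a first button press, and the `record_viewing` helper are all illustrative.

```python
PROMPT_AFTER = 45  # minutes without a button press before the lights flash again

def record_viewing(presses, duration):
    """presses: {minute: True} marks minutes when some personal button
    was pressed while the set was on. Returns the list of minutes the
    meter credits as viewing."""
    credited, last_press = [], None
    for minute in range(duration):
        if presses.get(minute):
            last_press = minute
        # Credit viewing only within PROMPT_AFTER minutes of a press;
        # after that, the lights flash and recording stops until the
        # next press.
        if last_press is not None and minute - last_press < PROMPT_AFTER:
            credited.append(minute)
    return credited

print(len(record_viewing({0: True}, 60)))       # one press -> 45 minutes credited
print(len(record_viewing({}, 60)))              # no press -> nothing credited
```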
Unlike the setmeter, which is completely automatic, the peoplemeter depends on the co-operation of viewers. Do people actually remember to press their buttons, or do they just press any button to stop the lights flashing? To demonstrate to their skeptical customers that peoplemeters give accurate data, the research companies do manual checking - such as by ringing up members of their panel to ask what they are watching at that moment, then comparing the answers with the peoplemeter data. From the figures I've seen, compliance levels are quite good - correct around 90% of the time. The major problem is when people who were watching a program leave the room (e.g. to answer a phone call) and forget to un-press their button. This produces audience figures that are a little too high.
The most serious problem with peoplemeters is the representativeness of the panels. Often, less than half the households asked to co-operate actually do so, which raises the question of what is unusual about households that are in the panel. From data I've seen, the wealthier and better-educated households are often under-represented. As such people tend to spend less time watching TV, this also produces a slight overestimate of audience sizes. Though TV stations don't care about that (they like to see large audiences reported), potential advertisers are more skeptical.
An emerging problem, in western countries, is what type of viewing is actually counted. There are now many ways to watch a TV program...
In Australia in 1991, the diary system was superseded by a meter system. This was an excellent opportunity to find out the problems with diaries, and there were several clear differences. Audiences at evening peak times didn't change much - particularly for series programs, which came out only marginally lower with meters.
But audiences in the middle of the day were much higher with meters than with diaries. The reason seemed to be that midday programs were sort of trashy, and perhaps people who were watching TV around midday were ashamed of watching those programs, and didn't want other people in the household to see that viewing in the shared diary.
The other change with the introduction of peoplemeters was that late-night programs now had much larger audiences. It seems that when diaries were used, the late-night viewers either forgot to fill them in, or couldn't be bothered.
From the raw data of the numbers of households or people viewing TV channels, these measures are calculated:
People ratings are also a percentage, but of people, not households. Unlike radio audience surveys, which don't include children under a certain age (around 10 to 15), TV surveys usually include everybody (except babies). People ratings can also be based on demographic groups: age groups, sexes, occupation types, and so on.
HUT (Households Using Television) figures are simply ratings (not people ratings, though) for all channels combined. An HUT figure of 50 means that 50% of households were watching TV in the survey area. This can be either an average across a long time period, or a figure at a particular time.
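The HUT definition can be shown in a few lines of Python. The snapshot of households and the `hut` helper are illustrative assumptions; the definition (percentage of households with a set in use, regardless of channel) is from the text.

```python
def hut(households):
    """households: list of sets of channels being watched at one moment
    (an empty set means the TV is off). Returns the percentage of
    Households Using Television."""
    using = sum(1 for channels in households if channels)
    return 100.0 * using / len(households)

snapshot = [{"TV1"}, {"TV2"}, set(), set()]
print(hut(snapshot))  # -> 50.0: half the households had a set in use
```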
TARPs (Target Audience Rating Points) are people ratings calculated for an advertiser's target demographic group. They are used mainly by advertisers, and apply more to commercials than to programs.
Reach is the number or percentage of people who see that program or commercial, or watch that channel in a particular time period. For example, a program might have an average people rating of 10%, but a reach of 20% - which means that 10% of people in the survey area were watching it on average, but across the whole time it was being broadcast, 20% saw some part of it. Divide the people rating by the reach to find out the proportion of the program that the average viewer saw: in that example, it's 10 out of 20, which is half: i.e. the average person who watched the program saw half of it.
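The rating-to-reach arithmetic above, as a worked Python example (the figures are the ones from the text):

```python
avg_rating = 10.0  # average % of people watching, over the whole broadcast
reach = 20.0       # % of people who saw any part of the program

# Proportion of the program the average viewer saw:
proportion_viewed = avg_rating / reach
print(proportion_viewed)  # -> 0.5, i.e. the average viewer saw half of it
```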
Frequency also applies to commercials. For example, a common myth (not supported by much empirical data) is that the average frequency for a TV commercial should be 3: in other words, the average person should see the commercial 3 times before rushing off to buy the advertised product. (In fact, people just don't behave like that.)
Program rankings are popular with people who don't understand the ratings system (e.g. many journalists). These are often reported in the press, along the lines of "Channel 28 had 6 of the top 10 programs last week". Such figures can be quite misleading. A channel with 6 of the top 10 programs probably had an audience share of around 30%, not the 60% you might assume. That's because in one week a channel might broadcast around 150 programs, and most of the viewing was to the bottom 140, not the top 10.
Audience share can be confusing. If a channel has 30% audience share, that doesn't mean that 30% of people watch it. A share figure is a share of person-hours, not of people. A 30% share means that, of every 100 hours that people in the survey area spent watching TV, 30 were spent watching that channel. So if you add up all the share figures for every channel, the total is always 100%. It's also possible to calculate share figures for specific time periods, but it usually doesn't make sense to calculate shares for an individual program unless all channels in the area have programs starting and finishing at the same time.
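Share as a division of person-hours can be sketched like this in Python. The channel names, the hours, and the `shares` helper are illustrative assumptions; the point is that shares always sum to 100%.

```python
def shares(person_hours):
    """person_hours: {channel: total person-hours viewed over some period}.
    Returns {channel: % audience share}."""
    total = sum(person_hours.values())
    return {ch: 100.0 * hours / total for ch, hours in person_hours.items()}

week = {"TV1": 30.0, "TV2": 45.0, "TV3": 25.0}
result = shares(week)
print(result["TV1"])            # -> 30.0
print(sum(result.values()))     # -> 100.0: shares always total 100%
```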