Many organizations automatically collect information about their audiences, but don't make much use of it. The principle of media impact assessment is to gather and use information that has often been collected for other purposes. You need to collect two types of information, and relate them. These could be called "cause" and "effect". The cause is your main activity, and the ways in which it varies. The effect is the audience response: the number of people reached and the reactions they give.
An organization can measure three kinds of things: inputs, outputs, and impacts.
You might have noticed there are two kinds of impacts. The immediate one is sometimes called "outcomes": e.g. audience size. The longer-term impact is the result of being an audience member. For a school, the immediate outcome is how many pupils attended; a longer-term outcome is how much they learned. For a radio station, the immediate outcome is the audience size; the longer-term outcome is the effect of the program. If it was (for example) an anti-AIDS campaign, how many people started practising safe sex as a result of it?
The role of impact assessment is to relate impacts to inputs: in what ways do inputs get transformed into impacts? When the mechanisms are understood, the process can be improved. The chain works like this: inputs (what you do) produce outputs (what reaches the audience), which in turn produce impacts (what changes as a result).
Media impact assessment generally measures short-term impacts. Obviously long-term impacts are harder to measure than short-term ones - but since the main purpose of many activities is to have long-term impacts, it's worthwhile to try to assess them, even though they (naturally) take longer to assess and the measures are often contaminated by other effects. For that reason, assessing long-term impacts is best done by setting up multiple measures. If all the measures end up pointing in the same direction, this adds to the evidence for the effectiveness of the process being evaluated.
I helped set up a response monitoring system for a music organization that I once worked with. This example is about an orchestra that held concerts several times a week, and wanted to know how to attract larger audiences without reducing its music to the lowest common denominator. So we set up a database that related their box-office figures to the music they played. Imagine it as a spreadsheet: each row applied to one concert, and each column applied to a particular piece of information about that concert. Some of the columns were inputs (or "causes") while others were outputs and impacts ("effects").
In impact assessment, it's important to take time-lags into account: a reputation can lag years behind actuality, for people who don't have a lot of contact with the organization. With an orchestra, maybe people are still staying away because they didn't like the previous conductor, and don't know that he left five years ago.

Inputs =
(a) The content - the music played, the musicians
(b) The publicity - advertising budget, number of ads, estimated readership of the ads
(c) Other factors. In this case there are a lot, including accessibility of venue, day of week, time of day, competition from other attractions, and the weather a few hours before the concert, which affected casual ticket sales.

Outcomes =
(a) People - e.g. number of tickets sold
(b) Money - e.g. revenue from tickets sold
(c) Reputation - reviews (e.g. in newspapers), often rated on a 5-point scale, from Poor to Excellent.
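To make that spreadsheet concrete, here is a minimal sketch in Python of what one row (one concert) might look like, with the input columns followed by the outcome columns. Every field name and value is invented for illustration; none of it comes from the orchestra's actual database.

```python
# A sketch of one row of the monitoring spreadsheet described above.
# All field names and values are invented; none come from the real database.
concert = {
    # Inputs ("causes")
    "date": "2012-02-11",
    "programme": "Beethoven 5; Sibelius Violin Concerto",
    "ad_budget": 1200,                 # advertising spend, in currency units
    "num_ads": 8,                      # number of advertisements placed
    "estimated_ad_readership": 45000,
    "day_of_week": "Saturday",
    "rain_before_concert": False,      # weather shortly before, affects casual sales
    # Outcomes ("effects")
    "tickets_sold": 870,
    "ticket_revenue": 26100,
    "review_rating": 4,                # 1 = Poor ... 5 = Excellent
}
```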
I've noticed in many impact assessment studies that staff of the media organization being studied often assume that the content is by far the main factor in determining outcomes. But in fact, content is more often than not quite a minor factor, because it varies only within a narrow range.
Relating inputs to outputs can be treated as a mathematical problem, producing a formula. Perhaps you could find a friendly statistician who could help. For the above exercise, I used a statistical technique known as regression analysis, but different techniques will be needed for different types of situation. For the formula to be reliable, the number of incidents needs to be fairly large: this example was based on more than 100 concerts.
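For concreteness, here is a hedged sketch of what such a regression could look like in Python, using ordinary least squares from numpy. The input columns and the handful of made-up rows below are purely illustrative; a real analysis would load the full spreadsheet of 100+ concerts, and the choice of technique would depend on the data.

```python
# A sketch of regression analysis relating concert inputs to tickets sold.
# The data below are invented; in practice you would load the real spreadsheet.
import numpy as np

# Input columns: [ad_budget, num_ads, weekend (1/0), rain_before (1/0)]
X = np.array([
    [1200,  8, 1, 0],
    [ 800,  5, 0, 0],
    [1500, 10, 1, 1],
    [ 600,  4, 0, 1],
    [1100,  7, 1, 0],
    [ 900,  6, 0, 0],
])
y = np.array([870, 540, 690, 410, 820, 600])   # outcome: tickets sold

# Add an intercept column, then estimate coefficients by least squares.
X1 = np.column_stack([np.ones(len(X)), X])
coeffs, *_ = np.linalg.lstsq(X1, y, rcond=None)

print(f"baseline (intercept): {coeffs[0]:.1f} tickets")
for name, c in zip(["ad_budget", "num_ads", "weekend", "rain_before"], coeffs[1:]):
    print(f"{name}: {c:+.2f} tickets per unit change")
```

The fitted coefficients give rough answers to questions like "how many extra tickets does one more advertisement appear to be worth?" - though with observational data like this they describe associations rather than proven causes.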
Impact assessment can often be improved by doing tiny surveys. Often a sample of 20 is enough, if it's fully typical of the population, to answer a question like "Why did our February concert get double the audience of our March concert?"
This example was a one-off study, but usually media impact assessment involves setting up an ongoing monitoring system: filling in that spreadsheet on a regular basis, to help keep track of the causes and the effects.
Assessing the impact of a single campaign is a common type of need, but it is a much more ambitious assessment, because almost any type of large-scale communication has multiple effects, and also because the sought effects have multiple causes. Therefore, trying to trace a link from a single cause to a single effect can be almost impossible. A good example would be an anti-smoking campaign on TV: a series of (say) 70 commercials, broadcast on one channel, over a period of a month. The desired behaviour is that smokers see the commercials and, as a result, cut down or give up smoking. The obvious way to assess this would be an after-only survey, asking people whether they had seen the commercials and whether they had smoked less since.
The obvious problem with that approach is that the impact measures would be based purely on statements by respondents. Specifically, some would be likely to try to please the interviewer by exaggerating the extent to which they'd cut down on smoking. The "gold standard" solution would be to conduct an experiment, instead of (or as well as) the survey. This could involve choosing around 30 geographical areas, surveying the population of each beforehand to estimate the frequency of smoking, running the commercials in half of the areas (chosen at random), then measuring the frequency of smoking again after the campaign. Though sounder in theory than an after-only survey, the experimental method can produce unexpected results, and is far more expensive. Nor is it helpful in working out what to do next: if the campaign was very successful or very unsuccessful, you'll never know exactly why. But if you use the survey method, you can collect a lot of information from respondents about how they reacted to the commercials.
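To make the logic of that experiment concrete, here is a small sketch in Python comparing the average change in smoking frequency in areas where the commercials ran against areas where they did not. The area labels and figures are invented, and a real analysis of 30 areas would also need a significance test.

```python
# A sketch of the area-based experiment: compare before/after smoking rates in
# campaign areas against control areas. All figures are invented.

# (area, smokers per 100 adults before, smokers per 100 adults after)
campaign_areas = [("A", 28.0, 25.5), ("B", 31.0, 29.0), ("C", 26.5, 25.0)]
control_areas  = [("D", 29.0, 28.5), ("E", 27.0, 27.5), ("F", 30.5, 30.0)]

def mean_change(areas):
    """Average change (after minus before) across a group of areas."""
    return sum(after - before for _, before, after in areas) / len(areas)

campaign_change = mean_change(campaign_areas)
control_change = mean_change(control_areas)

print(f"Change where commercials ran:      {campaign_change:+.2f} per 100 adults")
print(f"Change where they did not run:     {control_change:+.2f} per 100 adults")
print(f"Difference (crude campaign effect): {campaign_change - control_change:+.2f}")
```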
Instead of spending vast sums of money on an experiment, it's often possible to draw some conclusions from data collected for another purpose. For example, you might be able to get statistics on the number of cigarettes sold in the area in the month before and the month after the commercials. This information may not mean much by itself, but it can be compared with the survey data: if the survey indicated a 10% drop in smoking, this should be reflected in cigarette sales. Another possibility for verifying survey statements would be to ask other people in the smoker's household whether they had noticed the smoker cutting down since the campaign. Any form of verification will add substance to such survey data.
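As a toy illustration of that kind of cross-check, the sketch below compares a drop in smoking reported by a survey with the change in cigarette sales over the same period. All the figures are invented, and the comparison is deliberately crude.

```python
# A toy cross-check of survey findings against independently collected sales data.
# All numbers are invented for illustration.
survey_reported_drop = 0.10      # survey suggests smoking fell by 10%

sales_before = 1_250_000         # cigarettes sold in the month before the campaign
sales_after = 1_180_000          # cigarettes sold in the month after

sales_drop = (sales_before - sales_after) / sales_before
print(f"Drop suggested by the survey:   {survey_reported_drop:.0%}")
print(f"Drop seen in the sales figures: {sales_drop:.0%}")

if sales_drop < survey_reported_drop / 2:
    print("The sales data cast doubt on the size of drop the survey suggests.")
else:
    print("The sales data are broadly consistent with the survey findings.")
```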
A separate problem is that the effects might be delayed. It could be that, a year after seeing the commercials, some people might decide to give up smoking - purely due to those commercials, not for any other reason. In practice, it is very rare for a single campaign to be the sole reason, but the drip-feed effect of repeated exposure will probably have some impact in the end - though this will not be attributable to any single campaign.
You can see from the above discussion how messy this type of impact assessment can be. Whole books have been written on this subject. But for a simple approach, the after-only survey, with some attempt at independent verification, is often adequate - given that the budget for impact assessment never seems to be enough to do it properly.
Plenty has been written on impact assessment, but very little on media impact assessment. If you'd like more information, you can do a web search on "social impact assessment" or "health impact assessment", which will produce some relevant material. A search on just "impact assessment" will bring up thousands of references to environmental impact assessment, which is quite different. Also, much of the writing on social impact assessment really refers to the social impact of environmental projects. The difference is that this type of assessment is prospective: it looks ahead, trying to estimate the social effect of projects. But "media impact assessment," as described here, uses information that already exists.
If you are interested in assessing the impact of media work and, more broadly, in evaluating communication for development, then you may like the following new resource:
Evaluating Communication for Development is a new and important work by Dr. June Lennie and Prof. Jo Tacchi that presents a comprehensive framework for evaluating communication for development (C4D). The proposed framework combines the latest thinking from a number of fields in new ways, offering an alternative holistic, participatory, mixed-methods approach based on systems and complexity thinking and other key concepts. It is supported by examples and case studies from action research and evaluation capacity development projects undertaken by the authors over the past fifteen years.
If you are involved in media/communication for development in some capacity then you are likely to benefit from this book.
Here is what notable Bolivia-based communications specialist Alfonso Gumucio-Dagron (who we feature on this website) had to say:
“This book is an extraordinary contribution to understanding communication for development, not only from the evaluation perspective. With this book, those who are reluctant to acknowledge the role of communication in development and social change will lack good arguments. It presents a comprehensive framework for understanding how C4D can be evaluated and why it is indispensable for long-term sustainable development.”
To learn more about the book and order a copy, click here.