Even though managers often bemoan the lack of research, a lot of research that's done is never acted on. For researchers, this is often frustrating, and for managers, irrelevant research is a money-wasting irritant.
The key principle in using research data is to plan the action at the same time you plan the survey. If the wrong population is sampled, or the questions are not fully relevant, the research will not be properly usable.
One sure way to produce unusable research is for the end-users of research and the researchers not to communicate fully. Here's an example of how not to do it:
A middle manager wants some research done, and sends a written request to his or her superior. The superior makes a few changes and sends it on to, say, a purchasing manager, who rephrases it again to comply with corporate policy. The purchasing manager then contacts a researcher. If that researcher is not then permitted to deal directly with the originator of the request, the original idea will by this stage be so distorted that any resulting research will be completely useless!
Why is that? Because (a) the sample you really need is probably one you'll never quite be able to reach, and (b) the art of question wording is a very subtle one. It usually takes several stages of interplay between researcher and client before a questionnaire and sample design are adequate.
The person who will use the results and the person who will manage the research need to spend adequate time discussing the research plan. Usually this means at least two meetings, totalling several hours. Time spent planning the research is never wasted.
Often an organization will spend months vaguely thinking that it needs some audience research done, and at the last moment will decide that it needs the results as soon as possible. A false sense of urgency is built up. With the resultant rush, mistakes are made. As soon as the first results are released - or even in the middle of a survey - people will begin saying "If only we'd thought to ask them such-and-such..."
I've often experienced this frantic haste - especially when advertising agencies are involved. There's an old truism about research: "It can be cheap, it can be fast, and it can be high quality. Pick any two." So hasty research will either be of low quality, or very expensive. Take your pick.
A common criticism of survey data is that you spent a lot of money to find out what you already knew. Here's an example.
Some years ago, I organized a survey on the effectiveness of educational radio programs. The manager of educational programs didn't really want a survey, but the network manager insisted. So I met with the educational manager, and we worked out what he needed to know. He wasn't so much interested in audience size, which he assumed would be small. He was more interested in what kinds of people had listened to each program. I commissioned a big research company to do a survey and the results came back in the form of hundreds of pages of computer-printed tables. From these, I wrote an intelligible report, and passed it to the educational program manager.
He flicked through it, stopping at a page about (I think) a program about car maintenance. It showed that the main listeners to this program were women, older people, and those with above-average education.
"That's obvious," said the manager. "The young blue-collar men would know it already, and these are the people who need to catch up. Why did I need a survey to tell me that?"
But there were other things about the data that seemed strange, so I went back to the computer tables and took a closer look. I rang the research company, who sheepishly confirmed what I'd begun to suspect: that the table headings were transposed, and the real listeners to the program were the opposite of what I'd put in my report. I told the education program manager there was a problem, rewrote the report and took it back to him.
This time it showed that the listeners to the car maintenance program were chiefly young men with below-average education. "That's obvious," said the manager. "I could have told you that all along."
This was an extreme example, because he was notoriously bloody-minded, and had never wanted a survey in the first place. His concentration on the demographic breakdown of the audience (rather than its size) was, I found later, intended to sidestep the fact that he suspected the audience was almost nonexistent.
There's a very educational way to overcome the "I knew it all along" attitude. When the questionnaire is complete, and the survey is ready to go, give all end-users a copy of the questionnaire. Ask them to estimate the percentage who will give each answer to each question, and write these figures on the questionnaire, along with their names. Collect the questionnaires, and summarize everybody's guesses. When the survey has been finished, compare the actual results with the guesses. Then it will become obvious that:
(a) They didn't know it all along. Even experienced researchers are doing well if they get as many as half the results within 20% of the actual figure;
(b) The act of guessing (OK, estimating) the answers will make the users more aware of the audience, and more interested in the results. I don't know why this is so, but it always seems to work out that way.
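The guess-vs-actual comparison above can be tallied mechanically. Here is a minimal sketch: the question names, percentages, and 20% tolerance are illustrative assumptions, not figures from any real survey.

```python
# Hypothetical sketch: compare end-users' guessed percentages with the
# actual survey results, and count guesses within 20% of the real figure.

def score_guesses(guesses, actual, tolerance=0.20):
    """Return the fraction of guesses within `tolerance` (relative) of actual."""
    hits = 0
    for question, guessed_pct in guesses.items():
        real_pct = actual[question]
        if real_pct and abs(guessed_pct - real_pct) / real_pct <= tolerance:
            hits += 1
    return hits / len(guesses)

# Illustrative figures only -- not from any real survey.
guesses = {"listened_last_week": 40, "aged_under_30": 25, "would_renew": 70}
actual  = {"listened_last_week": 12, "aged_under_30": 22, "would_renew": 55}

print(score_guesses(guesses, actual))  # only 1 of 3 guesses within 20%
```

Summarizing everyone's sheets this way makes the "closest guesser" easy to identify at the results presentation.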
Planning the action involves deciding, before the survey begins, what will be done with the results. Is any action foreshadowed? Or is the purpose of the survey simply to increase broadcasters' understanding of their audience? Or what? In practice, each question usually has a different purpose.
Here's a useful exercise, which is best done while the questionnaire is being written. For each question, note...
(a) the reason for its being asked,
(b) how (if at all) the results could be acted on.
The advantage of making an action plan is that it is often a long time - sometimes several months - between the questionnaire being written and the survey results becoming available. It's easy to forget why a question was asked. Here's an example.
Reason for asking this question:
Find out how to get more subscribers to renew.
Action to be taken:
If answer = 1 or questionnaire returned blank: delete from database
If 2 or 6: Send Letter A, pointing out increased benefits
If 3: Send Letter B, pointing out new programs
If 4 or 5: Send reminder letter C
If 7: determine which of the above is most appropriate.
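The action plan above is specific enough to run as a simple dispatch rule. This is only a sketch: the answer codes and letters follow the example, but the function name and the action labels are my own.

```python
# Sketch of the renewal action plan above. Answer codes 1-7 follow the
# example; None stands for a questionnaire returned blank. The action
# labels ("send_letter_A", etc.) are hypothetical.

def renewal_action(answer):
    """Map a subscriber's answer code (or None for blank) to an action."""
    if answer is None or answer == 1:
        return "delete_from_database"
    if answer in (2, 6):
        return "send_letter_A"    # pointing out increased benefits
    if answer == 3:
        return "send_letter_B"    # pointing out new programs
    if answer in (4, 5):
        return "send_reminder_C"
    if answer == 7:
        return "manual_review"    # decide which letter is most appropriate
    raise ValueError(f"unexpected answer code: {answer!r}")

print(renewal_action(3))  # -> send_letter_B
```

Writing the rules down this explicitly, months before the results arrive, is exactly what stops anyone asking "why did we include that question?"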
That example was for a census rather than a survey, and the question was very specific. Normally it would not be possible to have an individual reaction for each respondent.
And not all questions call for a specific action. To gain an understanding of the audience is also important - and you never know when previously-collected information might suddenly become relevant. But you can ask 1,000 questions and never ask the exact one whose answer you'll need next month. I suggest that questions which don't lead to any action should be given a low priority in a questionnaire.
When the survey results are out, the researcher needs to do more than simply send out a report. It's most effective when the initial results are presented to a group of end-users, who can then ask questions and make it clear which questions need more detailed analysis. At this presentation, everybody's initial estimates of the answers (see above) can be brought out, and a small reward perhaps offered for the closest guess.
If the report is written after this initial presentation, it will contain more relevant data, and less irrelevant material. When the report has been finished and sent out, it's a good idea to hold a second presentation, this time focusing on how the results can be used, what has been learned, and what further research or information may be needed to make better decisions.