Audience Dialogue

Our model for website marketing effectiveness

A website is effective when both the users and the owners achieve their goals for the site. For the users, normally some kind of sequence is involved: first they must know about the site, then they must be able to find it on the web, then they must find the page they want, with the information they want.

This all sounds very logical and obvious, but in fact studies of such "hierarchies" (as they are usually called, though they are really sequences) have found that many users don't follow the standard sequence. In a small study we did in 2003, involving 19 in-depth interviews, we found that not one of the respondents followed the logically obvious sequence - though most of them followed parts of it. For example, some people hadn't originally heard of the site we were researching - but they'd visited it after it turned up in search engine results.

So if there's a metaphor for an effective website, it's not the usual chain or ladder - in which people must take one step before another. It's more like a path in a forest, with multiple entrances, a few shortcuts - and maybe some blockages. In a forest, that might be a fallen tree lying across the path. On the web, it might be terminally bad usability - such as a page that's unreadable in one browser because there are words over the top of other words.

Most of the writing on website evaluation - such as that at www.useit.com - focuses on usability and accessibility. Those were certainly major problems in the early days of the Web, but most sites now make at least some attempt to be helpful for their users. However, just maximizing usability and accessibility will not by itself ensure that a website achieves its goals. We've been trying to move beyond that.

1. A route to effectiveness

We've designed a method of evaluating the effectiveness of a website: first defining a sequence of nine elements, then considering the flows between those elements, and finally obtaining the information needed to measure those flows and thus evaluate the site against each element. The elements of effectiveness will usually happen in this sequence:

(A) Reaching the site
1. Awareness: the web site's target audience must know it exists.
2. Findability: they must be able to find it when they want to.
3. Availability: it must be available to them when they try to access it.
4. Popularity: it must attract a reasonable number of visitors.

(B) Finding the relevant page
5. Accessibility: they must be able to access a page once it has downloaded, even if they (or their computers) are impaired in some way.
6. Usability: users must be able to navigate to the pages they need, and perform the tasks they have come for.

(C) Accomplishing their goals
7. Trust: users must have enough trust in the site and its owners to want to perform those tasks.
8. Fulfilment: users' performance of the tasks should fulfil the needs of both users and the site's owners. For example, if the site uses e-commerce, the logistical aspects must work smoothly - i.e. any goods and services ordered should arrive as expected and as promised. Otherwise, a lot of users won't make a second order.
9. Reputation: users' experience with the website should increase the reputation of the organization that owns the site.

Do you disagree with any of that? Doesn't it all seem boringly obvious? If that's what you're thinking, take a look at the implications of the path. Imagine you have a website with a potential audience of 1,000,000 people, and see how the numbers fall away.
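
To make that concrete, here's a rough sketch in Python of how such a funnel might be calculated. Every percentage below is invented purely for illustration - real figures would come from the data sources described later.

    # A hypothetical "falling away" calculation for a potential audience of
    # 1,000,000 people. Every rate below is invented for illustration only.
    audience = 1_000_000

    stages = [
        ("Aware the site exists",          0.40),
        ("Find it when they want it",      0.50),
        ("Site available when they try",   0.95),
        ("Actually visit",                 0.60),
        ("Can access the pages",           0.90),
        ("Can navigate to what they need", 0.70),
        ("Trust the site enough to act",   0.60),
        ("Task fulfilled as promised",     0.80),
        ("Leave with a better opinion",    0.70),
    ]

    people = audience
    for name, rate in stages:
        people = int(people * rate)
        print(f"{name:32s}: {people:>9,d}")

    # Even with fairly generous rates at each step, only about 24,000 of the
    # original million make it all the way along the path.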

Coming back to our method for site evaluation: as you move along the path, the evaluation will become more and more specific, requiring more and more knowledge of the users and the owners' objectives.

2. The evaluation framework

Communication via the Web has three points of evaluation, as shown on the following diagram.

That diagram shows the flow of information and value between a web site, its owner, and its users. The bottom loop acknowledges that communication involving the web site can also occur through other channels. Though the arrows are shown as double-headed, this does not imply that the flows are equal in both directions. For most sites, more information flows from left to right (i.e. from owner to users), while the flow of value (for a commercial site) tends to be more the other way. In other words, a commercial site should be profitable for its owners - in a broad sense - which may involve saving money, rather than making it. But it must also deliver enough value to users to make it worthwhile for them to visit the site.

Though separating the owner and the web site may seem odd at first (aren't they all part of the same organization?) this separation acknowledges the fact that a web site is usually not a perfect expression of its owner's intentions. (We keep hearing that "web designers never do what you ask them to!"). So the separation of owner and web site into two entities enables evaluation of the information flows between them.

3. Data sources for evaluation

Based on those two sets of principles, evaluating the effectiveness of a web site is done by:
(a) assessing the barriers to completing the path of effectiveness, by
(b) collecting data on the information and value flows shown in the diagram above.

And where does that information come from, to evaluate the site? Well, all three elements of the above diagram supply some of that information: the site itself, its owner, and its users. Let's consider each of these in turn...

3.1. Data from the web site

Some information can come from analysis of the site itself - e.g. a heuristic usability audit, which involves comparing the site against industry benchmarks. Other site-based data, usually not available from inspection, includes statistics on visits, and on the size of the site.
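
As an illustration of where visit statistics might come from, the sketch below counts requests in a web server's access log. It assumes the common "combined" log format and a file called access.log - both are assumptions made for this example, not part of our method.

    # Rough visit statistics from a server access log (combined log format).
    # The file name and the crude definition of a "visitor" are illustrative.
    from collections import Counter

    visitors = Counter()   # requests per client IP address
    pages = Counter()      # requests per URL path

    with open("access.log") as log:
        for line in log:
            parts = line.split()
            if len(parts) < 7:
                continue   # skip malformed lines
            ip, path = parts[0], parts[6]
            visitors[ip] += 1
            pages[path] += 1

    print("Distinct visitors (by IP address):", len(visitors))
    print("Most requested pages:")
    for path, count in pages.most_common(10):
        print(f"  {count:6d}  {path}")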

3.2. Data from the owner

A more extensive evaluation obtains information from the owners about the site's objectives, target audiences, and performance. Working with the site's owner, we can create a set of use case scenarios (ways in which visitors are expected to use the site) and evaluate the site's usability in terms of those use cases.

The other type of information that can be confirmed in discussion with the site's owner is the site's specific logic model. The 9-step model above is a generic one, which may need to be changed for a particular site. Defining this model involves asking questions such as "What is the site trying to achieve?" and "How are the users expected to react to the site: to buy products and services (for a commercial site), to change their habits (for a social marketing site), or in some other way?"
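
If it helps to see what such a site-specific model looks like when written down, here's a hypothetical sketch for an imaginary e-commerce site. All the names and use cases are invented for illustration; our method doesn't prescribe any particular format.

    # A hypothetical logic model for an imaginary e-commerce site, written as
    # plain data. The stages follow the generic 9-element path; the objective
    # and use cases are invented examples, agreed with the (imaginary) owner.
    logic_model = {
        "site": "example-store.com",
        "objective": "Sell garden tools online at a profit",
        "target_audiences": ["home gardeners", "landscaping trades"],
        "stages": [
            "awareness", "findability", "availability", "popularity",
            "accessibility", "usability", "trust", "fulfilment", "reputation",
        ],
        "use_cases": [
            "Find a specific product and check its price",
            "Compare two similar products",
            "Place an order and receive it as promised",
        ],
    }

    for case in logic_model["use_cases"]:
        print("Scenario to walk through:", case)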

3.3. Data from users

The most comprehensive type of evaluation will involve the site's users. Rather than making assumptions about their preferences and behaviour, we can study actual users in realistic situations. To identify major problems, a large sample is not necessary. A sample as small as 10 users - if carefully chosen - can often be enough to identify major barriers to the completion of the chain of effectiveness.
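
One way of seeing why a small sample can go a long way (a piece of simple arithmetic, not a finding from the interviews above): if a given barrier trips up, say, 30% of users, the chance that none of 10 users runs into it is tiny.

    # Why a handful of users is often enough to reveal a major barrier.
    # Assumption for illustration: each user independently has a 30% chance
    # of hitting a given problem during their visit.
    p = 0.30
    for n in (1, 3, 5, 10):
        found = 1 - (1 - p) ** n
        print(f"{n:2d} users: {found:.0%} chance the problem is seen at least once")

    # With 10 users the problem is almost certain (about 97%) to show up -
    # though rarer problems naturally need larger samples.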

When all the above data sources are combined, they can be used to comprehensively evaluate the site's performance, at each stage of its logic model.

4. What all this means, in practice

Over the last few years, we've been steadily improving our approach to effectiveness evaluation. The rest of this section describes what it now involves - and why checklists alone aren't enough.

What's wrong with checklists?

On the Web you'll find a number of usability checklists, each listing criteria for a "good site". (The most we've seen is 374 criteria.) The idea is that you compare every page of your site against the checklist, to find problems that need fixing. Then you are supposed to fix all the problems, and you'll have a perfect site.

If only life were so simple! We set out to build our own checklist, covering most of the steps above. We were thorough, but didn't go overboard. When the checklist reached 150 items, we realized that a perfect site, according to the checklist, would have lots of helpful features, all of them free of faults.

But when we compared the checklist with some highly successful sites, we realized that we were making a mistake. Successful sites (such as Amazon) fail a lot of the tests on a long checklist, but the reason those sites are successful is not that they minimize the number of things they do badly. It's that they do a few things very well indeed - and that gets them noticed among the world's millions of websites. Above all, they've earned the trust of their users. Look at all the subtleties of the way Amazon does business, including the "tone of voice", the range of interactivity, and its ease of search. Most of Amazon's virtues are minor and obvious - but put them all together, and they say "you can trust this site."

During the last few months, we've been creating checklists and critically evaluating them. After a lot of trial and error, we've found that it is feasible to use checklists for the early stages of the path: findability, availability, and accessibility - and for some of the elements that encourage users to trust a site. But for usability as such, the heuristic checklist approach isn't enough. There are too many possible items on the checklist, and many of them don't apply to most sites - even within the same industry.
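
For those early stages, some checks can even be automated. The fragment below is a minimal sketch - the URL is a placeholder, and a real checklist would cover far more than an HTTP response and a robots.txt file:

    # Two tiny illustrative checks for early stages of the path:
    # availability (does the site respond?) and one findability detail
    # (does robots.txt tell search engine crawlers to stay away?).
    from urllib.request import urlopen
    from urllib.error import URLError

    site = "http://www.example.com/"   # placeholder address

    try:
        with urlopen(site, timeout=10) as response:
            print("Availability: responded with HTTP status", response.status)
    except URLError as err:
        print("Availability: could not reach the site -", err.reason)

    try:
        with urlopen(site + "robots.txt", timeout=10) as response:
            robots = response.read().decode("utf-8", errors="replace")
            blocks_all = any(line.strip() == "Disallow: /"
                             for line in robots.splitlines())
            if blocks_all:
                print("Findability: robots.txt asks crawlers to stay out")
            else:
                print("Findability: robots.txt present, crawling not fully blocked")
    except URLError:
        print("Findability: no robots.txt (crawlers are not restricted)")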

A "use case scenarios" approach seems to work better, even in a heuristic evaluation. This involves (1) determining the key tasks of visitors to a site, then (2) trying to carry out each task.

1. Determining the key tasks. These are the ways in which visitors will want to use the site - not necessarily the ways the owners have in mind. Most sites have between about two and five main purposes. For an e-commerce site, these tasks are done by customers and potential customers - once they have found the site...

Those tasks probably apply to all e-commerce sites, and some customers would probably want to do other tasks as well (and other types of audience - e.g. suppliers, regulators, search crawlers - will have tasks of their own).

2. Carrying out the tasks. For each key task, this means trying to carry it out in ways that typical customers might. (The more familiar you are with the habits of Web users and the site's customers, the more realistically this can be done.) While navigating around the site, trying to perform the task, the user records each page visited and the exact time, and makes comments about (a) what they are trying to do at this point, and (b) how well it was achieved. The important thing here is to do tasks as realistically as possible, e.g. without pre-planning each move.
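
As a sketch of what one of those walkthrough records might look like - the field names and the example entries below are ours, invented to illustrate the idea rather than a fixed format:

    # One illustrative record per page visited during a task walkthrough.
    # Field names, dates and comments are invented for this sketch.
    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class WalkthroughStep:
        task: str          # the key task being attempted
        page: str          # address of the page visited
        time: datetime     # exact time the page was reached
        intention: str     # (a) what the user was trying to do at this point
        outcome: str       # (b) how well that was achieved

    steps = [
        WalkthroughStep(
            task="Place an order",
            page="/products/secateurs",
            time=datetime(2005, 3, 14, 10, 2),
            intention="Check the price and the delivery time",
            outcome="Price found quickly; delivery time not stated anywhere",
        ),
    ]

    for step in steps:
        print(step.time, step.page, "-", step.outcome)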

Our meaning of "use cases" is a little different from the meaning used by Jacobson and Cockburn, the best-known experts on use cases for system development. Their assumption is that the user will be able to perform the task - so a use case simply documents a typical set of steps that are performed to achieve some goal. But in practice (as all computer users know) systems don't work as intended, and inexperienced users often face unexpected obstacles. So our use cases are empirical rather than theoretical: "what happens in practice" rather than "what's meant to happen." As our use cases are user-centred, not site-centred, they often include tasks that can't (yet) be performed on the site.

Some designers use a similar concept - "personas". A persona is a hypothetical user of a common type. The difference between our use cases and personas is that a use case covers a single task, while a persona may have a range of tasks.

Of course, to find out what happens in practice, you need to study a sample of real users, and find out what they are trying to do on the website. We've developed a good set of methods for doing this in person, and are now working on ways of studying users in their own settings by using software that doesn't need to be installed.