Category Archives: Techniques

Prototypes don’t need to be “finished”

Catching up on some old copies of the Harvard Business Review, I was struck by this quote about prototyping from Design Thinking, an article by IDEO’s Tim Brown.

“Prototypes should command only as much time, effort, and investment as are needed to generate useful feedback and evolve an idea. The more “finished” a prototype seems, the less likely its creator will be to pay attention to and profit from feedback. The goal of prototyping isn’t to finish. It is to learn about the strengths and weaknesses of the idea and to identify new directions that further prototypes might take.”

I couldn’t agree with Tim more. As a user experience practitioner, prototyping is just about the most important technique I have up my sleeve. I am amazed when I hear people come up with excuses not to prototype:

  • It will take too much time
  • I need more time to finish the prototype
  • We can’t learn anything from something that isn’t finished or representative of the actual product.

What rubbish. All of these are excuses of people stuck in their old ways of doing things. The truth is that you can obtain valuable insights from very low-fidelity prototypes, and that even with higher-fidelity prototypes you can use plenty of smoke and mirrors to minimise the effort that goes into creating something that appears finished.

Tim’s point about “finished” prototypes being of less value is particularly insightful. The amount of time we invest in creating a prototype should be directly proportional to the amount of confidence we have in what we are designing. The less clarity we have about the direction the design will take, the less time we should invest in prototyping.

If we invest too much time in a prototype too early in the process, we run the risk of becoming so invested in its success that we won’t be prepared to listen to, or have time to respond to, the constructive feedback we receive.

This is particularly troublesome if you are working on a project with fixed timescales. The more time invested in “finishing” each design, the less time there is to explore alternate designs, and the less time there is to respond to any insights obtained from user research.

When prototyping we should resist the temptation to cement our thinking around one particular design paradigm for as long as possible. Short prototyping sprints (to use the Agile vernacular) are far preferable to the prototyping marathons many engage in. By remaining open to radically different concepts deep into the design activity, we remain nimble enough to realign our thinking as insights are revealed from user research.

Research suggests that using personas leads to superior products

A research study conducted by Frontend, Real or Imaginary: The effectiveness of using personas in product design, suggests that using personas leads to designs with superior usability characteristics.

The debate about whether personas work has been one of faith versus scepticism, claim versus counter-claim. This study demonstrates the effectiveness of using personas in the product design process, and while more research is needed, there is now some objective evidence that using personas does work.

The study is far from the definitive work that puts this debate to bed once and for all, but it is enjoyable fuel for the fire and certainly confirms my own belief that, if nothing else, personas provide teams with a way of remembering key audience needs. Well worth a read.

When conducting analysis, don’t jump to conclusions too quickly

The following diagrams, copied from De Bono’s Lateral Thinking, provide a great example of why we should avoid jumping to conclusions too early when analysing research findings. If we jump to conclusions too early, we run a serious risk of never reaching the most elegant solution.

De Bono’s puzzle involves adding objects together. With each new object, the challenge is to arrange the pieces into a regular shape.

Step 1 is to put the following objects together…

[Diagram 1]

Having successfully turned these into a square, we then have another object to add…

[Diagram 2]

Turning this into a rectangle is fairly straightforward for anyone, but then it gets a bit trickier…

[Diagram 3]

Having scratched our heads a bit, we arrive at the final challenge…

[Diagram 4]

Now this is tricky: for most people there will be no obvious way of turning this collection of objects into a regular shape.

How often have you brushed a research finding under the carpet because it didn’t fit with the mental model of the findings you were developing?

The trick is to deconstruct the objects each time a new object is added.

The solution is as follows…

[Diagram 5]

Targeting seducible moments

I’ve just got back from holiday to find that both the airline I travelled with (Air New Zealand) and the travel company I booked with (Flight Centre) have sent customer satisfaction survey emails. Is this the seducible moment for survey participation?

It would also be a rather seducible moment to influence my next holiday plans, i.e. if you liked your trip to the South Island of New Zealand here are some other trips you might want to consider, or to capture user reviews of particular aspects of my trip.

Are any travel companies actually exploiting these entry-level social networking techniques? I know when I worked on the launch of Opodo back in 2000 they were considering such things down the track, but my subsequent move to Australia meant I never got around to booking through their site.

As for the surveys, I do hope both Air New Zealand and Flight Centre are balancing their quantitative research with a good bit of qualitative research.

Tobii eye tracking

I was given a demo of the Tobii T60 eye tracker by James Breeze last week. Eye tracking has come a long way since I last investigated it. In layman’s terms, the T60 is a clever monitor with built-in sensors that track where on the page you are looking. There is absolutely no funny head-gear required!

The presentation was very slick and impressive, but I was left with many questions and concerns about using the product for usability testing.

It only works when the participant is close to the monitor

The T60 relies on you being within a specific distance of the monitor. Although vastly superior to old-school eye tracking, this felt awkward when I used the product and certainly restricts the use of the product to “lean forward” interaction experiences (despite what they may say about using it for “sit back” experiences such as watching videos, etc).

A colleague mentioned that other eye tracking products, such as faceLAB, are far less restrictive in terms of movement and proximity to the sensors.

I can imagine that this distance limitation could be problematic and require the facilitator to constantly remind the participant to “move closer”.

The heatmap produced by the product is misleading

The heatmap produced by the product was, in my opinion, very misleading. I am not sure if this is a deficiency of eye tracking heatmaps in general, but a colleague spent approximately two minutes attempting to complete a task. His gaze wandered nearly all over the webpage until, after around 1:45, it settled on the correct area of the screen. He then spent approximately 15 seconds gazing at that area before being confident enough to click.

My layman’s interpretation of observing this interaction was that the website failed my colleague, yet the heatmap produced by this interaction showed the correct area of the page as the single biggest area of the user’s attention. The heatmap suggested the website had succeeded.

(This observation probably applies to heatmaps in general rather than the Tobii T60’s heatmaps in particular.)
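The problem can be made concrete with a hypothetical fixation log (the region names and durations below are invented for illustration). A heatmap aggregates total dwell time per region, which throws away the order in which the fixations happened:

```javascript
// Hypothetical fixation log: [region, seconds], in viewing order.
// The user scans eight wrong areas for ~1:45 before finding the target.
const fixations = [
  ["nav", 13], ["search", 13], ["sidebar", 13], ["hero", 13],
  ["footer", 13], ["ads", 13], ["related", 13], ["breadcrumb", 14],
  ["target", 15], // only now does the gaze settle on the right area
];

// Total dwell time per region -- effectively what a heatmap shows.
function dwellByRegion(fixations) {
  const totals = {};
  for (const [region, secs] of fixations) {
    totals[region] = (totals[region] || 0) + secs;
  }
  return totals;
}

// Seconds elapsed before the first fixation on a region -- the
// temporal information the heatmap discards.
function timeToFirstFixation(fixations, region) {
  let elapsed = 0;
  for (const [r, secs] of fixations) {
    if (r === region) return elapsed;
    elapsed += secs;
  }
  return null; // region never fixated
}
```

On this log the heatmap’s view makes “target” the hottest region (15 seconds of dwell, more than any other single area), even though the user only found it after 105 seconds of searching, which is precisely the failure story the heatmap hides.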

The “think-aloud” protocol interferes with accurate eye tracking data

James mentioned that pure eye tracking should be conducted without interrupting the participant, i.e. without the “think-aloud” protocol, which has apparently been shown to interfere with eye tracking results. So to get accurate data from eye tracking you need a sterile test environment.

As someone who has conducted usability testing for many years, I would be highly reluctant to lose the valuable insights that come from the “think-aloud” protocol. The product does let you quickly review the video footage so the participant can reflect on what they did and why, but after-the-event reflection is likely to miss many valuable insights and to lead users to blame themselves as they watch footage of themselves failing a task. It also requires people to watch themselves, something I have found research participants to be almost universally unhappy about.

If I had to pick just one research technique…

As any good research consultant will tell you, to get accurate insights you should use a variety of research techniques. But the reality is that few clients have the budget or time to allow for multiple research techniques to be used.

This means that during a particular research project I will get to use one or two different techniques if I’m lucky. Given that reality, I am not sure whether the gloss and glamour of the data and videos produced by eye tracking is enough to make me want to lose the valuable insights offered by techniques such as “think-aloud” usability testing. I appreciate that it doesn’t need to be either/or, but the grim reality of many commercial projects is that you don’t get the opportunity to do both.

That, together with the cost of the product ($$$), leads me to conclude that there is still some way to go before I’ll be recommending eye tracking for anything other than clients with plenty of spare cash.

Lightbox usability

Modal dialog boxes, i.e. windows that, once opened, must be closed before the user can continue, have always been a bit of a usability no-no.

The biggest problems are that the user either doesn’t notice the modal dialog has appeared, or doesn’t realise that it requires their attention before they can continue.

During usability testing I’ve seen first-hand the frustration that modal dialog boxes can cause, so I’ve always diligently tried to define interaction solutions that avoid or minimise the need for them. But the clever JavaScript Lightbox technique seems to have put an end to these problems.

[Screenshot: a Lightbox dialog on the BBC website]

As you can see from the above example on the BBC’s site, the Lightbox technique leaves the user in no doubt what they need to do before they can return to the main window.

I haven’t yet conducted any usability testing on this kind of Lightbox dialog, but my guess is that it will test far better than traditional modal dialogs.
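For the curious, the core of the technique is simple. Here is a minimal sketch using plain DOM APIs (not the actual Lightbox library; the styles and function name are illustrative only): a semi-transparent overlay is layered over the whole page, visibly disabling it, with the dialog stacked on top.

```javascript
// Minimal sketch of the Lightbox pattern. Element styles and names
// are assumptions for illustration, not the real library's code.
function openLightbox(doc, message) {
  // Full-page semi-transparent backdrop: the page behind is visibly
  // dimmed, and clicks no longer reach the underlying content.
  const overlay = doc.createElement("div");
  overlay.style.cssText =
    "position:fixed;inset:0;background:rgba(0,0,0,0.6);z-index:1000;";

  // The dialog itself sits centred on top of the backdrop.
  const dialog = doc.createElement("div");
  dialog.style.cssText =
    "position:fixed;top:50%;left:50%;transform:translate(-50%,-50%);" +
    "background:#fff;padding:1em;z-index:1001;";
  dialog.textContent = message;

  // Clicking the dimmed backdrop dismisses the dialog.
  overlay.addEventListener("click", () => {
    overlay.remove();
    dialog.remove();
  });

  doc.body.appendChild(overlay);
  doc.body.appendChild(dialog);
  return { overlay, dialog };
}
```

The dimmed backdrop is what addresses the two classic failure modes: the user can hardly miss that a dialog has appeared, and they cannot interact with anything else until it is dealt with.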

Update

I see that Jakob Nielsen named the Lightbox his “interaction design technique of the year” in his 10 Best Application UIs of 2008. Nice to get there before the Dane.

Understanding real user requirements is important

Organisations are generally good at identifying business requirements, but most aren’t good at understanding user requirements or at using these to inform their solutions.

Techniques such as usability testing are excellent for bringing user involvement into the refinement of a solution, but by the time a prototype is available to usability test, it is often too late to make substantial changes.

Also, the sheer act of developing a prototype, however low-fidelity, shapes the way people think about other potential solutions – it defines the design space in which potential solutions are considered. But what if the user requirements are not fully understood, or bear no relation to the actual environment in which the solution will be used? The result could be an expensive white elephant that completely misses out on massive opportunities.

This is not an attempt to dissuade you from prototyping. Iterative prototyping and refinement is a key aspect of good design. But it has its rightful place – after the user requirements are understood.

What is needed is a way of understanding user requirements before any design activity begins and using these to inform the business requirements.

Contextual research activities provide rich insight into user requirements

Research activities such as contextual inquiry, interviews and diary studies each provide rich insight into current user behaviour.

These will identify:

  • Current user behaviour
  • How existing solutions fit into current user behaviour
  • Activities currently enjoyed by users that the new solution should not change
  • Activities that users find frustrating

These insights can challenge business assumptions, add weight to or dispel hunches, and bring depth to the definition of the requirements document.

Don’t fear the unknown

Many project teams have difficulty comprehending or justifying these early-stage user research activities. This may be because it is impossible to predict exactly what they will discover, but that is exactly why they are so important. Only by understanding existing behaviour, frustrations and pleasures can the best solution be identified.

It is only natural to crave processes that are familiar and comprehensible, but refining a prototype through usability testing can only take you so far. Early-stage customer insights help organisations think completely differently about a problem space, enabling them to develop solutions that aren’t simply ‘me too’ imitations of the market leaders in their chosen industry.