I was given a demo of the Tobii T60 eye tracking software by James Breeze last week. Eye tracking software has come a long way since the last time I investigated it. In layman's terms, the T60 is a clever monitor that uses beams/sensors to detect where on the screen you are looking. There is absolutely no funny head-gear required!
Although the presentation was very slick and impressive, I was left with many questions and concerns about using the product for usability testing.
It only works when the participant is close to the monitor
The T60 relies on you being within a specific distance of the monitor. Although vastly superior to old-school eye tracking, this felt awkward when I used the product, and it certainly restricts its use to “lean forward” interaction experiences (despite what the vendor may say about using it for “sit back” experiences such as watching videos).
A colleague mentioned that other eye tracking products, such as faceLAB, are far less restrictive in terms of movement and proximity to the sensors.
I can imagine that this distance limitation could be problematic, requiring the facilitator to constantly remind the participant to “move closer”.
The heatmap produced by the product is misleading
The heatmap produced by the product was, in my opinion, very misleading. I am not sure if this is a deficiency of eye tracking heatmaps in general, but a colleague spent approximately 2 minutes attempting to complete a task. His gaze wandered all over the webpage until, after around 1 minute 45 seconds, he fixed on the correct area of the screen. He then spent approximately 15 seconds gazing at that area before being confident enough to click.
My layman's interpretation of observing this interaction was that the website failed my colleague, yet the heatmap produced by this interaction showed the correct area of the page as the single biggest area of the user's attention. The suggestion of the heatmap was that the website succeeded.
(This observation likely applies to heatmaps in general rather than the Tobii T60's heatmaps in particular.)
The “think-aloud” protocol interferes with accurate eye tracking data
James mentioned that pure eye tracking should be conducted without interrupting the participant, i.e. without using the “think-aloud” protocol. Apparently, use of the “think-aloud” protocol has been shown to interfere with the results produced by eye tracking. So to get accurate results from eye tracking, you require a sterile test environment.
As someone who has conducted usability testing for many years, I would be highly reluctant to lose the valuable insights that come from using the “think-aloud” protocol. The product does enable you to quickly review the video footage so the participant can reflect upon what they did and why, but after-the-event reflection is highly likely to miss many valuable insights and to lead to users blaming themselves as they watch footage of themselves failing a task. It also requires people to watch themselves on video, something I have found research participants to be almost universally unhappy about.
If I had to pick just one research technique…
As any good research consultant will tell you, to get accurate insights you should use a variety of research techniques. But the reality is that few clients have the budget or time to allow for multiple research techniques to be used.
This means that during a particular research project I will get to use one or two different techniques, if I'm lucky. Given that reality, I am not sure whether the gloss and glamour of the data and videos produced by eye tracking are enough to make me want to lose the valuable insights offered by techniques such as “think-aloud” usability testing. I appreciate that it doesn't need to be either/or, but the grim reality of many commercial projects is that you don't get the opportunity to do both.
That, together with the cost of the product ($$$), leads me to conclude that there is still some way to go before I'll be recommending eye tracking for anything other than clients with plenty of spare cash.