
Usability reports (usability rant part 2)

Following on from yesterday’s post about the usability process, today I’ll focus on the deliverable, the usability report, and my contention that such reports are rarely grounded in any understanding of the project reality. Here are a couple of examples of usability findings from a (well-respected) usability company’s report:

Finding: It was said that the ability to filter [the search results] would be important.
Recommendation: Add check boxes so the customer can choose between [result types]

“Add check boxes”.

That’s easy to say: three words, “Add. Check. Boxes.” But what if the particular search engine the team are using does not support such functionality? Or if building it will take significant effort?

Finding: When probed about the use of breadcrumbs on the site, 2 participants were confused by the structure that was displayed.
Recommendation: Consider using chevrons [for the breadcrumb] to better convey to the customer that these words show the journey they have been on [rather than ‘/’]

Let’s leave aside the basis of this recommendation: two participants commenting that they weren’t sure about the use of the ‘/’ (this sounds more like it is reinforcing the author’s prejudice against the use of ‘/’ in a breadcrumb, and their preference for the ‘>’ symbol). And let’s also leave aside the fact that it has taken three weeks to let the development team know this.

It is presented on a PowerPoint slide with a screenshot of the breadcrumb and a mockup of the preferred solution, e.g. “Home > UK > South East > News” (rather than “Home / UK / South East / News”). I’d estimate it took twenty minutes of elapsed time to produce this slide. It will take a further ten minutes to discuss when the page is presented to the product owner. And the product owner will spend ten minutes explaining it to the IT project manager, who will take ten minutes to explain it to the developers who will make the change…
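
What makes the overhead so galling is how small the change itself can be. As a purely hypothetical sketch (none of these names come from the project in question), assuming the breadcrumb is rendered by something like the function below, the whole recommendation amounts to changing one argument:

```typescript
// Hypothetical breadcrumb renderer: the separator is just a parameter.
function renderBreadcrumb(trail: string[], separator: string = " / "): string {
  return trail.join(separator);
}

console.log(renderBreadcrumb(["Home", "UK", "South East", "News"]));
// -> "Home / UK / South East / News"

console.log(renderBreadcrumb(["Home", "UK", "South East", "News"], " > "));
// -> "Home > UK > South East > News"
```

A one-token change, wrapped in the best part of an hour of slideware and relayed explanations.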

Save your money

Usability testing is not a science. Investing in one or two formal usability tests is almost certainly money badly spent. The CUE (Comparative Usability Evaluation) reports give a good insight into this. For example, they asked seventeen experienced professional teams to independently evaluate the usability of the website for the Hotel Pennsylvania in New York.

The teams reported 340 different usability issues. Only nine of these issues were reported by more than half of the teams, while 205 issues (60%) were uniquely reported, that is, each was reported by only one team. Sixty-one of the 205 uniquely reported issues were classified as serious or critical problems.

They go on to state…

The study also shows that there was no practical difference between the results obtained from usability testing and expert reviews for the issues identified.

This suggests that getting a UI expert into the project team is probably money better spent than employing the usability company (and it supports my assertion that usability testing is often just validating the work of a professional). And when you do get a usability company to report back, as I’ve discussed above, don’t hold your breath for the quality or usefulness of the results:

They found that only 17% of comments in usability reports contained recommendations that were both useful and usable, and many recommendations were not usable at all [source]

So: if the recommendations you get from one company are likely to be different to the recommendations from another; if the report is going to be full of recommendations that are impractical and unimplementable; if an expert review can pick up the same usability problems that usability testing can, why bother with the usability company testing at the back end of the project? Indeed, why bother with the usability company at all? Get an interaction designer on the project from the outset (call them an information architect, user centred designer or UX person if you want), get them testing ideas and interfaces informally and regularly throughout, and truly embed usability in the project itself, not as an add-on process and report.

Usability rant part 3 >

Critiquing the critics (usability rant part 1)

Michael Winner may be a good food critic, but if you were looking for someone to cook you the finest meal for your budget, I doubt he would be your first choice. The same goes for film critics: they may be able to write an insightful and critical review, but would you want them directing a film on your budget? Would you want Jakob Nielsen, who is essentially a usability critic, to design your website? I mean, take a look at his site!

When you are building a product, you get a usability company in because you know that usability is a good thing and you want it. But if usability companies are the critics, what exactly are you expecting them to deliver?

The first usability test I ran was in 1991. I’ve set up usability labs, and I’ve observed hundreds of people interacting with technology and products. My passion has always been to do things at speed, turn around results ASAP and engage all stakeholders in the process. But I’ll talk about that in a later post. For now I’ll draw on my experience of working with organisations that have commissioned usability companies to review their products. I’ll break down the process I have often observed from usability testing vendors, considering both the elapsed time and the actual ‘value added time’ taken.

Day one

The client (usually the business) engages the usability company to audit the usability of the product that is being developed. The consultants come in to understand the user tasks, roles and goals; the target audience is identified for recruitment. ‘Value added time’ = 1 hour.

Day two

The team go away and produce a test plan and a recruitment brief for a research agency to find participants. They promise to get it back to the client in a couple of days, and they contact their preferred agency, who set about recruiting people (let’s assume this is a simple brief for a retail website targeted at young mothers). Produce test plan (value added time = 3 hours). Send to client for review.

Day three

The client returns the test plan with a few comments. Update test plan. Value added time = 30 minutes.

Days six to nine

Twelve usability sessions, each an hour long, at three a day: that is four days of testing. Value added time = 12 hours.

Days eleven to thirteen

The team spend three days analysing and synthesising the results, pulling supporting video clips and producing a detailed report. Value added time = 15 hours.

Day fourteen

The client sees the report for the first time (value added time = 2 hours). Interesting results. (IT representatives were not invited: they did not commission the report, and the product owner wants to see the output before sharing it with IT.)

Day sixteen

The product owner informs the dev team of the changes that need to be made in the light of the usability report. The project manager sucks air through his teeth and says, “You’ll need to raise a change request for those items… ha! Quick wins, they say? Hardly… Hmmm, OK, change the labels in the field, we should be able to do that…”

Value added vs. elapsed time

The usability company has delivered and their engagement is complete. In this not-too-fictitious scenario, sixteen days have passed from the start of the process to the recommendations reaching the developers who must ultimately action them. Add up the value added time (1 + 3 + 0.5 + 12 + 15 + 2 hours) and you get 33.5 hours: roughly four days actually doing stuff.

Day n

The product goes live. The usability company are aghast that so many of the changes they recommended have not been implemented. They place the blame fairly and squarely at the door of the developers, and it reinforces their belief that IT just doesn’t listen to, or worse, doesn’t care about usability. The critics have criticised from their armchairs; like the chickens in the old pigs-and-chickens fable, they have participated, not committed.

Usability rant part 2 >

Test Driven Design

I recently worked with a client where one of our deliverables was a set of wireframes illustrating how pages would be laid out and how the UI would work. We were quite pleased with the results: there was some quite complex AJAX-based functionality that provided a really immersive, goal-orientated experience, and it looked like it would make finding products easy and enjoyable. Testing the initial wireframes with users was an enlightening exercise, and it demonstrated that the wireframes we had developed were not yet ready: users were not able to fulfil the goals they were set. More worryingly, some of the complex functionality we were introducing just did not work (some of the navigation, filters and sorts were confusing; simply presenting the information on a single page would have sufficed).

Usability testing often gets discussed with good intentions, but all too often budgetary or time constraints mean it never happens. The user testing I refer to here impacted neither. We did our testing in a meeting room, the customer sitting at one end with a facilitator and the team watching on the projection screen in the same room. We used a talk-aloud protocol, walking through static PowerPoint wireframes that were linear in their presentation, following the ‘happy path’ to realising the customer goal. Someone took notes as we went through the wireframes (in the notes section at the bottom of the PowerPoint deck).

It was quick and dirty, but it produced results. After a couple of sessions we were seeing things that we, too close to the design, had missed. Changes to the wireframes took a few hours and allowed retesting the following day. Indeed, we made some quite significant changes to the user interaction model. When we re-tested the wireframes the improvements were evident: the feedback was more positive, there were fewer blank faces, less confusion, and “I’ve no idea what to do next” was never uttered. This was true iterative design, in cycles that took a few hours. Compare this to the days it would take if code were involved.

Where does this fit into the agile way of delivering software? In the agile/lean zealots’ passion for (and impatience about) delivery, and their (dogmatic?) assertion that anything but code (working software) is waste, they lose focus on what is really important: overall product quality. Product quality is not only zero or minimal defects and meeting the business requirement, but also delivering something that is usable and delightful to use. Developers may do Test Driven Development, but this is based on the assumption that what they are about to code is right. TDD should start earlier in the process: Test Driven Design. It takes time to write your tests up front, but we know it to be a good thing. So why not design the user interface (wireframes) and test that up front too?
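
To make the analogy concrete, here is the rhythm that Test Driven Development gives developers, as a minimal hypothetical sketch (the function and test are illustrative, not from any project): the test is written first and pins down the expected behaviour before the implementation exists. Test Driven Design applies the same discipline one step earlier, with the wireframe walkthrough playing the role of the test.

```typescript
import { strictEqual } from "node:assert";

// Step 1: write the test first. It encodes the behaviour we expect
// from a function that does not exist yet.
function testFormatBreadcrumb(): void {
  strictEqual(
    formatBreadcrumb(["Home", "UK", "South East", "News"]),
    "Home > UK > South East > News"
  );
}

// Step 2: write just enough implementation to make the test pass.
function formatBreadcrumb(trail: string[]): string {
  return trail.join(" > ");
}

testFormatBreadcrumb();
console.log("test passed");
```

In both cases the point is the same: the cheapest moment to discover that a design is wrong is before you have committed to building it.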