Methodology

Usability reports (usability rant part 2)

Following on from yesterday’s post about the usability process, today I’ll focus on the deliverable, the usability report, and my contention that such reports are rarely grounded in any understanding of the project reality. Here are a couple of examples of usability findings from a (well-respected) usability company’s report:

Finding: It was said that the ability to filter [the search results] would be important.
Recommendation: Add check boxes so the customer can choose between [result types]

“Add check boxes”.

That’s easy to say, three words: “Add. Check. Boxes”. But what if the particular search engine the team are using does not allow such functionality? Or if such functionality will take significant effort to build?

Finding: When probed about the use of breadcrumbs on the site, 2 participants were confused by the structure that was displayed.
Recommendation: Consider using chevrons [for the breadcrumb] to better convey to the customer that these words show the journey they have been on [rather than ‘/’]

Let’s leave aside the basis of this recommendation: two participants commenting that they weren’t sure about the use of the ‘/’ (this sounds more like it is reinforcing the author’s prejudice against the use of ‘/’ in a breadcrumb and their preference for the ‘>’ symbol). And let’s also leave aside the fact that it has taken three weeks to let the development team know this.

It is presented on a PowerPoint slide with a screenshot of the breadcrumb and a mockup of the preferred solution, e.g. “Home > UK > South East > News” (rather than “Home / UK / South East / News”). I’d estimate it took twenty minutes of elapsed time to produce this slide. It will take a further ten minutes to discuss when the page is presented to the product owner. And the product owner will spend ten minutes explaining it to the IT project manager, who will take ten minutes to explain it to the developers to make the change…

Save your money

Usability testing is not a science. Investing in one or two formal usability tests is almost certainly money badly spent. The CUE studies give a good insight into this. For example, they asked seventeen experienced professional teams to independently evaluate the usability of the website for the Hotel Pennsylvania in New York.

The teams reported 340 different usability issues. Only nine of these issues were reported by more than half of the teams, while 205 issues (60%) were uniquely reported, that is, each was reported by only one team. Sixty-one of the 205 uniquely reported issues were classified as serious or critical problems.

They go on to state…

The study also shows that there was no practical difference between the results obtained from usability testing and expert reviews for the issues identified.

This suggests getting a UI expert into the project team is probably money better spent than employing the usability company (and supports my assertion that usability testing is often just validating the work of a professional).  And when you do get a usability company to report back, as I’ve discussed above, don’t hold your breath for the quality or usefulness of the results:

They found that only 17% of comments in usability reports contained recommendations that were both useful and usable, and many recommendations were not usable at all [source]

So if the recommendations you get from one company are likely to be different to the recommendations from another; if the report is going to be full of recommendations that are impractical and not implementable; if an expert review can pick up the same usability problems that usability testing can, why bother with the usability company testing at the back end of the project? Indeed, why bother with the usability company at all? Get an interaction designer on the project from the outset (call them an information architect, user centred designer or UX person if you want), get them testing ideas and interfaces informally and regularly throughout, and truly embed usability into the project itself, not as an add-on process and report.

Usability rant part 3 >

Critiquing the critics (usability rant part 1)

Michael Winner may be a good food critic, but if you were looking for someone to cook you the finest meal for your budget, I doubt he would be your first choice. The same goes for film critics: they may be able to write an insightful and critical review, but would you want them directing a film for your budget? Would you want Jakob Nielsen, who is essentially a usability critic, to design your website? I mean, take a look at his site!

When you are building a product, you get a usability company in because you know that usability is a good thing that you want to have. If usability companies are the critics, what are you expecting?

The first usability test I ran was in 1991. I’ve set up usability labs, and I’ve observed hundreds of people interacting with technology and products. My passion has always been to do things at speed, turn around results ASAP and engage all stakeholders in the process. But I’ll talk about that in a later post. For now I’ll draw on my experience of working with organisations that have commissioned usability companies to review their products. I’ll break down the process I have often observed from usability testing vendors, considering both the elapsed time and the actual ‘value added time’ taken.

Day one

The client (usually the business) engage the usability company to audit the usability of the product that is being developed. The consultants will come in and understand the user tasks, roles and goals; the target audience will be identified for recruitment. ‘Value added time’ = 1 hour.

Day two

The team go away and produce a test plan and a recruitment brief for a research agency to find participants. They promise to get it back to the client in a couple of days. They contact their preferred agency who set about recruiting people (let’s assume this is a simple brief for a retail website targeted at young mothers).  Produce test plan (value added time = 3 hours). Send to client for review.

Day three

Client return test plan with a few comments. Update test plan. Value added time = 30 minutes.

Days six to ten

Twelve usability sessions, each an hour long, at three a day: that’s four days of testing. Value added time = 12 hours.

Days eleven to thirteen

The team spend three days analysing and synthesising the results, pulling supporting video clips and producing a detailed report. Value added time = 15 hours.

Day fourteen

The client sees the report for the first time. (Value added time = 2 hours.) Interesting results. (IT representatives were not invited: they did not commission the report, and the product owner wants to see the output first before sharing it with IT.)

Day sixteen

The product owner informs the dev team of the changes that need to be made in the light of the usability report. Project manager sucks air through his teeth and says “you’ll need to raise a change request for those items… ha! quick wins they say? hardly… Hmmm, OK, change the labels in the field, we should be able to do that…”

Value added Vs. Elapsed time

The usability company has delivered and their engagement is complete. From the start of the process to the recommendations reaching the developers who must ultimately action them, sixteen days have passed in this not-too-fictitious scenario, of which only around four were spent on value-added tasks, actually doing stuff.
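To make that ratio concrete, here’s a quick back-of-the-envelope sum of the value-added hours listed above against the elapsed calendar time. It’s only a sketch of the arithmetic, assuming an eight-hour working day and using the illustrative figures from this scenario rather than measured data.

    # Value-added hours from the day-by-day scenario above (illustrative figures).
    value_added_hours = {
        "brief the usability company": 1,
        "produce the test plan": 3,
        "update the test plan": 0.5,
        "run twelve usability sessions": 12,
        "analyse, synthesise and report": 15,
        "present the report to the client": 2,
    }

    total_hours = sum(value_added_hours.values())    # 33.5 hours
    working_day = 8                                  # assumed 8-hour day
    value_added_days = total_hours / working_day     # ~4.2 working days
    elapsed_days = 16

    print(f"Value-added: {total_hours} hours (~{value_added_days:.1f} days)")
    print(f"Elapsed: {elapsed_days} days")
    print(f"Share of elapsed time that added value: {value_added_days / elapsed_days:.0%}")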

Day n

The product goes live. The usability company are aghast that so many of the changes they recommended have not been implemented. They place the blame fairly and squarely at the door of the developers and reinforce their belief that IT just doesn’t listen, or worse, doesn’t care about usability. The critics have criticised from their armchair; as in the fable of the pigs and chickens, they are the chickens: participants, not committed.

Usability rant part 2 >

This much I know

The Observer magazine has a feature titled “This much I know”. It takes an interview with a celebrity who “shares the lessons life has taught them” and distils it down into a number of key statements. There is probably some mileage in this as an innovation game to play when you are looking for ideas and insights, hopes and fears from a newly formed team.

Ask team members to write statements “this much I know” on post-it notes with Sharpies (because you can’t write many words like that) and see what you get.

For structure you may consider using different coloured post-its to represent PEST themes: Political, Economic, Social and Technological, or how about CRIT:

  • C: Competitive landscape
  • R: Return on investment
  • I: Internal politics
  • T: Technology.

So for example…

This much I know

Acme.com recently redesigned their website. The forums and Twitter were full of positive comments (C)
People will pay for content. It’s just got to be priced right, relevant and timely to them, and easy to pay (R)
Getting things done here is a nightmare. The process to get approval for any new project is designed to be hard (I)
We’ve got problems with our CMS. Our licence is due to expire and the vendor is trying to get us to upgrade. We don’t know what to do (T)

Getting these ideas on the wall will help participants articulate their thoughts and provide a framework for understanding the current reality and mining for new ideas and opportunities.

Petition

I’ve written in the past about the government’s abysmal track record on IT development. I met with the local MP to discuss the issues but he didn’t really get it; he sent me away to write a policy paper for him, which I didn’t really have time for… So it’s good news that someone is doing something about it with a petition on the Number 10 website.

In his recent update on the progress of the petition, Rob Bowley mentions the Rural Payments Agency project. I can’t claim either to have been an ‘expert’ or to have had a salary anywhere near what he mentions, but I was a consultant on that project, so I nod in informed agreement. That experience gave me a benchmark of the ‘bad’ way of going about an IT project to compare with the ‘good’ world of lean and agile that I now inhabit.

Please sign the petition.

Are you prepared for the dip?

So you are refreshing or rebuilding your website.  You are introducing new functionality and features, and sweeping away the old. You’ve done usability testing of your new concepts and the results are positive.  Success awaits.   You go live.   And it doesn’t quite go as you expected.  You expect that the numbers and feedback will go on an upward trajectory from day one, but they don’t.  What you should have expected is the dip.

[Figure: illustration of the dip]

In October 2009 Facebook redesigned the news feed. Users were up in arms, groups were formed and noisy negative feedback abounded. A couple of years back the BBC redesigned their news page, and “60% of commenters hated the BBC News redesign”. Resistance to change is almost always inevitable; especially if you have a vocal and loyal following, you can expect much dissent to be heard. What is interesting is what happens next. Hold your nerve and you will get over this initial dip. We’ve seen a number of projects recently where this phenomenon occurred: numbers drop and negative feedback is loudly heard. But the dip is ephemeral and to be expected. The challenge is in planning for it and setting expectations accordingly. Telling your CEO that the new design has resulted in a drop in conversion rate is going to be a painful conversation unless you have set her expectations that this is par for the course.

Going live in a beta can help avert the full impact of the dip. You can iron out issues and prepare your most loyal people for the change, inviting them to feed back prior to the go-live. Care must be taken with such an approach in selecting the sample to participate in the beta. If you invite people to ‘try out our new beta’, with the ability to switch back to the existing site, you are likely to get invalid results. The ‘old’ version is always available and bailing out is easy. Maybe they take a look and drop out, returning to the old because they can. Suddenly you find the conversion rates of your beta falling well below those of your main site. Alternatively, use A/B testing and filter a small sample to experience the new site. That way you will get ‘real’ and representative data to make informed decisions against. Finally, don’t assume that code-complete and go-live are the end of the project. Once you are over the dip there will be changes that you can make to enhance the experience and drive greater numbers and better feedback.
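As a rough sketch of what that A/B filter might look like in practice: the idea is to assign a small, fixed percentage of visitors to the new site deterministically, so the same person always sees the same version on every visit. The function name, the 5% figure and the use of a hashed user ID below are illustrative assumptions, not a prescription.

    import hashlib

    def sees_new_site(user_id: str, rollout_percent: float = 5.0) -> bool:
        """Deterministically assign a stable slice of users to the redesigned site."""
        # Hashing the user ID gives the same answer on every visit, so users
        # don't flip between the old and new versions mid-journey.
        digest = hashlib.sha256(f"new-site:{user_id}".encode()).hexdigest()
        bucket = int(digest[:8], 16) % 10000     # bucket in the range 0-9999
        return bucket < rollout_percent * 100    # 5% -> buckets 0-499

    # Example: roughly 5% of visitors will be routed to the new site.
    for uid in ["alice", "bob", "carol"]:
        print(uid, "-> new site" if sees_new_site(uid) else "-> current site")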

Bunch of grapes or bunch of arse?

“We’ve got to have the ability to enable customers to share”
Random London Taxi driver spouting opinion on social media

“‘ere,  you say you’re in IT, whatcha make of this Facebook and twitter malarky? That Stephen Fry, what a tw@t, I don’t care that he’s just woken up and brushed his teeth. Now that QI, its a fix. He’s not so bright, he doesn’t know all the answers etc etc etc….  I’ll tell ya, Facebook and all that sh!t is a bunch of arse”.

“We’ve got to have the ability to enable customers to rate and review products”
Random UK customers in a focus group

Facilitator “So if I gave you all these user reviews for the product, or a review by Martin Lewis, who are you going to go with?”  Group: “Martin Lewis…  yeah, I trust him, no idea who these people who write reviews are… what’s in it for them?…  they are paid by the company aren’t they (cynical agreement etc)”

“Blackberrys are the business user’s phone”
Random teenagers in shopping centre talking about their mobile phones

“You’re nobody if you don’t have a Blackberry” (Ummmm, aren’t Blackberrys the business person’s phone?) “You’ve got to have one coz of the Blackberry PIN for texting”

Sometimes you can get hung up in your view of the world and make assumptions about the way it works. Yet it can be refreshing to go out onto the street and canvass ideas and feedback. That may be as simple as striking up conversations with people on the street (people love to talk), or running focus groups for no particular research purpose other than taking the pulse of what people think. Or it may be spending time on the shop floor. Get out of the office for a day and have fun seeing your customers, the consumers of your idea, in the wild. I’m not saying you should take the word of a taxi driver, a comment from a single focus group or an anecdote from a shopping centre as gospel, but it might make you think and spark some new, unexpected and contrary ideas.

A leap of faith

We recently pitched to a prospective client. We told a story of how we believed their customers would use the new product they had sketched out in the RFP; we told a ‘day in the life’ and brought it to life with illustrations. We walked through our approach to tackling projects and described our experience on similar engagements. One of the bid team described his experiences and what the client could expect, and we guaranteed that he would be on the project. We gave them a rate card, and an indication that, comparing what we knew of the project with others we have successfully delivered, what they indicated they wanted in the RFP was within the range of their budget (which we knew). We gave them directly relevant references who were ready to take their calls.

What we didn’t do was produce a project plan or a definitive cost. Given the paucity of information in the RFP there was just not enough to go by; it would have been a lie to come up with a number, there were too many imponderables. Yet that was what they craved the most. Most of the questioning was friendly and we answered well, but one question sticks in my mind: “From your presentation, you are asking us for a leap of faith to engage you”. Well yes, we are. But is that any different to the way you engage any new contractor? It is always going to be a leap of faith, even if you engage one of the big trusted brands with an established reputation. Sainsbury’s and the NHS took a leap of faith when they engaged my old employer Accenture; they probably felt they had done all the necessary due diligence and were partnering with a proven, reputable organisation, but in those two instances good things did not result.

We didn’t build it because the business didn’t prioritise it

Agile software development is inherently democratic. Choice over Prescription could be included in the Agile manifesto. We give the customer the choice: the choice to decide what is most important to them, what will deliver the greatest value, and to build that first. We do not prescribe that they must build a complex framework first; the software will evolve. You ain’t gonna need it (YAGNI) until you need it.

The problem with this democracy, with this unleashed choice, is that if you don’t have the right mix of stakeholders, the (agile project) customer doesn’t always know what is best. They are not always the best people to choose.

There is a difference between domain knowledge and what I’ll call ‘experience’ knowledge. A banker may know the banking domain inside and out; they can tell you the difference between all the different types of balance and how (and where) each is calculated: closing balance, running balance, etc. But unless they have done any research with customers, unless they have ‘experience’ knowledge, when it comes to a question such as which balance to provide as an SMS alert, their ‘domain’ knowledge is no better than your common sense.

Imagine software as a supermarket store. IT are responsible for the construction of the store: the basic layout, the signage, the checkout, the peripherals. The business are responsible for what goes into the store: the merchandising, the planogram. The business imperative is to fill the shelves and shift the product. They want to spend their money towards this goal, and anything that does not directly support it will be of lower priority. That is their domain and they will prioritise it over anything else. If they could fill the store with nothing but shelves they’d probably be happy.

Now imagine visiting the store. There’s no car park, there are no shopping trolleys, there are no emergency exits. There’s no ramp for disabled customers. The shelves rise to eight feet high (with no steps to reach the heights), and the aisles are difficult to negotiate because of promotional displays between the shelves. The business is happy, but what about the customer?

In the agile world, nobody is going to pay attention to this stuff unless it is prioritised.  “Sorry, we didn’t build any shopping trolleys because you prioritised building more shelf space over them”.

This sort of thing happens all the time; functional domain requirements trump experience requirements. Why? Because no one brings experience knowledge into prioritisation and planning sessions.

When stating their choice, your stakeholder wears a commercial hat; they are thinking about their targets, and those are based upon shifting product. They are living in their operational business domain. But cold commercials are not what shifts product; the experience does. Now go back to the democracy of choice on an agile project. Who is the ‘business’ specifying requirements? Is it a balanced team? Is there an experience champion with an equal voice? Is the voice of the customer recognised? If not, isn’t it about time you got a customer experience champion onto the team?

Do you modify your approach according to context?

I look in the rearview mirror. Blue lights flashing. Maybe he’s been called out to respond to a call and will overtake me. No, he’s flashing me. Instinctively I’ve slowed down. I look at the speedo; it is in km/h and I’ve not been paying attention to the road signs, but it is clear that I’ve been going too fast. So I pull over.

The question in my mind is what to do next. I’m in Australia, driving an awesome road, the Great Ocean Road, one that just asks for a car to be driven (OK, it’s hardly a Grand Tourer, it’s a compact Hyundai Getz). The brain is racing, pumped by adrenaline and fear. In the UK I would get out of the car, go to the back of it and talk to the officer. You’ll be asked to do this anyway: “Would you kindly step out of the car, sir”… The last time I hired a car overseas was in the US, in Atlanta. Driving through the deep south I got pulled over and I jumped out of the car. Bad move. “You’re makin’ me kinda nervous,” the cop drawled in his thick southern accent. He spread me over the ‘hood’ to search me and ended up taking me down to the station, something straight out of The Dukes of Hazzard, and handed me a huge fine to pay.

So I’m slowing down and thinking do I do the UK thing and jump out, or do the US thing and stay in the car?

I decide to stay in the car.  The right move.

So that’s an interesting story, but what does it have to do with the themes that I usually blog about?  Adapting your approach based upon context.

A while ago I met with the CIO of a company whose core business is complex instrumentation hardware. They were looking to diversify their offering, taking some of their products out of the hands of specialist practitioners and into the broader marketplace. Core to the success of these new offerings was usability; their devices required complex set-up and calibration. Their question: how do you redesign an expert system for novices?

Seeking an answer, they hired a customer experience consultancy to gather insights, understand the new segment’s needs and create wireframes for the new application interface. But the consultancy couldn’t fit with the way the company worked. They would run a workshop with the client for a couple of hours, then go ‘back to base’ to do the thinking and designing, and return to present their designs, well thought out and well polished. Yet every time they came back they had got something wrong. That approach may have worked for a website, but for this complex product it wasn’t working.

We were asked for our advice. I started by saying that I thought they should stick with the incumbent; whilst we would love the business, both parties had invested a lot and learned a lot in the past few months and it would be a folly for us to come in and have to start from scratch. The answer was to get both sides into the same room, a war room, and thrash out the designs. Forget about the formal methodology and way of doing things. If they were both in the same building they didn’t need that formal, staccato present – review – sign-off process. They could continually innovate. That is certainly the way we would do it, yet the CIO thought the incumbent would be resistant to changing their ways.

The theme that joins these two stories? It’s about reading the situation, knowing the culture and context you are in, and adapting your approach and behaviour accordingly. And that applies as much to agile practitioners as to Big Methodology people. Know your audience, understand the context, then pick your battles; think big, start small, scale fast, and remember that change won’t occur overnight.

Innovation games

Innovation games are a great way of engaging stakeholders, getting them to collaborate and think creatively around solutions to problems. Here are a few I’ve recently used. Introducing a persona helps focus the attention.

What happens if?
Ask participants to construct a back story for the persona. What have they done in the last year? Describe each touch point they have had with your brand or product. Now introduce a crisis moment: lost a job, got a terminal illness, won the lottery. What happens? How does the experience with each touch point change?

Build a widget
Again, give the group a persona to help focus their attention. Now give them half an hour to build a widget that would solve a problem the persona has. Give them paper, post-its, Sharpies, coloured pencils. This is agile, right? Now present back: they get two minutes to provide the context and pitch the product, then one minute to demonstrate how the widget works. Open the widget up to questions. How will it work…

You’re all crooks
<Insert your industry> are crooks. What new laws would you introduce to clean up their act? (OK, this feels uncomfortable, but it may help get people thinking about how consumers perceive the industry and how the customer experience could be improved. For example: you are crooks because you hide details in small print, so introduce a new law on transparency. What would that mean you would change?)

Kill the sacred cows
Every business has sacred cows or elephants in the room: things that are done because they’ve always been done, that are not to be challenged, that are considered immune from criticism, or that are too risky or dangerous to change. Ask participants to identify these, putting them on post-it notes. Now imagine that they no longer exist. What could you do now that you couldn’t do when the sacred cows were in place?
