
IT chalta hai

“Hearing the words ‘I LOVE this…’ from a client is a magical thing.”  So tweeted Graham Smith.

Now how often does “the business” say that to IT?  Rarely, I guess.  Why is that?  Why doesn’t “the business” love IT?

I think the Indians have got a phrase for this: Chalta hai.

I’ve recently come back from India.  As always it was a pleasure to read the Indian newspapers and weekly news magazines.  In discussing the Commonwealth Games, several columnists in their English-language columns made reference to the Hindi ‘Chalta Hai’.  There is no direct translation (hence the columnists’ use of Hindi) but “it’s all right” or “it’ll do” comes closest.

Chalta Hai is an attitude.  It is mediocrity.  The columnists applied Chalta Hai to service culture and getting things done (or rather the lack of it).  Whilst Chalta Hai may be an Indian affliction, India is not alone.  I’m going to suggest that corporate IT suffers from Chalta Hai.  There’s an industry mindset that success is just getting stuff delivered.  Success is “it’ll do”.  Mediocrity is a sufficient goal.  To hell with the experience: who cares what the users think, it’s all about delivering functionality and features.  We’re happy if “it’s all right”.  No-one has the desire to hear the business say “I love this!”

Let’s bring some magic into the enterprise.  Let’s introduce a new acceptance criterion to our requirements: that the stakeholder who signs it off says “I love this”.

Hey usability dude, join the devs (Usability rant part 3)

The last couple of posts here and here have been critical of boutique usability companies that offer user testing services.  In the final part of my rant, here’s an alternative approach.  Seed the development teams with someone who has a usability background (probably with a psychology degree), someone who can create as well as critique.  Align them to Marketing (or ‘the business’) but physically locate them in IT.  To be successful they must be a passionate advocate for the business vision (hence business alignment), but unless they have a daily relationship with the developers (on hand to support immediate design decisions) they will fail.  (The only way to do this is for them to be part of the development team: same building, same floor, same location.)

Eschew the formal usability testing process in favour of more informal guerrilla usability testing.  In an agile project treat this as akin to the showcase.  Test sketches and wireframes prior to the development iteration, and put working software, once it has sufficient UI coherence, in front of users as well as business stakeholders, with the developers as observers in the process.  One person can cover multiple workstreams.  This is a far better use of a limited budget than commissioning ad-hoc reports too late in the day.

Are you prepared for the dip?

So you are refreshing or rebuilding your website.  You are introducing new functionality and features, and sweeping away the old.  You’ve done usability testing of your new concepts and the results are positive.  Success awaits.  You go live.  And it doesn’t quite go as planned.  You expect the numbers and feedback to climb from day one, but they don’t.  What you should have expected is the dip.

[Figure: illustration of the dip]

In October 2009 Facebook redesigned the news feed.  Users were up in arms, groups were formed and noisy negative feedback abounded.  A couple of years back the BBC redesigned their news page: “60% of commenters hated the BBC News redesign”.  Resistance to change is almost always inevitable; if you have a vocal and loyal following you can expect much dissent to be heard.  What is interesting is what happens next.  Hold your nerve and you will get over this initial dip.  We’ve seen a number of projects recently where this phenomenon occurred; numbers drop and negative feedback is loudly heard.  But the dip is ephemeral and to be expected.  The challenge is in planning for it and setting expectations accordingly.  Telling your CEO that the new design has resulted in a drop in conversion rate is going to be a painful conversation unless you have set her expectations that this is par for the course.

Going live in a beta can help avert the full impact of the dip.  You can iron out issues and prepare your most loyal people for the change, inviting them to give feedback prior to the go-live.  Care must be taken with such an approach in selecting the sample who will participate in the beta.  If you invite people to ‘try out our new beta’, with the ability to switch back to the existing site, you are likely to get invalid results.  The ‘old’ version is always available and bailing out is easy.  Maybe they take a look and drop out, returning to the old because they can.  Suddenly you find the conversion rates of your beta falling well below those of your main site.  Alternatively use A/B testing and filter a small sample to experience the new site.  That way you will get ‘real’ and representative data to make informed decisions against.  Finally, don’t assume that code-complete and go-live are the end of the project.  Once you are over the dip there will be changes that you can make to enhance the experience and drive greater numbers and better feedback.
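
If you do go the A/B route, the ‘filter a small sample’ step can be as simple as deterministically bucketing visitors on a stable identifier.  Here is a minimal sketch in Python, assuming each visitor carries a first-party cookie value; the function and experiment names are illustrative, not from any particular tool:

    import hashlib

    def variant_for(user_id: str, experiment: str = "new-site",
                    sample_pct: int = 10) -> str:
        """Hash the visitor into one of 100 buckets and route a fixed
        percentage to the new experience, the rest to the old."""
        digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
        bucket = int(digest, 16) % 100
        return "new" if bucket < sample_pct else "old"

    # The same visitor always lands in the same bucket, so their
    # experience is consistent across visits.
    print(variant_for("visitor-42"))

Because the assignment is deterministic and there is no ‘switch back’ link, this sidesteps the bail-out problem that an opt-in beta suffers from.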

Just ignore what they say

What customers say they like and how they behave are not the same thing.  Don’t always trust what you are told; use data and real insight to drive decisions that have major commercial implications.

So there was a skyscraper, a banner and numerous MPUs across the website.  A survey panel was set up and the results came back.  The agency briefed the team: “Your customers really don’t like all the ads you have on the page”.  The message was reiterated in focus groups.  Ditch the ads.  This would be a painful decision; despite customers not liking them, the ads were still delivering reasonable revenue.

The organisation was striving to be more customer-centric; if the advertisements were degrading the customer experience then removing them would be a price worth paying.  And so they were switched off.

The result?  Nothing.  Except lost revenue.  When the data was analysed, customer volumes remained the same.  There was no difference in the successful completion of customer goals.  Switching the ads off had no impact on customer behaviour on the site; when asked, customers said they didn’t like the ads, but what they said and what they did were different things.

The moral: if you are going to use emotion and what customers say to make commercial decisions, consider A/B testing with real data before making wholesale changes.
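
To make ‘real data’ concrete, a two-proportion z-test is one simple way of checking whether a difference in conversion rate between the two variants is more than noise.  A sketch in Python; the visitor and conversion counts below are purely illustrative, not figures from the case above:

    from math import sqrt
    from statistics import NormalDist

    def two_proportion_z(conv_a, n_a, conv_b, n_b):
        """Compare conversion rates from an A/B test."""
        p_a, p_b = conv_a / n_a, conv_b / n_b
        p = (conv_a + conv_b) / (n_a + n_b)           # pooled rate
        se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))  # standard error
        z = (p_a - p_b) / se
        p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided
        return z, p_value

    # Ads on vs ads off, 10,000 visitors each (made-up numbers).
    z, p = two_proportion_z(conv_a=180, n_a=10_000, conv_b=175, n_b=10_000)
    print(f"z = {z:.2f}, p = {p:.3f}")  # p well above 0.05: no real difference

A result like this is the statistical shape of the story above: customers say they hate the ads, but the behavioural data shows no effect.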

Test Driven Design

I recently worked with a client where one of our deliverables was a set of wireframes that illustrated how pages would be laid out and how the UI would work.  We were quite pleased with the results: there was some complex AJAX-based functionality that provided a really immersive, goal-orientated experience that looked like it would make finding products easy and enjoyable.  Testing the initial wireframes with users was an enlightening exercise, and demonstrated that the wireframes we had developed were not yet ready – users were not able to fulfil the goals they were set.  More worrying, some of the complex functionality we were introducing just did not work (some of the navigation, filters and sorts were confusing; just presenting information on a single page would suffice).

Usability testing often gets discussed and is a good intention, but all too often budgetary or time constraints mean it never happens.  The user testing I refer to here impacted neither.  We did our testing in a meeting room, the customer sitting at one end with a facilitator, and the team watching on the projection screen in the same room.  We used a talk-aloud protocol, walking through static PowerPoint wireframes that were linear in their presentation, following the ‘happy path’ to realise the customer goal.  Someone took notes as we went through the wireframes (in the notes section at the bottom of the PowerPoint deck).  It was quick and dirty but produced results.  After a couple of sessions we were seeing things that we, too close to the design, had missed.  Changes to the wireframes took a few hours and allowed retesting the following day.  Indeed we made some quite significant changes to the user interaction model.  When we re-tested the wireframes the improvements were evident.  The feedback was more positive; there were fewer blank faces, less confusion, and “I’ve no idea what to do next” was never uttered.  This was true iterative design, in cycles that took a few hours.  Compare this to the days it would take if code were involved.

Where does this fit into the agile way of delivering software?  In the agile/lean zealots’ passion for (and impatience with) delivery, and their (dogmatic?) assertion that anything but code (working software) is waste, they lose focus upon what is really important: overall product quality.  Product quality is not only zero/minimal defects and meeting the business requirement, but also delivering something that is usable and delightful to use.  Developers may do Test Driven Development, but this is based on the assumption that what they will code is right.  TDD should start earlier in the process: Test Driven Design.  It takes time to write your tests up-front, but we know it to be a good thing.  So why not design the user interface (wireframes) and test that up front?
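
For anyone unfamiliar with the rhythm being borrowed, this is what tests-up-front looks like in code: the test is written first, fails, and only then is the implementation written to make it pass.  The names below are illustrative.  Testing wireframes before the build is the same discipline applied one level up:

    # Written first: this fails until filter_products exists.
    def test_filter_returns_matching_products():
        products = [{"name": "kettle", "colour": "red"},
                    {"name": "toaster", "colour": "blue"}]
        assert filter_products(products, colour="red") == [products[0]]

    # Written second, driven by the test above.
    def filter_products(products, **criteria):
        return [p for p in products
                if all(p.get(k) == v for k, v in criteria.items())]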

How are you managing the change?

To the development team ‘change’ relates to scope and requirements within the project, but change runs far deeper than that.

A question that I am often asked is how you manage business change on agile projects.  ‘Release regular and often’ is an oft-quoted mantra, but what does that mean to the business when rolling new software out across a large, multi-site organisation?  How do you manage the piecemeal introduction of new technology, features and functions to hundreds or thousands of people, many levels removed from the project, across remote offices and geographical locations?  How do you ensure the recipients of the new technology rapidly adopt it and accept the change, even when change is occurring every few months?

What are the financial and human performance implications of each new release in terms of training, productivity and morale?  What is the overall burden on people of frequent change?

The reality is that it is not unusual for projects deemed successful by IT and the immediate business team to ultimately fail when released to the broader organisation. Effective change management can be even more important when an organisation adopts agile software delivery.

An analogy as an example.  If I am expecting a screwdriver and you give me a cross-headed screwdriver when I really want a flat-headed one, I am going to be unhappy.  The core team may have prioritised the cross-headed one first for good reason, and a flat-headed one may be coming just round the corner, but if you don’t deliver to my expectations I am going to be unhappy.  Worse, I am likely to become resistant to future change and less likely to cooperate willingly with the uptake of future releases, even if they do start to deliver to my needs.

Keep it on the shelf
The first point is that regular and often does not necessarily mean release to production for the entire organisation or marketplace.  Running a number of internal releases, keeping them on the shelf until a complete and marketable product is ready, is a strategy often employed.  Significant value can be accrued by getting tested and working software into a pre-production environment, held “on the shelf” awaiting a full release.  This may be a UAT environment where a limited number of stakeholders test the functionality in an ‘as-live’ environment.  Or it may be a beta release to a small, selected number of interested people (e.g. a ‘friendly user trial’).  This can often pay dividends, with usability issues and minor gripes being picked up and addressed before a major roll-out.

Communication

Let’s assume that the team wants to roll out the application early and often to the whole target population.  Critical to the success of managing the business change is communication.  It is important to manage expectations in a timely and appropriate manner.  Explain what the upcoming release will do and, more importantly, what it will not do (and when it will do it).  Keep all stakeholders informed of the project progress (setting up a project blog can be a cheap and easy way of letting interested people know of progress); Yammer may be another way of broadcasting updates and information.  Having a release countdown can also prepare stakeholders for the change.  The techniques can be googled; the important thing is to communicate and manage the expectations (and be ready for inbound questions and comment after go-live).

Adaptable user interface
It is not unusual for the core team to drive for as much functionality as possible in the first release, considering UI enhancements as ‘nice to haves’ and consigning them to later releases.  This is a false economy.  Consider the cost of training and lost productivity through a hard-to-use interface.  Now multiply that across multiple releases that focus upon utility before usability.  Delivering a first release that is self-contained and compelling will go a long way to driving organisational buy-in of the new application and greater acceptance of future change.  (Jeff Patton writes some great stuff on using story maps to explain what the system should do.  Using these will help focus on complete and useful slices through the application rather than random features that are perceived to be of value but do not make a coherent product.)

A new user interface, however well designed, will inevitably take time to learn the first time it is used.  The challenge with each subsequent release is to introduce functionality and interactions that leverage the user’s existing mental model of the application, building upon what has already been learned.  Starting with the end state (wireframes that articulate the final application) and then trimming out features, fields and controls to represent each notional release can be a good way of ensuring a UI that will scale as new functionality is added.

Agile organisation
Ultimately the most successful way of introducing agile is to build a beta culture, with everyone as agents of change across the whole organisation.  More importantly, change becomes a cycle of learning and continuous improvement.  And here I’ll borrow a most excellent graphic from David Armano.  David compares what he calls conventional and unconventional marketing, but the parallel with software development is obvious.  His iterative cycle is “plan-design-launch-measure”, which is not a million miles away from the lean philosophy of “plan-do-check-act”.  And critical to the journey is the learning cycle between iterations.

Innovation and the idea approval index

I’ve written in the past about organisations setting up test-and-learn capabilities, and how frameworks such as Ruby on Rails make this so much easier.  Sadly, even with the best will in the world it is not unusual for these internal skunk-works outfits to fail.  Scott Berkun’s recent post gives a good reason why: they score highly on the Idea Approval Index, where the higher the number, the harder innovation becomes.

If you are going to be serious about test and learn it is essential to remove barriers to its success, and that means removing layers of bureaucracy and sign-off.  Identify the number of approvals you assume will be required, then plan how to eliminate them.  What strategies can you put in place to keep stakeholders in the loop, but at a distance, so they can’t kill the innovation before it’s given oxygen to breathe?  Realise that what is being tested is a “beta” and market it as such.  Give clear terms and conditions and use controlled environments (for example, testing on staff within the internal network): anything to prevent it being subject to the usual legal / compliance / architectural constraints.
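
The ‘testing on staff within the internal network’ idea can be gated very cheaply.  A minimal sketch, assuming internal traffic comes from a known private range; the address range here is a placeholder:

    import ipaddress

    INTERNAL_NET = ipaddress.ip_network("10.0.0.0/8")  # placeholder range

    def show_beta(remote_addr: str) -> bool:
        """Serve the beta only to requests from inside the network."""
        try:
            return ipaddress.ip_address(remote_addr) in INTERNAL_NET
        except ValueError:
            return False  # malformed address: fall back to the public site

    print(show_beta("10.12.0.5"))    # True  -> staff see the beta
    print(show_beta("203.0.113.9"))  # False -> everyone else sees the current site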

“Let’s pretend” user testing

How do you test the usability of the software you’ve built?  At one end of the scale you could run full usability tests with a sample of users, giving them a scenario and letting them loose, trying to realise pre-set goals.  At the other you could do guerrilla usability testing, just grabbing people and informally doing a usability test at your desk.  You could get the experts in to do a usability review.  They’ll typically use a bunch of usability heuristics and provide you with a report critiquing the application against the heuristics.  But what if you don’t have the budget for usability testing, can’t get hold of any representative users, and don’t want to wait for an expert’s report?  How about playing “let’s pretend”?  Remember playing doctors and nurses as a child?  Well, do it again, only this time be Users and Computers.  And you’ve got the role of the user.

Create a persona, a pen portrait of your customer.  Picture how they act and behave.  Maybe you know someone like them, or have seen someone like them in the supermarket.  Now ask yourself some questions:

How do they behave in their daily interactions with others?
What does their life look like?
What experience do they have with computers, and with products like yours?
How much time have they got?
How are they feeling?
Why are they using your product?

Spend a few minutes internalising the persona.

Now turn to your application and imagine you are the persona, using it for the first time.  Give yourself a task to complete (make sure the task is something real that your persona would look to achieve, not a goal you think the application will help them achieve – there is a subtle difference).  Try to forget all you know about the application (easier said than done) and be the novice user.  Walk through the process, speaking aloud (in your head will do).  Focus upon “I”, not “the user”.

“I’m looking for this, I’m looking for that.  I do this, I do that.  I can’t find this; I don’t think that makes sense.”

I want to check my balance.  Hmmmm, there are three “balances” here; I’m not a banker, what’s the difference between Running balance and Cleared balance?

I owe my mate some money for the golf holiday, he’s given me his bank account details, I want to transfer some money into his account… huh?  I can’t do that.  What do I do?  Pay a bill?  He’s not really a Bill, is he…  Etc etc.

You don’t need heuristics for this – many of the issues the heuristics would help uncover should come out using this method anyway (speed, ease of navigation, speaking the user’s language, etc.).  If your persona is having difficulty, you can probably multiply the usability issues for a real user.

This approach works for a quick and dirty review of an application’s usability.  And if you document your experience, make sure you preface it with a description of your approach.  Someone who is close to the development of the application, reading your first-person critique, is rightly likely to think you are an arrogant so-and-so.  I speak from experience.

Need for speed

What a cool tool Firebug is.  It lets you get under the covers of web pages, to see what is going on.  One feature is the ability to measure the page size and that of individual components on the page.  This blog post comes in at 294kb.  If I think back to the early days, that number would make me shiver.  In the days of 56k dial-up a page that size could take more than a minute to download.  Things are different now.  In the UK, broadband penetration is now greater than 72% (source [xls]) and the page appears almost instantaneously.
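
If you want the same number without opening the browser, a rough page-weight tally is easy to script.  A sketch in Python, assuming the requests and beautifulsoup4 packages are installed; it counts the page plus its images, scripts and stylesheets, though it won’t catch assets loaded by JavaScript:

    import requests
    from bs4 import BeautifulSoup
    from urllib.parse import urljoin

    def page_weight(url: str) -> int:
        """Total bytes for the page and its static assets."""
        page = requests.get(url, timeout=10)
        total = len(page.content)
        soup = BeautifulSoup(page.text, "html.parser")
        refs = [t.get("src") for t in soup.find_all(["img", "script"])]
        refs += [t.get("href") for t in soup.find_all("link", rel="stylesheet")]
        for ref in filter(None, refs):
            total += len(requests.get(urljoin(url, ref), timeout=10).content)
        return total

    print(f"{page_weight('https://example.com') / 1024:.0f} kB")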

So do we still need to optimise our pages for size and speed of download?  Having run hundreds of usability sessions, I found the key gripe about the web used to be speed – slow download times could effectively kill a proposition.  But that’s not such an issue with a fat connection.  Not unless it’s a Hollywood movie you are downloading…

But what about the remaining 28% or so who still use dial-up – that is not an insignificant minority to exclude.  I assume that people who read this blog are more likely to be using a fast connection, so I don’t feel I am excluding anyone there.  But as large pages become acceptable, with rich content and streaming media, should we spare a thought for the dwindling dial-uppers?