You’ve changed, design
In the 20 years I have been working on digital projects, design has changed quite a bit, or perhaps I have changed. Back in the dark ages of digital design (I think we started out calling it multimedia), designers would pick up a brief and work on ideas for a big chunk of time, usually a number of weeks. Once happy, we would show the work to our clients and have a chat about how suitable the design was. If the client was happy, we would throw the designs over the wall to a team of wizards to build.
This was usually where things got tricky. The work was nearly always too big for the project timescales (not enough time was left for the developers), or it wasn't fully understood, and the delivered work rarely reflected the designs shown to the client. Moreover, delivery was the end of the project. The work would drop off a cliff and we wouldn't hear much about it afterwards. There was no feedback and rarely any iteration.
This way of working was closely related to print design processes. Once something is printed, what would be the point of discussing change? Print paradigms informed our early ways of thinking about digital design, just as books informed early web concepts.
The upshot of this way of working was bad working relationships, bad work, unhappy clients and, most likely, frustrated customers. Design needed to change along with its materials.
Cue the hero, entering stage left, Agile
Agile is a great way to work. Multi-skilled teams now plan stories together every day and talk about work as it happens. This results in effective collaboration, which is not only fun but hey… it's good for design too. Talking with the right people about how we will deliver a story means we can get it right more easily.
Iterating on stories helps us build fidelity over time and make improvements as we go. However, any story we deliver might pass the agreed acceptance criteria and still not be the right solution. We might have delivered what we planned, but how do we know if it's what our customers want?
How do we know what’s working and what’s not working?
We can look at data and figures to understand what people do and how they behave when they visit our app. Analytics can tell us WHAT is happening, but not so much WHY, which leaves us staring at the data and the design, trying to second-guess where the disconnect is.
So the best way to find out what’s working and what’s not working is to watch people using your app.
Moreover, some clever people say that if everyone in your organisation watches at least two hours' worth of people using your product every six weeks, it is the closest thing to a silver bullet for improving your service.
Sounds pretty simple, right? Not so much. I have found that in a lot of organisations, particularly bigger ones, it's hard to convince everyone of the value of doing this.
Moneyhub app customer research / usability studies
There is a difference between what people say and what they do. As Jakob Nielsen describes, useful customer research comes from observation rather than verbal feedback.
When we ask people to tell us what they are thinking, social psychology comes into play and can skew what is said: people have a tendency not to say quite what they are thinking. Consider the recent political polls versus the actual results in both the UK and the US!
Having said that, quantitative studies, interviews and card sorting (a method used to help design or evaluate the information architecture of a site) can sometimes be useful for building a picture of a customer's mental model. Any research session should ideally mix observation with plenty of listening.
You only need five people
Nielsen also found that you can uncover around 85% of usability issues by observing just five people using your product. You might give them a task to complete, like a sign-up or a purchase; it could be anything you want feedback on.
At Moneyhub Enterprise we want to learn as much as possible about how our app is working, so we run regular research sessions with different users. The lovely folk at People for Research help us find the right people to test the app, and the aim of these studies is to see how well it suits their needs. We show participants the app (and perhaps any new features we want to try out) and ask them to say their thoughts, and any questions they have, out loud. We also observe during these sessions, encouraging users to follow their noses and explore whatever they find interesting, which gives us the chance to ask at the end of the session why they did what they did.
We are also lucky enough to have an in-house research lab, which means we can run sessions here. What's even better is that people from across the business can sit in the observation lounge and watch the sessions too. This often leads to valuable conversation about the experience, and plenty of note-taking, throughout the day. It brings people from different teams together and opens up the design discussion to everyone in the business. For me, the best outcome of these sessions is what we see and learn. It's usually eye-opening, and despite my twenty years of design experience I am still surprised by how people behave and the conclusions they draw as they experience the world.
Designing an app without customer research is like cycling with your eyes closed, only dreaming about the road ahead. Checking in with customers along the way keeps your eyes on the changing road and helps you stay on course.
Written by Steve Murphy (@Murfalizer), UX / UI designer at Moneyhub Enterprise