Last year we ran a short workshop with the user researchers who assess government services against the digital by default service standard.
We wanted to find out if there were any patterns to what we were seeing in assessments, and any things we should be focussing on to help services do better.
The first thing we agreed was that the quality of user research across government is continuing to improve.
Most service teams are doing great research with the people who will use their service, learning about their circumstances and understanding their needs. And we see fewer problems with teams producing usable services.
But there are some common problems that stop service teams from passing on the criteria that relate to user research.
In the long tradition of blog posts setting out 7 or 10 good or bad habits in some area, in this post I'll describe the common reasons that teams don't pass, and compare them with what we see from the best teams.
Start with needs, start in discovery
When some teams come to their assessment, it's clear that they did limited user research in their discovery phase, and very little continuing contextual research through alpha and beta.
As a result they have a shallow understanding of the people who will use their service, and struggle to explain how research findings have influenced the design of the service.
These teams do not pass point 1, Understand user needs.
By contrast, the best teams start researching with users as soon as they get together in discovery, and continue through every phase. Everyone in the team is involved in user research, and they're keen to challenge their own assumptions. Their research is broad and inclusive.
As a result it's easy for the team to show how the things they've learned about users have informed the design of their service.
See:
Doing research in the discovery phase
Understanding the problem is key to fixing it
Researching with all your users
At their assessment, some teams struggle to describe the diversity of circumstances, behaviours and motivations among the people who will use their service.
These teams research and test their service with a narrow range of users. They give little consideration to users who will need assistance, and often pass responsibility for those users to other parts of their organisation.
They do not pass either point 1, Understand user needs, or point 12, Users succeed first time.
The best teams work hard to understand the diversity of the people who will use their service, and the different needs that result.
They consider people with access needs at every stage, and test their service with users with a broad range of impairments and who use a variety of assistive technologies. They include people who lack digital skills, design support options based on needs, and fully test support with users.
See:
How we recruited people with low/no digital skills on Carer's Allowance
Research with visually impaired users
Including users with low digital skills in user research
Researching shape and scope
When teams struggle to explain how they took important design decisions based on user needs, we often find that they have accepted a shape and scope for their service dictated by a senior stakeholder.
These teams consider few ideas, and start building the first, most obvious solution they think of. They quickly narrow their user research to the usability of that solution. If the service doesn't fit into the relevant parts of their users' lives (business, family, education, health care, travel and so on), the team find it hard to respond.
These teams are unlikely to pass point 1, Understand user needs.
The best teams research to understand the full context of their service. They shape their service to fit. They explore lots of ideas, and if something isn't working, they're prepared to abandon significant parts of their solution and try out radically different alternatives.
See:
Doodles on why doing user research in the right place matters
User research, not just usability
Researching the end to end service
Most teams tell the assessment panel about problems that users have finding existing services, choosing the right service, or understanding what they need to do next.
But some of these teams do not research or test their new service end to end – their testing starts at the start page and stops at the completion page.
These teams do not pass point 10, Test the end-to-end service or point 14, Encourage all users to use the digital service.
The best teams start from the context of their users' lives, and consider how their users will discover their service, and know it's the right thing for them to do. They test their service to make sure that users achieve the right outcome for them, with support if necessary.
See:
Crossing the Atlantic: integrating with US Global Entry
Working with identity providers as they become certified companies
Good user research skills embedded in the team
We still see some service teams who depend on other parts of their organisation to do research for them. As a result, user research is poorly integrated into their agile process: too much of the research time and money goes into large-scale quantitative studies, and too much into opinion research.
Some service teams lack the research skills that the complexity and scale of their service demand. They struggle to create good research plans, choose inappropriate methods, and execute them poorly.
These teams do not pass point 2, Plan for ongoing user research and usability testing.
The best teams have a strong user researcher, backed up by designers, analysts, and others, who understand and take part in research activities. They use a variety of techniques, and try innovative approaches to overcome blockers.
These teams can help their organisations to develop junior researchers, and in return have the support of an effective community of practice.
See:
User research for government services: 8 strategies that worked for us
User experience is a team sport
So, you’re going to be a user researcher: top tips to get you going
We're here to help
If you need help to improve your user research, and fit it better into agile service design and delivery, please get in touch at john.waterworth@digital.cabinet-office.gov.uk.
Keep in touch. Sign up to email updates from this blog. Follow John on Twitter.
Feature image from Digital by Default Service Standard in the Government Service Design Manual.
5 comments
Comment by Rachel posted on
Excellent. Thank you.
Comment by Joshua Mouldey posted on
Really useful summary! Would also be great to have similar summary gov.uk blogs about where teams fall short on other aspects of the standard (technical, design, data, etc).
Comment by Adrian Murphy posted on
Very useful.
Comment by Jodie Smith posted on
It's great having this round up of user research work. However it would be even more useful if you could be transparent and publish the service assessments. I mean, since you publish all the code on github - the day to day work of every coder at GDS. Tax payer has already paid for them. So why are even the conclusions of each user research exercise kept secret?
Comment by John Waterworth posted on
Hi Jodie. We publish the results of the service assessments here: https://gdsdata.blog.gov.uk/all-service-assessments-and-self-certification/