
https://userresearch.blog.gov.uk/2021/06/03/how-we-use-user-research-to-inform-service-manual-guidance/

How we use user research to inform Service Manual guidance

Categories: Research stories, Service manual, User research

Laptop screen showing a user research session in which members of a service team review guidance on data they must publish

In an earlier blog post, we talked about how the Service Standard and Service Manual team is taking a community-led approach. 

We described how we reviewed past qualitative and quantitative research sources and work outputs relating to the Service Manual and Standard. We also created a usage survey that was shared widely across government. With 195 responses, this told us how, when, by whom and for what the Service Manual is used.

The combination of these findings gave us a glimpse into the main problems our end users were facing, and has since informed more focused user research activity.

This is part 1 of 2 blog posts documenting the user research approach and various methods we have recently used to inform our guidance.

In this post, we will provide an overview of the approach we have adopted to further understand the needs of Service Manual users. We will share how elements of this approach have shaped the first of 3 different tracks of user research. Another post in the coming weeks will cover the remaining 2 research areas.

Our users are really varied

We use insights from user research and other feedback channels to inform changes to the guidance in the Service Manual. However, we know that each user’s individual experience and context are unique to them. Their needs are shaped by a lot of different factors.

Creating guidance to meet the needs of such a variety of people and situations is hard. But it starts with learning more about them. We know that the more diverse and broad the range of users we speak to, the more inclusive, relevant and useful the guidance will be. And, depending on the topic or area we’re investigating, we’ve used several different research methods to learn about our users, their different contexts and their needs. 

Three elements of our user research approach

The different kinds of research we’ve been doing can be characterised as:

  • exploratory – interviews with our users to help us understand the wider context of their service assurance journey
  • consultative – engaging with our user groups to listen to their expertise and to learn from their experiences and challenges
  • evaluative – observing people using our guidance to test if it’s easy to understand, relevant and accurate

Our research activities usually contain elements of all 3 of these, though with more focus on one of them.

Rapid evaluative research to shape new data publishing guidance

The Performance Platform was used by service teams to publish data about their services and was retired in March this year. To prepare for the retirement, we needed to create new guidance for the Service Manual about meeting point 10 of the Service Standard – define what success looks like and publish performance data – once the Performance Platform was no longer available.

Our first challenge was that options had been prepared, but no decision had been made about how teams should publish their service data instead of using the Performance Platform. We had to work closely with subject matter experts to understand the options, which of them would work best for users, and what the process would be.

Our content lead drafted an initial version of the guidance to use as a stimulus to gather the details we were missing about the process from the relevant specialists. We shared a collaborative document of the draft guidance for comment, set up a Slack channel for open discussion, proactively gathered feedback and held daily check-ins with the relevant people working on the Performance Platform retirement.

It took time to set up these tools and channels for collaboration, but it meant that we could easily get feedback on the many rapid iterations we went through as we developed the guidance. 

When the guidance was still at an early stage, we started doing remote usability and comprehension testing of the draft with service teams. We knew we'd need to test with teams of varying capability levels to make sure the guidance, and the process itself, would work for all of them. However, we purposefully spoke with mid- to high-capability teams first, as we knew they would be quick to point out any significant gaps or issues with the guidance.

Using the draft as a stimulus helped us to test the proposed solution with service teams at the same time as testing the guidance itself. It enabled us to answer questions such as:

  • how were service teams currently publishing their data?
  • what might be the impact on day-to-day work once the Performance Platform was retired?
  • did people understand what the new guidance asks them to do?
  • were there any parts of the guidance that were unclear or confusing?
  • were there any unanswered questions raised by the draft?

We made changes to the guidance so it was clearer which parts of the process would now be manual (teams hosting their own data, and the steps to gather, format and publish user satisfaction data).

We then tested the guidance repeatedly with more teams, including less digitally mature ones, to check that each iteration improved comprehension.

Once we published the 'data you must publish' guidance, we made sure there were ways for people to contact us about it – including a Slack channel and a dedicated email address – to get feedback and address any queries, something we continue to support now.

Our research also revealed the need to improve the mandatory guidance on key performance indicators (KPIs). We heard that teams want to spend their effort publishing data that is both meaningful and useful. Not all of the mandatory KPIs were considered applicable to every service, and in some cases it wasn't feasible to get the specific data needed to publish them.

But teams collect other meaningful data, so we published a blog post on how service-specific performance indicators can improve a service, to highlight the value of using custom KPIs alongside the mandatory ones. We're also looking into further improvements to the related guidance in the Service Manual.

This user research track was our first taste of consulting with subject matter experts on guidance, while using familiar methods and formats: the usability test and the contextual user interview.

In the next blog post, we will share 2 examples of how we used contextual interviews and workshops to get user feedback, in search of opportunities to build on and evolve the value of the Service Manual and Standard.

If you are interested, join the #servicemanual channel on the cross-government Slack to hear more about our ongoing work.


2 comments

  1. Comment by David Murphy

    I like what you're doing in being willing to change websites based on user reactions. Have you considered being more proactive: advertising or promoting the changes you're willing to make on users' behalf?

  2. Comment by Alison Foley

    Thanks for leaving a comment, David.
    In terms of advertising the changes we are making - we aim to work in the open as much as possible. So if you work in government you can join the #servicemanual channel on the cross-government Slack - this is where the team shares its weekly updates.

    Not forgetting - there is also an option to sign up to be notified about changes to the Service Manual. Each of the main topic pages on the Service Manual has a 'Get notifications' call-to-action (on the right of the page). This is where you can add your email address to get updated when guidance within that topic has changed.

    So I'd say yes, we are considering more proactive ways to encourage conversations about changes to the Service Manual.
    Especially as the team is starting to think about how we can enable subject matter experts, service teams and communities of practice to all contribute to guidance - in some shape or form.

    We regularly publish what we are doing (in more depth) on the Services in Government blog (https://services.blog.gov.uk/). These posts prompt useful and valuable feedback. Feel free to chat further with us at community-led-service-manual-and-standards@digital.cabinet-office.gov.uk
    We will be posting a team update blog in the next week or so...!