In July last year, I joined the cross-government user research training community to facilitate an ‘Introduction to User Research in Government’ online course. The course teaches researchers and practitioners from non-user research backgrounds how to apply good research practice in a government context. It also offers avenues to network and connect with other professionals.
The training is offered by a community of researchers who are passionate about improving access to education and upskilling for user-centred design professionals. Over the years, it has evolved to accommodate the changing needs of user groups as well as the effects of the Covid-19 pandemic. Currently, it’s delivered every few months remotely and is available to all professionals across government. Here are some things I have gained from being a facilitator, and some ideas about what you could learn from it too.
Running training is not a solo sport, and collaboration is at the heart of it from the moment you sign up. When I expressed my interest in facilitation, I was matched with another user researcher to co-facilitate the course.
We collaborated on all the subsequent activities, from arranging training invitations to agreeing on which modules each of us would run. I also joined the user research training community on Slack and got to know a group of researchers who all share the same passion for enabling the wider central and local government community to understand and apply good user research practice in their work.
Ahead of the course, I had a chance to asynchronously complete a facilitation course that explained good practice and had helpful tips to make sure the course ran smoothly.
The facilitation course covered logistics around effectively setting up the course in a remote setting, as well as tips about active listening and fostering a safe, open discussion forum.
Getting involved gave me insight into the cross-government community, something I wanted to explore as a user researcher in the Parliamentary Digital Service. Before joining as a facilitator, I hardly knew anyone in the cross-government community and my understanding of its structure was quite hazy. Through running the course, I had the chance to speak to professionals from across the organisation who helped me piece the puzzle together and better understand how the government community is built. The trainees also offered insightful comments on their day-to-day work and its unique user research considerations, like informed consent or safeguarding.
Being a facilitator doesn’t mean knowing everything! The structure of the course allows plenty of time for open and interactive discussions where trainees have a chance to share their experiences employing different research methods or overcoming challenges. As a result, at the end of the course, we gathered a comprehensive list of tips and resources as supplementary learning for everyone.
As a facilitator I learned how to best frame training for the specific audience. The course was a chance to champion good user research practice, particularly in low maturity teams. A big consideration was to be as useful as possible to practitioners who may be from non-user research backgrounds too.
The open discussion and feedback were particularly useful in understanding what attendees got from the training. I’ve brought this thinking back into the Parliamentary Digital Service as part of my role, to provide training and development opportunities for other teams.
As we continue improving the training we deliver and growing our team, we’d love to hear from you if you’re interested in joining future courses as a facilitator and helping shape the work that we do.
In May this year we were tasked with understanding how government farming advisers could encourage farmers and land managers to aim for more ambitious environmental outcomes when applying for funding from the Countryside Stewardship scheme. With our backgrounds in social research (Dan) and policy design (Becky), we decided to include an ethnographic element in our research. This method would allow us to learn things about our participants which might not be strictly within our scope (in this case Countryside Stewardship), but from experience we knew these kinds of insights could prove fundamental in understanding our research question and the policy area more deeply. And from a user-centred perspective, these kinds of insights are critical to make sure we are designing the right thing for people.
Ethnography is a qualitative research method which involves observing people’s behaviours in their natural environments. It aims to capture a detailed and rich account of a social setting from the participant’s perspective.
Our research involved shadowing government farming advisers on their farm visit days. This allowed us to observe in real time how they went about advising farmers and land managers, hearing about issues that came up, and observing their interactions more generally. We followed our shadowing visits with online interviews and journey mapping, so that we could also ask specific questions regarding their role and experience advising on Defra’s Countryside Stewardship scheme.
Ethnographers can take either an active role, participating in the research setting, or a passive role, observing from a short distance. Our shadowing approach allowed us to both observe advisers’ interactions with farmers and also be part of the farm visit itself. This included the possibility of asking some questions and experiencing other aspects of the adviser’s day, including travelling between rural sites and their lunch break.
Ethnographic research can be recorded in a range of ways, including writing up field notes, keeping ethnographic diaries, capturing photographs and videos, as well as collecting any artefacts that provide insight into participants’ lived experience. These activities tend to be led by the researcher, but can also be led by the participant. For example, asking participants to keep a video diary using their mobile phone. In this instance, we asked permission to take photos during our visits. We were keen to include this visual element in our research to give a richer, more compelling level of context that can inspire more empathy when sharing findings with people who weren’t part of the research.
We’ll share our top 3 benefits of bringing an ethnographic, lived experience approach into your user research and service design practice.
An ethnographic approach helps to build participant trust and participation in the research and design process. This was particularly important for us because we followed up our research activities with two ‘codesign’ or ‘ideas’ workshops, where we invited government farming advisers to share their ideas on how they thought we could increase the ambition of Countryside Stewardship applications. Despite it being a very busy time of year for farming advisers, we recruited enough advisers, including those from our research sample. Everyone who attended the workshops was keen to participate and share their ideas, because we had already established meaningful, trusted relationships through our ethnographic research.
That said, trust can be made or broken during research. And given the more intimate nature of ethnographic, lived experience research compared to other methods, it’s particularly important to consider the ethical implications. During our research, advisers allowed us into their busy working lives and farmers welcomed us into their homes. We found it important to be open, honest and transparent with our participants. In practice, this meant managing expectations about the outcomes from our work, and taking the time to play back both the research findings and outcomes to participants. We also made sure we listened to and documented participants’ perspectives that went beyond the scope of our research, and passed these on to relevant teams. You can read more about how to build trusted relationships during design research in Ideo’s Little Book of Design Research Ethics.
The GDS Service Manual has guidance to help you plan and carry out contextual research and observation. If you’re new to lived experience research, social researchers can also be a great support advising on how to bring an ethnographic approach to your research. You can contact the Government Social Research (GSR) Profession to find social researchers in your department. You’re also welcome to contact Dan (daniel.barwick2@defra.gov.uk) directly if you’re keen to draw on his expertise and experience with ethnography.
Policy Lab (where Becky worked previously) has been pioneering the use of ethnography in policy making across government for several years now. You can read about how they used film ethnography for their lived experience research on the Independent review of Children’s Social Care, and watch the ethnographic videos they produced for the National Landscapes Review and the Windrush Lessons Learnt Review.
Becky Miller (senior service designer) and Dan Barwick (user researcher) work on Defra’s Farming and Countryside Programme.
User research operations (ReOps) teams, which support user researchers / UX researchers to do their best work, are increasingly common. If you are considering setting one up, you probably want to learn from those who have trod the path before. I certainly did, when I established research operations at the Government Digital Service (GDS) in March-September 2022, joined in late July 2022 by the brilliant Uzma Razaq who runs it to this day. I learned a lot from Dr Salma Patel's blog about setting up research operations at NHS Test and Trace, Emma Boulton's work on pillars of user research, and resources from the global ResearchOps Community. This blog post summarises, in turn, my key lessons from what went well, and what did not, setting up research operations at GDS.
It’s important to know whether you’re making things better and be able to show this to others.
On day one of research operations at GDS, I asked its user researchers to complete a short benchmarking survey. Questions were derived from prior reflective work by the GDS user research community and should probably vary by organisation. I asked:
Average scores from the rating questions and word clouds of the ‘three words’ responses powerfully communicated how researchers initially felt about research operations. I repeated the survey five months later, and could show all ratings improving, and the most common words shifting from “siloed” and “inconsistent” to “helpful” and “visible”.
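As a minimal sketch of the kind of summary this benchmarking produces – the field names and responses below are invented for illustration, not the actual GDS survey questions or data:

```python
from collections import Counter

# Hypothetical survey responses: 1-5 ratings plus 'three words' free text.
responses = [
    {"rating_support": 2, "rating_clarity": 3, "words": ["siloed", "inconsistent", "slow"]},
    {"rating_support": 4, "rating_clarity": 4, "words": ["helpful", "visible", "organised"]},
    {"rating_support": 3, "rating_clarity": 2, "words": ["siloed", "helpful", "unclear"]},
]

def average_ratings(responses):
    """Mean score per rating question, for before/after comparison."""
    keys = [k for k in responses[0] if k.startswith("rating_")]
    return {k: sum(r[k] for r in responses) / len(responses) for k in keys}

def word_counts(responses):
    """Frequency of 'three words' answers - the input to a word cloud."""
    return Counter(w.lower() for r in responses for w in r["words"])

print(average_ratings(responses))
print(word_counts(responses).most_common(3))
```

Repeating the same survey months later and comparing the averages and most common words gives a simple before-and-after picture of how sentiment has shifted.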
If I had my time again, as well as measuring the sentiments of researchers I’d also try to benchmark the time and costs that research incurred.
Research operations functions should be focused on the needs of their users (user researchers), while considering the needs of their organisation and constraints like regulatory compliance.
At GDS, the benchmarking survey gave an initial sense of problem areas. However, to get a richer understanding of how our user researchers operated and the pain points they encountered, I ran a workshop at a community event. In small groups, I had researchers map the stages in a typical research project, record the pain points they encountered in each, and then vote on those that most limited their impact. I later sorted these issues into the eight pillars of user research and affinity mapped them within these. Arranging them by number of votes gave me a prioritised list of problems and problem areas.
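The sort-and-rank step can be sketched roughly as follows – the pillar names and pain points here are examples I have made up, not the workshop's actual output:

```python
from collections import defaultdict

# Illustrative pain points from a mapping workshop: each is tagged with the
# pillar it was sorted into and the number of workshop votes it received.
pain_points = [
    {"issue": "No shared consent templates", "pillar": "Governance", "votes": 7},
    {"issue": "Recruitment takes weeks", "pillar": "Recruitment", "votes": 9},
    {"issue": "Findings hard to find later", "pillar": "Knowledge management", "votes": 5},
    {"issue": "Incentive payments unclear", "pillar": "Recruitment", "votes": 4},
]

def prioritise(pain_points):
    """Group pain points by pillar, then rank pillars by total votes."""
    by_pillar = defaultdict(list)
    for p in pain_points:
        by_pillar[p["pillar"]].append(p)
    return sorted(
        by_pillar.items(),
        key=lambda kv: sum(p["votes"] for p in kv[1]),
        reverse=True,
    )

for pillar, points in prioritise(pain_points):
    print(f"{pillar}: {sum(p['votes'] for p in points)} votes")
```

The output is a prioritised list of pillars, with the individual pain points preserved underneath each, which is essentially what the affinity mapping produced on paper.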
Research operations is very broad, and there are many models of how it can operate. Researchers need clarity on what you’re there for, and what you’re not, so they know how to work with you and you can avoid spreading yourself too thin.
After the survey, workshop, and other engagement with user researchers at GDS, I worked with GDS’s lead user researchers to set out on one slide what our user research operations function was for. It described our initial purpose, central missions, and our first steps to achieving them. If I had requests that sat outside it, it allowed me to explain why I couldn’t currently support them, what I was prioritising instead, and why. That said, we were clear it would respond to the changing needs of our research community.
I presented this to all GDS user researchers when it was produced and sought feedback. I could have referred to it more frequently afterwards to maintain researchers’ understanding of our purpose.
Some areas of research operations, like knowledge management, are long games. To get buy-in from your researchers and other stakeholders, get some quick wins under your belt in the first few weeks.
At GDS this included sorting out permissions issues on research shared drives and collating details of all tooling in one place. In your organisation, it will likely be something different.
Research operations functions can only achieve so much by beavering away themselves. Ask for support from researchers, leads, and other relevant functions in your organisation such as data protection, and do what you can to help them help you.
For the initial few months, GDS’s ‘research operations team’ was just me, supporting 50-60 user researchers. Going it alone wasn’t an option. I worked with other parts of the organisation such as the highly supportive Privacy Office and Information Management teams. I organised community activities such as a (thrilling) ‘research data spring clean’ to get support in sorting out our shared drives. And for some research operations activities, especially those successfully being conducted by others already, I was explicit about only performing a coordination role.
Coordinating work being done by others, especially informally by user research communities, is hard. I had a go at implementing a “user research community contributions” model, with a big Trello board, activities linked to researchers’ performance objectives, and drop-ins with the ReOps team. It was overcomplicated and needed a bit of a rethink.
Engaging with user researchers shouldn’t happen once at the start and never again. To build trust and be proactive rather than reactive, be as visible as you can.
At GDS this meant attending and running sessions in user research community workshops and show and tells, putting in intro chats with new joiners, attending meetings of each directorate’s user researchers, meeting fortnightly with our lead user researchers, and many a Slack message.
Research operations professionals tend to like order, processes, guidance, and documentation. We don’t want chaos and non-compliance. Equally, don’t go overboard. With the best will in the world, if your guidance for new starters is 80 pages long, it probably won’t be read, and it almost certainly won’t be well understood.
I tried to keep this in mind. When I revised our guidance on managing user research data, I resolved ambiguities raised by researchers while making it two-thirds the length, and it was well-used. But I also spent time re-working wiki pages that the analytics later showed nobody much visited, and which conversations suggested few knew existed.
There are so many brilliant research operations people out there. There are communities of them, such as the global ResearchOps Community, and that in UK central government (find them on UK Government Digital Slack). All those I’ve met to date are a supportive, helpful bunch.
The approach I took at GDS, and indeed the lessons in this post, draw heavily on the conversations I had with some of them. A huge thanks to those I spoke to at the Department for Education, Defra, HMRC, the Home Office and Hackney Council, among others.
So reach out, learn, and contribute. Maybe even write a blog post once you’re some ways in. Good luck!
In government, we sometimes do user research with colleagues - other civil servants or contractors who work for government departments. These people can be users, even if they’re in your department or team.
Internal research could be on topics ranging from service or product work, like the GDS service manual, to surveys and interviews about experiences, like the reorganisation of departments.
If you’re consulting the users of your service, product, or project about their thoughts, opinions, and feelings, or testing how they behave or think in certain situations, it’s very likely to be internal research where ethical considerations must still apply.
Internal research is worthy of serious ethical consideration - just as external research is.
Although information about companies or public authorities is not subject to GDPR, employees’ information can be personal data. For example, when you’re doing internal research you usually collect an employee’s name, their role, their email address, and other information that can identify them. This is personal data.
Internal research also gathers the opinions, feelings, and experiences of employees, and can be on controversial topics, such as the reorganisations of departments, internal processes, and internal teams or services. As researchers, we have a duty to ensure that our research participants can’t be identified, even if they’re colleagues of ours.
Some processes involved in internal research ethics may need to be more rigorous than those used for external research. For example, it is far more likely that internal participants could be recognised from recordings of sessions – they could even be identified from quotes. They could also be identified through 'jigsaw identification', when someone can be identified from a description of their work. This could make participants feel uncomfortable, especially if you have discussed sensitive or controversial topics.
Even if you know your participant and they’re a colleague, it’s best practice to get consent from them to use their personal data, to take notes in research sessions, and to audio or video record sessions.
You should explain in plain language:
As this blog post on how to carry out user research with colleagues points out, it’s also a good idea to give participants a choice of interviewer if you have multiple people conducting the research.
You should also give internal participants the option to individually specify if they give consent to:
In addition, you should consider that participants may also feel obliged or pressured to take part in your research, especially if their manager or someone senior to them has put them forward for it. It’s a good idea to make it clear in your research information sheet and consent form that:
If you’re putting calls out for participants in public places like Slack, Teams, or Google Spaces, people might be able to see who’s volunteered to take part in your research.
When you’re recruiting participants and booking time in their calendars, other colleagues may also be able to see if they’re taking part in your research.
There are several things you can do to maintain participants’ anonymity while you’re recruiting:
Be aware that some participants may be in the office during your session. If your participant is surrounded by colleagues, this could affect what they say. To help mitigate this, you could book a room for them in their building. Or, you could ask them if they work at home some days and book the session for that day.
What have you learned during internal-facing research? Are there any tips you have for others who might be conducting internal research? Please add your comments below.
Being able to communicate easily with your participants is a foundation of effective user research. So much of the job relies on being able to interpret behaviour, social context and intonation. That’s why we like working closely with participants, where we can communicate freely.
However, there are times in every user researcher's career where seamless communication isn't so straightforward. One example of this scenario is when the researcher and participant speak different languages. As daunting as it can feel, there is no need to worry, and all is not lost. You can still get highly robust and meaningful insights from the research. In fact, I’ve found that when I can’t rely on understanding what the participants have said to me, I’ve been able to draw more actively on my other research skills. This is my story of how I approached that very scenario.
The team I was part of worked with governments globally to help with various aspects of service transformation and user centred design. In March 2022, I was working on a project in partnership with the Thai government. For this Thai agency, user centred design and user research were new practices for governmental work. They were interested in seeing how other international governments use user research to inform their work and improve designs by directly involving users.
And so, for this purpose, I was asked to do in-person user research on a procurement-based Thai web service.
This research itself was to be based in Thailand, primarily using the Thai language - a language not known to me. And so, these factors meant the project involved a daunting combination of:
At the outset, this all was a little frightening. In hindsight, it was hugely motivating.
Here are my reflections on what made this unusual circumstance a smooth experience.
In research projects like this, there can be many moving parts. As the researcher you juggle trying to:
It’s ok not to have full control or get everything perfect. My role was to have general oversight of all the moving parts and see that they were in balance. Setting up each research session so that the team clearly knew their roles – who was greeting the participants at the venue, facilitating the session, note-taking, translating, observing, and even taking a break – meant that everybody knew what to expect, and the research was stronger for it. This made the juggling act much easier.
Generally speaking, user research can be weird for participants. They often don’t know what to expect, wonder if they are doing it ‘right’, or might feel ‘watched’. User researchers work hard to inform participants every step of the way, create trust and help the participant feel safe and comfortable throughout.
Added to this, in this Thai-based project, there was an extra element of participating in research facilitated by an ‘outsider’ – me. And I was very conscious of this.
Ahead of each session, I made sure to re-iterate with the translator that they would be the first to greet the participant upon entry, welcome the participant and introduce their role as translator. I felt it was important to do this before introducing me and my role to the participant.
This worked well for several reasons:
And don’t forget to amp up your soft skills! Being extra attentive to the participant goes a long way; showing that you are listening, eye contact, nodding, smiling and being receptive to the participants’ emotional responses always helps. But only if it’s culturally appropriate, of course.
I also learned that it’s ok to do research on subject matter or in languages you are not familiar with.
You don’t need to know every pathway or every user scenario ahead of the research. I started by making sure I was familiar with the core flow that the research was looking to investigate. In this case, the core flow was the pathway to search for relevant governmental contracts.
Usability testing then showed if users diverted from the ‘ideal’ flow, and at what step the diversion happened. Even without knowing what the content described, it was clear that the flow wasn’t intuitive. I then looked to understand their thought processes with follow-up questions via the translator.
The users were showing me directly the steps they took, their behaviours and their reactions. I was able to get very valuable qualitative insights even when I didn't necessarily understand what was being said.
In not sharing a common language with participants or the subject matter, in a way, observation became the purest form of usability testing. When users got stuck, I couldn’t unconsciously bias them in any way. The users genuinely knew more than me about the site being tested and the language used.
By embracing my naivety, I could approach the sessions from a place of genuine curiosity. In this way, participants understood that I was truly there to learn from them and their experiences. This helped build greater rapport between the participants and me, especially when they would confidently take the laptop and say ‘here, let me show you’.
In the end, not knowing how to speak or read Thai was in fact valuable for getting insightful findings for this project. It also reminded me of the amazing power of observing behavioural cues in qualitative user research - the shaking leg that hints at restlessness, for example.
I hope you have found this blog helpful. We’d also love to hear from you!
Are there times when you had to consider very carefully how to build rapport with your participants? Please leave a comment below to share your tips and experiences.
We’re excited to relaunch this month with our vision of who and what this blog site is for. We’d like this to be a space where we share ideas and approaches that will inspire user researchers in government.
We’ll aim to publish posts that will help all the members of our community - from someone who’s just started out to those with more experience - assess and extend the scope, scale and impact of their research.
We’re looking for posts about tools, methods, processes, challenges and successes. We are open to suggestions for topics that will engage us as we build our careers and move forwards on our research journeys.
We will also welcome occasional case studies that show how research works in different settings and with different user groups.
The blog is managed by an editorial panel. The role of the panel is to encourage blog posts from across the government research community, and to guide submissions through the publishing process onto the blog site. We are very happy to talk with prospective authors about ideas for blog posts, and coach less experienced writers through developing a post.
In brief, the submission process is:
When your blog post is ready for publication, you’ll need internal sign-off from your communications team, as well as from the editorial panel, so it might be worth sharing your idea with them at the same time as contacting us.
Louise Petre is the lead user researcher in the Central Digital and Data Office (CDDO). She moved into user research after working as a designer and developer for many years. Louise is particularly interested in using mixed methods in user research practice. Her favourite blog posts are: Why we care more about effectiveness than efficiency or satisfaction and 10 tips for working with your user researcher.
Nichole Browne is a senior user researcher at the Driver and Vehicle Standards Agency (DVSA) in Nottingham. Before joining the Civil Service, she worked in the not-for-profit sector, researching with disengaged and hard-to-reach users. Nichole is particularly interested in maximising the impact of research within an organisation and has launched a research insight library at DVSA. One of her go-to blog posts is about the eight pillars of user research.
Natalie Baron is a lead user researcher in the Government Digital Service (GDS). She started off her research career as a community researcher in local government. Natalie is particularly interested in participatory approaches in research. One of her favourite blog posts is about doing user research with colleagues.
We have some excellent posts planned for the upcoming months, so please subscribe to the user research in government blog. And of course, do get in touch with any ideas you have for future posts by filling in this short survey.
We shared how this approach underpinned the first rapid evaluative research the team conducted. This was to shape new data publishing guidance in relation to the imminent retirement of the performance platform.
This post is about the second and third tracks of user research and what we are planning next.
In our team, research tracks with a focus on the exploratory element will often probe a problem space that is little understood. Specifically, this is about acknowledging the wider context of how people interact with the Service Manual and Service Standard. Understanding how these artefacts make a difference to users on a day-to-day basis means that we can ensure guidance stays relevant and useful.
We collaborated with an economist at the Central Digital and Data Office (CDDO) to identify if we could measure the value of using the Service Manual and Service Standard.
We were keen to engage an economist in a joint research exploration so they could witness the stories and scenarios unfolding. Hearing first-hand how our end-users experienced the Service Manual helped them bring the value proposition to life.
Meanwhile, the team had been drafting a hypothesis model to represent the ways people use the Service Manual and Standard – and the roles they want it to perform. Our hope was that the close collaboration would enable us to interpret and articulate any new first-hand research evidence better. How did our products provide value? What benefits had they created? What impact had they had on service teams?
Our Service Manual usage survey had given us a group of people who were happy to be involved with further research. From this group, we selected participants with a range of roles and experience from local and central government departments and external suppliers.
Defining a research approach was tricky. We assumed it would be difficult to get stories of service failure through a direct line of research questioning. It would be equally unlikely that people would be able to quantify the benefits of using the Service Manual and Service Standard.
Therefore, we opted to ask participants to focus on a recent situation when they used the Service Manual to help them to achieve something. We wanted to hear about the kinds of information people wanted and expected to get from the manual. How did the manual (and adhering to the standard) support them with everyday practice in the context of their role and remit?
A scenario helped us get to these things quicker, as both functional and emotional needs came out. It also made it easier to gather both positive and negative experiences, so we could learn more about how the standard and assurance process is perceived.
We learnt that users of the Service Manual and Service Standard value:
Our participants also talked about a variety of functional and emotional benefits from using the Service Manual and Standard.
These translated into 2 overarching areas of economic benefits.
The economic benefits are being considered in light of the overall service assessment and assurance process. Several factors outside of Service Manual use could affect accuracy.
The crux of the approach is comparing teams who use the Service Manual and consult the Standard against those who do not. Over a set length of time, you could measure the time between a service assessment and the service going live, or the productivity of new starters. You could also compare services that failed on user needs related points in spend controls and service assessments and then revisited their approach.
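As a sketch of how the first of these comparisons might be tallied – the group names and figures below are invented purely for illustration, not real measurements:

```python
from statistics import mean

# Hypothetical cohorts: days between a service assessment and the service
# going live, for teams that used the Service Manual and teams that did not.
days_to_live = {
    "used_manual": [45, 60, 52, 48],
    "did_not": [80, 95, 70, 88],
}

def compare(groups):
    """Mean days to go live per group, plus the gap between the two groups."""
    means = {name: mean(values) for name, values in groups.items()}
    means["difference"] = means["did_not"] - means["used_manual"]
    return means

print(compare(days_to_live))
```

In practice the factors outside Service Manual use mentioned above would need to be controlled for before reading anything into such a gap.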
Overall, this exploratory research helped us understand that using the Service Manual makes best practice more common, so digital projects will be of higher quality. This leads to better outcomes for citizens who engage with digital government services.
In a more targeted effort to engage with all our user groups, we used workshops to hold structured and collaborative discussions with users of the Service Manual. This also helped us start to develop how community contributions to guidance could work in practice.
We know there is no substitute for getting regular exposure to user feedback first-hand. We ran a Services Week session called ‘Tell us what to improve about the Service Manual’ to do exactly that.
We listened to what the attendees said they’d like to improve about the Service Manual – which has helped to validate and prioritise our backlog – and we also asked how they’d like to work with us to help make those improvements. This has set an initial direction for our contribution model.
Services Week also gave us an opportunity to hear from service teams about work they’ve done to build services at rapid speed in response to coronavirus (COVID-19) and Brexit. These teams had to overcome unprecedented challenges, so we were keen to capture how they worked and what they learnt to inform new guidance to help teams in similar circumstances in the future.
More than 60 people from across the public sector attended a meetup to scope the first version of guidance on ‘making services in an emergency’. The meetup combined presentations with a practical workshop; the insights were synthesised into a rough first draft of the guidance.
The draft is being shared with relevant service teams to invite feedback before further iteration and publication. This is the start of a two-way, proactive relationship to collaborate on the guidance that we hope to repeat on other topics.
These areas of focus have contributed to a rich body of raw research data which we will be actively drawing from for our work on the Manual and Standard. We have learned a lot as a team and these insights are informing guidance and our roadmap.
In the coming months, we’ll be working with our users and service communities to define how community contributions might work, using what we have learnt so far as a starting point.
We are also continuing to proactively identify opportunities for collaboration across the standards and assurance teams at CDDO to plan projects like the creation of personas and mapping of user journeys through assurance.
If you are interested, join the #servicemanual channel on the cross-government Slack to hear more about our ongoing work.
In an earlier blog post, we talked about how the Service Standard and Service Manual team is taking a community-led approach.
We described how we reviewed past qualitative and quantitative research sources and work outputs relating to the Service Manual and Standard. We also created a usage survey that was shared widely across government. With 195 responses, this told us how, when, by whom and for what the Service Manual is used.
The combination of these findings gave us a glimpse into the main problems that our end-users are facing and has subsequently informed more focused user research activity.
This is part 1 of 2 blog posts documenting the user research approach and various methods we have recently used to inform our guidance.
In this post, we will provide an overview of the approach we have adopted to further understand the needs of Service Manual users. We will share how elements of this approach have manifested into the first of 3 different tracks of user research. Another post in the coming weeks will cover the remaining 2 research areas.
We use insights from user research and other feedback channels to inform changes to the guidance in the Service Manual. However, we know that each user’s individual experience and context are unique to them. Their needs are shaped by a lot of different factors.
Creating guidance to meet the needs of such a variety of people and situations is hard. But it starts with learning more about them. We know that the more diverse and broad the range of users we speak to, the more inclusive, relevant and useful the guidance will be. And, depending on the topic or area we’re investigating, we’ve used several different research methods to learn about our users, their different contexts and their needs.
The different kinds of research we’ve been doing can be characterised as:
Our research activities usually include elements of all 3 of these, though each has more focus in one area.
The Performance Platform was used by service teams to publish data about their services and was retired in March this year. To prepare for the retirement, we needed to create new guidance for the Service Manual about meeting point 10 of the Service Standard – define what success looks like and publish performance data – once the Performance Platform was no longer available.
Our first challenge was that there had been options prepared, but no decision made about how teams should be publishing their service data instead of using the Performance Platform. We had to work closely with subject matter experts to understand the options, which would work better for users, and what the process would be.
Our content lead drafted an initial version of the guidance to use as a stimulus to gather the details we were missing about the process from the relevant specialists. We shared a collaborative document of the draft guidance for comment, set up a Slack channel for open discussion, proactively gathered feedback and held daily check-ins with the relevant people working on the Performance Platform retirement.
It took time to set up these tools and channels for collaboration, but it meant that we could easily get feedback on the many rapid iterations we went through as we developed the guidance.
When the guidance was still at an early stage, we started doing remote usability and comprehension testing of the draft with service teams. We knew we’d need to test with teams with varying capability levels to make sure the guidance, and the process itself, would work for all of them. However, we purposefully spoke with mid- to high-level capability teams first as we knew they would be forthcoming with any significant gaps or issues with the guidance.
Using the draft as a stimulus helped us to test the proposed solution with service teams at the same time as testing the guidance itself. It enabled us to answer questions such as:
We made changes to the guidance to make clearer what would now be a manual process (hosting their own data, and the steps to gather, format and publish user satisfaction data).
We then repeatedly tested the guidance with more teams, including less digitally mature ones, to check that the iterations improved comprehension.
Once we published the ‘data you must publish’ guidance, we made sure there were ways for people to contact us about it – including a Slack channel and a dedicated email address – to get feedback and address any queries, something we continue to support now.
Our research also revealed the need to improve the mandatory guidance on key performance indicators (KPIs). We heard that teams want to spend their efforts publishing data that is both meaningful and useful. Not all of the mandatory KPIs were deemed applicable, and in some cases, it wasn’t feasible to get the specific data to publish these KPIs on their service.
But teams collect other meaningful data, so we curated a blog post on how service-specific performance indicators can improve a service, highlighting the value of using custom KPIs alongside the mandatory ones. We’re also looking into further improvements to the guidance about this in the Service Manual.
This user research track was our first taste of consulting subject matter experts on guidance, while using familiar methods and formats: the usability test and the contextual user interview.
In the next blog post, we’ll share 2 examples of using contextual interviews and workshops to get user feedback, looking for opportunities to build on and evolve the value of the Service Manual and Standard.
If you are interested, join the #servicemanual channel on the cross-government Slack to hear more about our ongoing work.
Sending messages to research participants is one of the most important aspects of user research.
Some user researchers do all their own messaging. Others work in organisations with a dedicated ResearchOps team. Whatever your situation, at some point you'll probably need to send people interview appointments, survey invites or panel alerts.
In this post I’m going to talk about why GOV.UK Notify is a great tool for sending messages and discuss 6 of the benefits for user researchers.
GOV.UK Notify is a platform for public servants who need to send emails, text messages and letters.
Notify is used by over 2,700 services across 697 UK public sector organisations. These include central government, local government, the NHS, police and schools.
Over 1.2 billion emails, 119 million text messages and 3.7 million letters have been sent using Notify. Whether it be electoral registration reminders, updates on state pension eligibility or critical coronavirus (COVID-19) messages, Notify is helping public sector organisations meet a huge variety of user needs.
If you are a researcher working in the public sector, or on behalf of the public sector, you can send messages with Notify. All you need is a public sector email address.
Go to www.gov.uk/notify and register an account. You can then set up a trial service and start sending test messages.
Once you’ve had a play and you’re familiar with Notify’s features, you can make a request for your service to go live. There is very little paperwork and you’ll most likely be up and running the same day.
Here are 6 reasons why Notify can help user researchers.
Notify’s message templates use design patterns that have been extensively tested with users. They’re accessible and include high quality public sector branding. Users feel confident that messages sent with Notify are genuine and official.
There are no costs or limits on the number of emails that you can send with Notify.
Central government researchers can send up to 250,000 text messages for free per year with Notify. Those working in local government have a 25,000 free text message allowance. It costs 1.58 pence (plus VAT) for each text message you send after you've used your free allowance.
This means that using Notify to send text messages will be free for most user researchers.
If you need to send letters the costs are much lower than the market rate and include printing, packing and postage.
Notify allows user researchers to send messages in bulk. To send a batch of messages at once, follow the instructions to upload a CSV file with a list of contact details.
You can also send messages automatically using the Notify API.
Both approaches are less prone to error compared to sending messages in bulk from your own email account.
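For those curious how the API route might look, here is a minimal sketch using Notify’s official Python client (`notifications-python-client`). The template ID, API key and CSV column names are placeholders for illustration – substitute the values from your own Notify service and template.

```python
import csv
import io


def build_personalisations(csv_text):
    """Turn a CSV of participant details into per-message payloads.

    Assumes the first column is 'email address' (as in Notify's bulk
    upload format); the remaining columns map onto placeholders in
    your email template.
    """
    reader = csv.DictReader(io.StringIO(csv_text))
    return [
        (row["email address"],
         {key: value for key, value in row.items() if key != "email address"})
        for row in reader
    ]


def send_batch(api_key, template_id, csv_path):
    """Send one templated email per row of the CSV via the Notify API.

    Requires: pip install notifications-python-client
    """
    from notifications_python_client.notifications import NotificationsAPIClient

    client = NotificationsAPIClient(api_key)
    with open(csv_path) as f:
        for email, personalisation in build_personalisations(f.read()):
            client.send_email_notification(
                email_address=email,
                template_id=template_id,
                personalisation=personalisation,
            )
```

Keeping the contact list in one CSV and the message wording in a Notify template means researchers can reword an invite without touching code, and the same batch logic works for reminders or follow-ups.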
User researchers don’t have to worry about using their own email address or mobile phone number. This reduces data privacy risk, as they don’t have to handle participants’ personal data directly. It also means they don’t have to share their own contact details.
Turn on inbound messaging to let participants reply to the messages you send them. Then sign in to Notify to read their replies.
The Notify domain is on the ‘allow’ lists of all major email providers. This means emails sent by researchers are less likely to be blocked or marked as spam by the receiving email domain.
So, using surveys as an example, invites are more likely to reach people when sent with Notify than when sent out using a survey management tool.
Notify is highly secure, and is continually assessed against best practices in data storage and processing.
Using Notify in itself will not guarantee that your messages will be effective. Taking recruitment as an example, you still need to think carefully about how to communicate the purpose of your research, why you want people to participate and the potential incentive for them to do so.
Notify doesn’t store contact details, so researchers will also need to securely process and store participant data using their existing tools.
There are many potential ways that Notify can be used for user research. We’re interested in hearing how you’re using it, so feel free to share your own tips, comments and questions in the comments section.
While user researchers are adapting to conducting research remotely, there is still a challenge about how to do this with people with access and digital support needs. The user research community at the Government Digital Service (GDS) recently held a cross-government studio workshop to look at how to meet this challenge. The workshop included presentations from an expert panel made up of:
The panel’s talks covered topics including:
This was followed by a lively and engaged Q&A session, and our community learned and shared lots from the event. This first blog from the event draws out some of the most important tips and lessons we explored for working remotely with people with access and digital support needs.
Recruitment was a real issue for many of the researchers present. We heard about contacting third party organisations as a way of recruiting the right mix of people and finding people with access or digital support needs; for example, specific charities working with particular groups.
Prior to the session, make sure you personally get in touch with the participant. Most people prefer a phone call, while others might find an email more suitable.
The purpose of this is to:
Allowing for extra time at the beginning of the session for the technical setup is a must.
Use the information you learned about the participant to tailor the user research session according to their preferences.
When selecting the communication platform, try to use one the participant is already familiar with. This will avoid additional time and stress for the participant to install or interact with applications that are new to them.
Due to functionality constraints and GDPR compliance, you might find that the platforms you can use are restricted. In this case, speak with the participant and offer to assist them over the phone with preparing to join the session, including installing any applications, using step-by-step instructions. Try to be patient, supportive and encouraging when you do this. It is best to go through this ahead of the session, for example the day before.
Consider running the session when it is most convenient for the participant. Some people might prefer to do the session at certain times of the day when they can focus better.
Also, sitting in front of a computer or phone for an hour doing tasks can be physically and mentally demanding. You will need to build in breaks, shorten the session or split the research over several shorter sessions.
People with access needs are more likely to have specific settings and technology installed on their devices to help them complete tasks online. It is therefore even more important to run a usability or content test using the device the participant is likely to use.
The most popular communication platforms allow participants to share their screen on desktop, tablet and mobile devices, with little or no interference with assistive technology.
Chat messaging during the session can interfere with assistive technology that people are using, such as screen readers.
Also, on some communication platforms the chat messages can be in a hidden window during a video call. When people are not familiar with this functionality they might find it difficult to navigate to, and it might distract them from their task. If there are things that you need to send to the participant, such as a specific page or link to a prototype, try to do this ahead of the session via email or text messages.
As a researcher, try to use your camera so that the participant can see you. By encouraging people to have their camera on, you will be able to see body language and actions that you otherwise might miss. This will also help with creating a better connection and empathy with the participant.
Chances are, some sort of problem will unfold as you are running the research. The internet connection might crash; your participant might struggle to connect into your session; you might be interrupted by other people around you. Or as it happened to me, the participant’s laptop might be attacked by a virus halfway through the usability session. Try not to panic! It’s all fine! This has happened to many other people before and it’s ok. Just be aware that it could happen, and build in extra time to allow for things to go wrong!
You can still rely on phone interviews for reaching out to people with low digital skills or access needs.
However, it will be very challenging to conduct a usability test remotely with participants with low digital skills. One way of addressing this might be to check whether there is someone in their network who could assist them during the session. To make sure you are not excluding anyone, it would be best to prioritise this group of people for future face-to-face sessions.
A powerful way to help your team build empathy for the end users is to facilitate their experience of observing research sessions. Our community has noticed that by hosting the sessions online, team members are finding it easier to join in. It is still good to limit the number of observers so that they do not overwhelm the participants.
At the beginning of the session, introduce the observers to the participant. After that, ask the observers to mute themselves and turn off their cameras.
The workshop also discussed the benefits of running remote accessibility persona testing prior to usability testing with end users, and the capabilities of different communication platforms. These will be discussed in a forthcoming blog.
We hope you’ve found this blog informative and useful for running remote research for people with access needs. Please share your own tips, comments and questions in the comments section!