Being a user researcher means I’ve had lots of experience researching digital products. So when my team at the Government Digital Service (GDS) decided to prototype a face-to-face interaction, I had to get creative about how we evaluated it.
The team I was working on was developing tools to provide standardised data about government services, giving people working in government an overview of how those services were performing.
For people in government to trust and rely on the data, it was important that the data we collected was complete and accurate. We therefore knew we’d have to do the hard work to make it easy for our data suppliers to provide us with that data.
In this post, I share how carrying out research on face-to-face interactions contributed to this work.
Making things better for data suppliers
To make the process of providing us with the data easier for our suppliers, we initially created a form to collect performance metrics. However, when we spoke with data suppliers and the performance analysts on our team, we found that they were spending a lot of time explaining and agreeing the metrics for the data form. It often took multiple emails and calls to reach agreement.
To make this process better, we realised we needed to think about the steps data suppliers were going through before they used the form, to make sure they were supplying data and metrics that accurately reflected their service, and to reduce the back-and-forth later in the process.
A face-to-face approach
As part of our aim to make things better for suppliers, we wanted to test a kick-off meeting: a face-to-face meeting where our team and the data suppliers would discuss things like the service name, the definition of the metrics and when the data could be provided.
To test this approach, we needed to carry out research into these face-to-face meetings, to work out if they were meeting the needs of our team and our data suppliers.
Here’s what we did.
Validating face-to-face meetings
We decided to evaluate the meeting in 4 different ways:
Observing the session
In the session, I took notes on what was discussed and how we reached agreement on the metrics.
Observing how people were feeling was just as important as what was said. Did they appear nervous or uncomfortable with what was going on? Or were they relaxed and confident in reaching agreement?
I also observed what artefacts were used. Did data suppliers show their digital user journey to help explain their service? Did they show their databases? And what impact did this have on the session?
Running post-session interviews
Immediately after the sessions, I talked to the data suppliers about their experience of having a kick-off meeting.
We discussed what their expectations of the session were, and how this matched with reality. We talked about what questions they had, and whether there was anything they felt was unclear.
Measuring the quantity and content of emails with the Service Performance team after the session
We also tracked the emails we received from data suppliers after the session, measuring not only how many there were but also what they contained.
We felt it was important to capture any questions or amendments the users had after the session.
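As a purely illustrative sketch of how a tally like this might be kept (the `Email` structure, the supplier names and the content categories are all invented for the example, not part of our actual process):

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Email:
    supplier: str
    category: str  # for example "question", "amendment" or "confirmation"

# Hypothetical log of post-session emails, filled in by hand as they arrive
emails = [
    Email("Service A", "question"),
    Email("Service A", "amendment"),
    Email("Service B", "question"),
]

# Tally emails per supplier (quantity) and per category (content)
per_supplier = Counter(email.supplier for email in emails)
per_category = Counter(email.category for email in emails)

print(per_supplier)  # Counter({'Service A': 2, 'Service B': 1})
print(per_category)  # Counter({'question': 2, 'amendment': 1})
```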
Validating the data
Our final measure was to assess the quality of the data that we received. I worked closely with the performance analysts on the team to validate the data we were getting. To do this, we checked the data sets for errors and missing data.
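To give a flavour of the kind of check this involved, here’s a minimal sketch in Python, assuming the data arrives as a simple table. The column names and the valid range are invented for the example; the analysts’ real checks were specific to each service’s metrics.

```python
import pandas as pd

# Hypothetical data set from one supplier; the columns are illustrative only
data = pd.DataFrame({
    "month": ["2017-01", "2017-02", "2017-03"],
    "transactions": [1200, None, 1350],     # missing value in February
    "completion_rate": [0.87, 0.91, 1.40],  # 1.40 cannot be a real rate
})

# Count missing values in each column
print(data.isna().sum())

# Flag values outside the plausible range: a completion rate must be 0 to 1
errors = data[(data["completion_rate"] < 0) | (data["completion_rate"] > 1)]
print(errors)
```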
All of these different data sources gave us a rich picture of the kick-off meetings and helped us decide whether they were a solution worth developing further.
What we learned
Researching the kick-off meeting taught us how valuable face-to-face contact is for data suppliers. Meeting in person increased our understanding of the service, which in turn made it easier for us and data suppliers to apply the metrics.
Observing the sessions taught us how important the order of the agenda is for reaching agreement. Our original order meant crucial decisions weren’t made upfront, which made it difficult to reach consensus.
Thinking about the whole service
Carrying out research on the face-to-face meetings also taught us that it’s important to think about the whole end-to-end journey that data suppliers go through when they give us data.
There’s no point in researching and designing a brilliant data collection form if there are challenges earlier in the process that prevent suppliers from giving us that data. We need to think of the process as a whole, from end to end.
The same is true of any service that involves a face-to-face or phone element as well as a digital touchpoint. As user researchers, we need to look at these interactions and think about how they relate to the full service.
Do you have any tips on researching face-to-face interactions or researching across an end-to-end service? Let us know in the comments.
1 comment
Comment by Jude
Interesting read, and definitely something we as researchers should be doing. I've recently been researching a citizen-facing digital service I'm working on in the context of the full end-to-end journey, and have found that there's no point in fixing one slice of the journey if, when the citizen falls out of the digital service, the rest of their experience is poor. We've been working with stakeholders to develop and implement non-digital solutions to other parts of the journey.