When building a digital-by-default service like Carer’s Allowance, we’re continuously designing, testing and releasing changes to the live service to improve it and better meet user needs.
But how do we measure whether the changes we’re making are improving the service or not? We conduct user research and analyse service data.
In this post I’ll share how we’ve been approaching this and what we’ve learned.
We're moving away from detailed analytics reports
Analytics reports have their place but they don’t help agile product teams make informed decisions while moving fast. We prioritise and release changes every 2 weeks and need to know quickly how the latest changes are performing.
Key metrics help us measure changes
Key metrics are data points that change over time depending on how well the service is meeting user needs. They relate directly to how easy, quick, or understandable the transaction is for users.
The key metrics for Carer’s Allowance are:
- Digital take up
- Time to completion
- User satisfaction
- Completion rates
You can see live data for our key metrics on the performance platform.
If we make a change to the service, we can immediately see whether it has affected one or more of our key metrics.
For example, if user satisfaction drops by 10% we have immediate feedback and the team can respond. We can look back at the latest release to understand what happened, using research to help us understand why.
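That release check can be automated along these lines. This is a minimal, illustrative sketch rather than our actual tooling: the metric names, figures and 10% threshold are invented for the example, and it assumes you can take simple before-and-after snapshots of each metric.

```python
# Compare key-metric snapshots taken before and after a release, and flag
# any metric whose relative drop exceeds a chosen threshold.
# (All names and numbers here are made up for illustration.)

BEFORE = {"digital_take_up": 0.62, "user_satisfaction": 0.90, "completion_rate": 0.81}
AFTER = {"digital_take_up": 0.63, "user_satisfaction": 0.79, "completion_rate": 0.82}

def flag_drops(before, after, threshold=0.10):
    """Return metrics whose relative drop since the last release exceeds the threshold."""
    flagged = {}
    for name, old in before.items():
        new = after[name]
        change = (new - old) / old  # relative change since the last release
        if change <= -threshold:
            flagged[name] = round(change, 3)
    return flagged

print(flag_drops(BEFORE, AFTER))
# Here user_satisfaction has fallen by about 12%, so it would be flagged
# for the team to investigate with user research.
```

A check like this only tells you *that* something moved; as the post says, research is what tells you *why*.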
Here's an example
We’ve focussed a great deal on one key metric in particular: the time taken to complete a transaction.
By spending time talking to people about the service, we observed the following user need:
As a carer I need it to be quick and straightforward to make an application for Carer’s Allowance because my time is limited.
We know from user research that if someone is with the person they care for, a transaction longer than 20-30 minutes can be difficult to complete.
We’ve been making regular changes to the service to improve average time to completion. For example, reducing the steps needed to navigate through the transaction and removing questions that we didn’t need to ask.
As a result, the average completion time has fallen from 35 minutes to around 27 minutes over the last 9 months.
We know that there’s still room for improvement.
You can’t measure everything
It’s great to be able to measure changes we’ve made to the live service, but I’ve realised that you can’t measure everything in this way.
When you’re developing a product or service to meet user needs, not every change will show up directly in your key metrics. Some changes we make to a service are more subtle, and are therefore harder to measure or require more user research to understand.