Suggestions, feature requests, complaints, and bug reports from users are, like any other customer-focused evidence, essential fuel for high-performing technology teams.
NPS and satisfaction surveys, App Store reviews, and customer support logs are all important sources of evidence for product decisions, but they tend to be generic, noisy, and hard to categorize and measure. That makes it difficult for teams to focus on the right things and to objectively measure the impact of product changes.
How can we optimize our feedback channels so that they yield measurable, contextual, precise, and actionable information?
Given the challenges above, our requirements for a feedback tool were:
- Measurable: We want a number that tells us whether feature A causes more dissatisfaction than feature B.
- Specific and actionable: We want to learn about specific moments in the user’s journey, specific pages in the app, and the features we launch.
- Contextual: We want people to give feedback in the moment they’re interacting with something, not recall it days or weeks later.
- Passive: We don’t want feedback prompts to interrupt the action someone is trying to perform.
So here’s what we built: a reusable feedback module that we can add to any page or flow in our app. By clicking it, users can give immediate feedback about what they’re trying to do and how they think we could improve it. Users are first prompted to submit a score on a 5-point scale, followed by a free-text field to explain why.
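As a rough sketch of what such a module emits, each submission can be modeled as a small event carrying the score, the optional comment, and the context it was captured in. All names here (`FeedbackEvent`, `collectFeedback`) are illustrative assumptions, not our actual implementation:

```typescript
// Hypothetical event shape for one feedback submission. The context fields
// (page, feature) are what make each message specific and actionable.
type FeedbackEvent = {
  page: string;               // where in the app the widget was rendered
  feature?: string;           // optional identifier of a launched feature
  score: 1 | 2 | 3 | 4 | 5;   // the 5-point score submitted first
  comment?: string;           // optional free-text explanation ("why?")
  submittedAt: Date;          // captured in the moment, not recalled later
};

// Illustrative helper: package the user's input together with its context.
function collectFeedback(
  page: string,
  score: FeedbackEvent["score"],
  comment?: string
): FeedbackEvent {
  return { page, score, comment, submittedAt: new Date() };
}
```

Because the context is attached automatically at capture time, the user only ever sees the score prompt and the text field.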
First, we can now look objectively at how users rate our pages and features, and prioritize which ones need more work:
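A ranking like this can be produced by averaging scores per page. The sketch below assumes the event shape from before; the names are illustrative, not our actual pipeline:

```typescript
// Minimal aggregation sketch: average feedback score per page, so pages
// can be ranked and the lowest-scoring ones prioritized.
type ScoredEvent = { page: string; score: number };

function averageScoreByPage(events: ScoredEvent[]): Map<string, number> {
  // Accumulate sum and count per page.
  const sums = new Map<string, { total: number; count: number }>();
  for (const { page, score } of events) {
    const s = sums.get(page) ?? { total: 0, count: 0 };
    s.total += score;
    s.count += 1;
    sums.set(page, s);
  }
  // Convert the accumulators into averages.
  const averages = new Map<string, number>();
  for (const [page, { total, count }] of sums) {
    averages.set(page, total / count);
  }
  return averages;
}
```

Sorting the resulting map ascending by score gives a simple "needs attention" list that can be tracked over time.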
Also, because the call to action is contextual rather than an after-the-fact survey, individual messages become much, much more specific and actionable:
By measuring and communicating the impact of design objectively, design teams find it easier to earn a “seat at the table”, since these measurements can bubble up to company objectives.