Into the BLUE (framework): Designing user-centered in-product feedback

Based on lessons learned from developing feedback mechanisms for Azure Data, we created BLUE, a framework for designing and implementing in-product feedback end to end.

In-product feedback is a valuable tool for gaining insight into how users think and feel about a product or service. Its key benefit is that, unlike simulated test environments (e.g., usability studies), users provide feedback in their live environment while using the product.

Further, common issues that arise from not following a framework include ad hoc mechanisms designed by individual teams, metrics that do not align with business objectives, and inconsistent ways of capturing metrics. In some cases, these inconsistencies can disrupt the user experience.

To provide context for the framework, here are three common feedback mechanisms:

1. System-initiated feedback mechanism (pop-up for overall product use)
This mechanism relies on sampling and is generally focused on capturing metrics to better understand overall sentiment using a toast or pop-up message.

Example of a system-initiated in-product feedback mechanism.
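The sampling behind a system-initiated pop-up can be sketched as a deterministic gate. The function name and the 5% default rate below are illustrative assumptions, not an actual Azure Data implementation; hashing the user ID keeps the decision stable so the same user is not randomly re-prompted on every session.

```python
import hashlib

def should_prompt(user_id: str, sample_rate: float = 0.05) -> bool:
    """Deterministically decide whether to show the feedback pop-up.

    Hashing the user ID (rather than rolling a random number per session)
    keeps the sampling decision stable across visits for a given user.
    """
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # map hash to [0, 1]
    return bucket < sample_rate
```

Because the gate is deterministic, the effective sample rate over a large user population still converges to the configured rate, while any individual user's experience stays consistent.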

2. Behavior-initiated feedback mechanism (intercept based on user action)
With this mechanism, the user provides feedback about an action they just completed. An example in use would be the question prompt, “How helpful or unhelpful was this error message?” The corresponding response item for this question type is a scale from “Not helpful” to “Extremely helpful.”

Example of a behavior-initiated in-product feedback mechanism.
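The helpfulness question above can be sketched as a small intercept handler. The event name, scale labels, and function name here are hypothetical, but the pattern illustrates keeping one shared scale so ratings stay comparable across intercepts.

```python
# A 5-point helpfulness scale, defined once and reused by every
# behavior-initiated intercept so ratings remain comparable.
HELPFULNESS_SCALE = [
    "Not helpful",
    "Slightly helpful",
    "Moderately helpful",
    "Very helpful",
    "Extremely helpful",
]

def record_helpfulness(event: str, label: str) -> dict:
    """Convert a labelled response into a metric row keyed by the
    triggering event (e.g. the hypothetical 'error_message_shown')."""
    if label not in HELPFULNESS_SCALE:
        raise ValueError(f"Unknown scale label: {label}")
    return {
        "event": event,
        "label": label,
        "score": HELPFULNESS_SCALE.index(label) + 1,  # 1..5
    }
```

Storing both the label and a numeric score makes it easy to trend averages over time while keeping the original wording for qualitative review.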

3. User-initiated feedback mechanism (always available)
This mechanism is an ‘always available’ channel for users to provide feedback and is useful for bug reporting and for capturing feature ideas that can be explored through additional user research efforts. It’s important to note that the objective of this mechanism is not to measure satisfaction with a product, but rather to give users a channel to be heard.

Example of a user-initiated feedback mechanism.

The Framework

The framework is organized around four pillars, which cover:

  • What UX metrics to capture and how to capture them
  • How to create customized UX health metrics for content
  • Which guiding principles to apply to the relationship between business and UX metrics
  • How to embed in-product feedback efforts in the context of real-world collaboration, socialization, and the UX research process
The 4 pillars of the BLUE framework.


Build principles:

✓ Align with business goals
✓ Investigate product UX outcomes, goals, and metrics
✓ Focus on consistency of scales and ratings


Leverage principles:

✓ Utilize HaTS (Happiness Tracking Surveys) and adapt as needed
✓ Customize needs around sampling/cadence
✓ Include a user-initiated feedback mechanism

Example of a HaTS-style granular, task-based feedback mechanism.


Unlock principles:

✓ Make the data accessible
✓ Keep the user story central
✓ Contribute to a data-driven culture

Power BI report showing Ease of Use trends and comments for exploration.


Embed principles:

✓ Socialize for understanding so that everyone advocates for users
✓ Combine with UXR process and track impact
✓ Get feedback on the feedback

Example of how in-product feedback fits into a UXR process.


Disclaimer: this article is also published on Microsoft’s UXR Medium blog. It is written for fun and doesn’t reflect any official Microsoft UXR capabilities.