Measuring Impact

How to track user interactions and use data to prove ROI and optimize your rules.

Sending a recommendation is only the first step. To build a successful, long-term strategy, you must measure how your users respond to it. Measuring effectiveness is the only way to know if your rules are providing real value or just creating noise.

A rule that is ignored by 99% of users should be improved or retired. A rule that drives a positive behavior change should be expanded. This guide provides best practices for "closing the loop"—collecting user feedback to optimize your rules and demonstrate the ROI of your recommendations.


Define "Success" for Each Rule

Before you can measure, you must define what "success" looks like. Not all notifications have the same goal. Your rules should be tagged (e.g., using the Tags feature) by their primary goal:

Goal 1: Direct Action (e.g., "Pause Charging")

  • Success is: The user taps the primary action button on the notification.

  • How to measure: Tracking which action button was pressed.

Goal 2: Behavior Change (e.g., "You have surplus solar power")

  • Success is: The user performs the recommended action inside the app, even if they don't tap the notification (e.g., they see the notification, dismiss it, open the app, and then turn on their water heater).

  • How to measure: This is more advanced. You must correlate the notification send time with a subsequent change in device state (e.g., Device Status changed to ON within 10 minutes of the notification).
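The correlation described above can be sketched as a simple attribution check: for each notification, look for the recommended device-state change within the attribution window. This is a minimal sketch; the event records and field names are illustrative, not a MOOST schema.

```python
from datetime import datetime, timedelta

# Hypothetical event records; field names are illustrative only.
notifications = [
    {"user_id": "u1", "rule": "surplus-solar", "sent_at": datetime(2024, 5, 1, 12, 0)},
]
device_events = [
    {"user_id": "u1", "device": "water_heater", "status": "ON",
     "changed_at": datetime(2024, 5, 1, 12, 6)},
]

def behavior_change_successes(notifications, device_events, window_minutes=10):
    """Count notifications followed by the recommended device-state change
    within the attribution window."""
    window = timedelta(minutes=window_minutes)
    hits = 0
    for n in notifications:
        for e in device_events:
            if (e["user_id"] == n["user_id"]
                    and e["status"] == "ON"
                    and n["sent_at"] <= e["changed_at"] <= n["sent_at"] + window):
                hits += 1
                break  # count each notification at most once
    return hits

print(behavior_change_successes(notifications, device_events))  # → 1
```

Note that this naive join is O(notifications × events); for production volumes you would sort both streams by timestamp and merge them.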

Goal 3: Awareness (e.g., "Your weekly summary is ready")

  • Success is: The user taps the notification to view the content. A dismissal is not a failure; the user may have just read the text and felt informed.

  • How to measure: Tracking the OPENED (tap) rate.


Implement the Feedback Loop

The most important tool for measuring effectiveness is the Notification Interaction API. Your application must report back to the MOOST platform when a user interacts with a notification.

This data is critical for building reports and for future platform features like AI-powered optimization. We recommend reporting all of the following interaction types:

  • OPENED: The user tapped on the main body of the notification. This is your primary measure of engagement.

  • ACTION_TAKEN: The user tapped on a specific action button (e.g., "Pause Charging" or "Remind Me Later"). You should include which specific action was taken.

  • DISMISSED: The user swiped the notification away. This is a crucial metric. A rule with a very high dismissal rate signals a poorly tuned or low-value recommendation.

  • STOP_NOTIFICATION: The user tapped the "Stop Notification" action you provided (as recommended in our push notification best practices). This is your strongest negative signal. It indicates a user was frustrated enough to opt-out of that rule specifically.
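A client-side sketch of building such an interaction report might look like the following. The exact endpoint, payload shape, and field names of the Notification Interaction API are assumptions here; consult the API reference for the real schema.

```python
import json
from datetime import datetime, timezone

# The four interaction types recommended above.
VALID_TYPES = {"OPENED", "ACTION_TAKEN", "DISMISSED", "STOP_NOTIFICATION"}

def build_interaction_payload(notification_id, interaction_type, action_id=None):
    """Assemble an interaction report. Field names are illustrative
    assumptions, not the documented MOOST payload."""
    if interaction_type not in VALID_TYPES:
        raise ValueError(f"unknown interaction type: {interaction_type}")
    payload = {
        "notificationId": notification_id,
        "interactionType": interaction_type,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    # For ACTION_TAKEN, include which specific button was pressed.
    if interaction_type == "ACTION_TAKEN":
        payload["actionId"] = action_id
    return payload

print(json.dumps(build_interaction_payload("ntf-123", "ACTION_TAKEN", "pause-charging")))
```

Your app would POST this payload to the platform at the moment the user interacts with the notification, not in a later batch, so that send-to-interaction timing stays accurate.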


Use Data to Optimize: A/B Testing

You should never "set and forget" a rule. Use the data you've collected to continuously improve. The best method for this is A/B testing.

Copy Testing (Testing your Writing):

  1. Create two separate Notification Templates for the same rule (e.g., one focused on cost, one on eco-friendliness).

  2. Use Properties to assign one template to 50% of your users and the second template to the other 50%.

  3. After one week, compare the OPENED and ACTION_TAKEN rates. Keep the winning message.

  • Example A: "High grid prices! We recommend pausing your car charger."

  • Example B: "Help the grid! We recommend pausing your car charger during this peak time."
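The 50/50 split in step 2 needs to be deterministic, so a user always sees the same template. One common way to derive a stable bucket, which you could then write into a user Property (the property name `ab_group` below is a hypothetical example), is a hash of the user ID:

```python
import hashlib

def ab_bucket(user_id: str, experiment: str = "copy-test-1") -> str:
    """Deterministically assign a user to bucket 'A' or 'B'.

    Hashing (experiment, user_id) keeps the split stable per user but
    re-shuffles assignments between different experiments.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).digest()
    return "A" if digest[0] % 2 == 0 else "B"

# Store the result as a user property (name is an illustrative assumption),
# then target each Notification Template at one bucket.
properties = {"ab_group": ab_bucket("user-42")}
print(properties)
```

Because the assignment is derived rather than stored randomly, any service that knows the user ID and experiment name computes the same bucket, which simplifies analysis later.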

Rule Testing (Testing your Logic):

  1. Duplicate a rule and slightly change the logic.

  2. Assign the "Control" rule (e.g., triggers at AVG($GridPower, 15min) > 3000) to one group of users.

  3. Assign the "Challenger" rule (e.g., triggers at AVG($GridPower, 10min) > 3500) to another group.

  4. Compare the results. Does the Challenger rule lead to fewer DISMISSED interactions? Does it lead to more ACTION_TAKEN events?
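Step 4 boils down to comparing per-send interaction rates between the two groups. A minimal sketch, with made-up counts for illustration:

```python
def rate(count: int, sends: int) -> float:
    """Interactions per notification sent."""
    return count / sends if sends else 0.0

# Hypothetical interaction counts for each variant.
control    = {"sends": 1000, "DISMISSED": 420, "ACTION_TAKEN": 60}
challenger = {"sends": 1000, "DISMISSED": 310, "ACTION_TAKEN": 85}

def rate_delta(control: dict, challenger: dict, metric: str) -> float:
    """Challenger rate minus control rate for one interaction type."""
    return (rate(challenger[metric], challenger["sends"])
            - rate(control[metric], control["sends"]))

print(f"DISMISSED delta:    {rate_delta(control, challenger, 'DISMISSED'):+.1%}")
print(f"ACTION_TAKEN delta: {rate_delta(control, challenger, 'ACTION_TAKEN'):+.1%}")
```

A negative DISMISSED delta and a positive ACTION_TAKEN delta would favor the Challenger. With small groups, also check that the difference is larger than ordinary week-to-week noise before declaring a winner.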


Prove Your ROI: The Control Group Method (Advanced)

This is the gold standard for proving the business value of your recommendations.

  1. Identify a Key Goal: e.g., "Reduce average household energy costs."

  2. Create a Control Group: When you activate a set of cost-saving rules, apply them to 90% of your user base (the "Test Group").

  3. Isolate the Control: Do not send these rules to the remaining 10% (the "Control Group").

  4. Measure and Compare: After 30 or 60 days, compare the average energy cost between the two groups.

The difference in cost (e.g., "The Test Group saved an average of $5.30 more than the Control Group") is the provable, monetary ROI of your recommendation strategy. This data is invaluable for demonstrating the power of your app and the MOOST platform to your stakeholders.
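The comparison itself is simple arithmetic over the two groups' averages. A sketch with hypothetical per-household monthly costs:

```python
from statistics import mean

# Hypothetical monthly energy costs per household (USD); real data would
# come from your metering or billing backend.
test_group_costs    = [41.2, 38.7, 44.1, 39.9]  # received the rules
control_group_costs = [46.0, 44.8, 47.3, 45.5]  # withheld (10% holdout)

# Average saving attributable to the recommendation strategy.
savings = mean(control_group_costs) - mean(test_group_costs)
print(f"Test Group saved on average ${savings:.2f} more than the Control Group")
```

Because the only systematic difference between the groups is the rules themselves, this difference is defensible as the causal effect, provided the 10% holdout was selected randomly rather than by region or plan type.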
