Analyze your experiments
Klaviyo consolidates your A/B test results into 1 place: the Experiments tab. Here, you can click between tabs to review A/B tests specific to each of your marketing channels.
Note: This tab can only be accessed by Klaviyo users who are account owners, admins, managers, or analysts.
Review results by channel
For additional information on reviewing your A/B test results outside of the Experiments tab, expand the dropdown for the relevant channel below.
Campaigns
To view your campaign test results, you must:
- Locate the campaign you tested.
- Click on the A/B Test Results tab.
From there, you can see an overview of the test findings and review how each variation of your campaign performs across key performance indicators like clicks and conversions.
Sign-up forms
To monitor a sign-up form A/B test, you must:
- Locate the form you tested.
- Click on the A/B Test Results tab.
Here, you can analyze your test results, noting how much revenue, new submissions, and other key metrics are attributed to your test variations.
Learn more about reviewing sign-up form A/B tests.
Flow messages
To view your flow message test results, you must:
- Select the flow message that has your A/B test.
- In the associated sidebar, click View Test.
You’ll then be able to review an overview of the test findings and specific data for each message variation.
Learn more about reviewing flow message A/B tests.
Flow pathways
To monitor your flow pathway test results, you must:
- Navigate to your tested flow.
- Click Show analytics.
- Evaluate the results of each message within your test pathways, noting which has the best open, click, and conversion rates, to determine the winner.
Learn more about reviewing flow path A/B tests.
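The comparison in the last step above can be sketched in code. This is a minimal illustration, not Klaviyo's implementation; the pathway names and engagement counts are made up, and the rates are simply each metric divided by delivered messages.

```python
# Hypothetical engagement counts for two flow pathways; Klaviyo surfaces
# these metrics under Show analytics. The numbers below are invented.
pathways = {
    "Path A": {"delivered": 1000, "opens": 420, "clicks": 90, "conversions": 31},
    "Path B": {"delivered": 1000, "opens": 455, "clicks": 120, "conversions": 44},
}

def rates(stats):
    """Return open, click, and conversion rates as fractions of delivered."""
    delivered = stats["delivered"]
    return {m: stats[m] / delivered for m in ("opens", "clicks", "conversions")}

for name, stats in pathways.items():
    print(name, {m: f"{v:.1%}" for m, v in rates(stats).items()})

# Pick the winner by conversion rate, the metric closest to revenue.
winner = max(pathways, key=lambda n: rates(pathways[n])["conversions"])
print("Winner:", winner)
```

Conversion rate is used as the tiebreaker here because it sits closest to revenue, but you could just as easily rank pathways by open or click rate depending on your test's goal.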
Evaluate your test results
Once you've run your test, there's 1 key term you must understand: statistical significance.
Statistical significance means that Klaviyo can mathematically determine whether or not a variation of your content will perform best in an A/B test. It also indicates that you could reproduce the results and apply what you learned to your future sends. There are 4 categories you will see around statistical significance when running your test, as detailed below.
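To make the idea concrete, here is a standard two-proportion z-test, a common way to check whether one variation's conversion rate differs significantly from another's. Klaviyo doesn't publish its exact internal method, so treat this as an illustrative sketch with invented conversion counts, not Klaviyo's formula.

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Test whether variation B's conversion rate differs significantly
    from variation A's, given conversions and recipients per variation."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that both variations convert equally.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via the error function).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical example: 50/1000 conversions for A vs. 80/1000 for B.
z, p = two_proportion_z_test(conv_a=50, n_a=1000, conv_b=80, n_b=1000)
print(f"z = {z:.2f}, p = {p:.4f}")
# A p-value below 0.05 is a conventional threshold for significance.
print("statistically significant" if p < 0.05 else "not statistically significant")
```

A low p-value means the gap between the variations is unlikely to be random noise, which is what makes the result reproducible and safe to act on in future sends.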
Statistically significant
Statistically significant means that a specific test variation is more likely to win over the other option(s). Use this as an indicator of what resonates with subscribers and apply these insights to your future sends.
Promising
Promising indicates that a specific test variation appears to perform better than the others; however, the data from this test alone isn't strong enough to prove a hypothesis, so you will be prompted to re-run the test.
Not statistically significant
Not statistically significant means that, while 1 variation has won the test, it has only slightly outperformed the others. As such, the results are not very meaningful. Going forward, you may choose to test something else and conclude that this element of your marketing did not significantly impact engagement.
Inconclusive
Inconclusive signifies that there's not enough data to determine whether a variation is statistically significant. If the test results don't match any of the criteria above (i.e., statistically significant, promising, or not statistically significant), they will be labeled inconclusive.
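The 4 categories above can be summarized as a simple decision rule. The thresholds and minimum sample size below are entirely hypothetical, chosen only to show how a p-value and sample size might map onto the labels; Klaviyo's actual criteria are not public.

```python
def classify(p_value, recipients_per_variation):
    """Map a p-value and sample size onto Klaviyo's four labels.
    All thresholds here are hypothetical, for illustration only."""
    MIN_SAMPLE = 100  # hypothetical minimum recipients per variation
    if recipients_per_variation < MIN_SAMPLE:
        return "inconclusive"            # too little data to judge at all
    if p_value < 0.05:
        return "statistically significant"
    if p_value < 0.20:
        return "promising"               # suggestive, but re-run the test
    return "not statistically significant"

print(classify(0.01, 1000))  # statistically significant
print(classify(0.15, 1000))  # promising
print(classify(0.01, 20))    # inconclusive
```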