A/B testing a post-purchase experience

A/B testing post-purchase experiences helps your brand identify which offers and designs perform best, ultimately boosting your average order value (AOV).

Step 1 - Create post-purchase experiences with similar parameters

  • If you're just starting out with post-purchase upsells, check out this article for a detailed introduction to the feature.
  • Create a unique post-purchase upsell experience for each variation you'd like to test.
  • Ensure that the 'Audience' targeting remains consistent across post-purchase experiences you wish to A/B test.

  • Give the experiences similar names and append the variant at the end. This makes each variation easier to track and follow up on. For instance:
    • POST PURCHASE - All Visitors [1 min timer]
    • POST PURCHASE - All Visitors [3 min timer]

Step 2 - Publish each experience at 100% allocation

  • Publish the experiences at 100% allocation at the same time.
  • Because only one post-purchase experience can appear per purchase, eligible purchases are split evenly among all experiences targeting the same 'Audience', so each records a roughly equal number of sessions. For example (see the sketch after this list):
    • 2 post-purchase experiences at 100% allocation will be divided at 50% / 50%.
    • 3 post-purchase experiences at 100% allocation will be divided at 33% / 33% / 33%.
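
To make the mechanics concrete, here is a minimal Python sketch of that even split. It assumes each purchase is shown exactly one experience, picked uniformly at random from the experiences published to its audience; the experience names and the selection logic are illustrative assumptions, not the app's actual implementation.

    import random

    # Illustrative only: each purchase sees exactly one post-purchase
    # experience, chosen uniformly among those published to its audience.
    experiences = [
        "POST PURCHASE - All Visitors [1 min timer]",
        "POST PURCHASE - All Visitors [3 min timer]",
    ]

    counts = {name: 0 for name in experiences}
    for _ in range(10_000):                 # simulate 10,000 purchases
        shown = random.choice(experiences)  # one experience per purchase
        counts[shown] += 1

    for name, n in counts.items():
        print(f"{name}: {n / 10_000:.1%}")  # each prints close to 50%

With three experiences in the list, each share would print close to 33% instead.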

Please note

If you launch post-purchase experiences with specific parameters alongside another post-purchase experience targeting "All Visitors", the traffic will still be split across all of them. This happens because the "All Visitors" audience also covers the visitors matched by the other experiences' parameters. For example, these 3 experiences published at 100% allocation will divide traffic as follows:

  • POST PURCHASE - All Visitors > 33%
  • POST PURCHASE - Mascara in cart [1 min timer] > 33%
  • POST PURCHASE - Mascara in cart [3 min timer] > 33%

To prevent this, either:

  • Make sure that when you run different post-purchase A/B tests, their target audiences don't overlap.
  • If you are running a post-purchase experience targeting "All Visitors", exclude the parameters used by the other post-purchase experiences you want to test, and remember to update this exclusion whenever you set up a new post-purchase A/B test. Both situations are illustrated in the sketch below.
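
Here is the same illustrative model extended with audience targeting, showing both the overlap problem and the exclusion fix. The cart check and audience predicates are assumptions made for this example, not the app's actual targeting engine; in practice this is all configured in the 'Audience' targeting UI.

    import random

    # Illustrative audience predicates; in practice these live in the
    # 'Audience' targeting UI, not in code.
    experiences = {
        "POST PURCHASE - All Visitors": lambda cart: True,
        "POST PURCHASE - Mascara in cart [1 min timer]": lambda cart: "mascara" in cart,
        "POST PURCHASE - Mascara in cart [3 min timer]": lambda cart: "mascara" in cart,
    }

    def pick(cart):
        # A purchase is eligible for every experience whose audience matches,
        # and exactly one of those is shown, so overlapping audiences share traffic.
        eligible = [name for name, matches in experiences.items() if matches(cart)]
        return random.choice(eligible)

    # A mascara purchase matches all three audiences: ~33% / 33% / 33% split.
    counts = {}
    for _ in range(9_000):
        shown = pick(["mascara"])
        counts[shown] = counts.get(shown, 0) + 1
    print(counts)

    # The fix: exclude the mascara parameter from the "All Visitors" audience
    # so the two mascara variants split ~50% / 50% between themselves.
    experiences["POST PURCHASE - All Visitors"] = lambda cart: "mascara" not in cart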

 

Step 3 - Review the data

Over time, each post-purchase experience will yield different results.

  • Wait a few weeks before reviewing the data so the results can stabilize and reach statistical significance; how long this takes depends on your store's traffic levels (see the sketch after this list).
  • Ensure that all post-purchase experiences have the date filter set to the same timeframe so you're comparing like for like.
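
If you'd like to sanity-check significance yourself, below is a minimal Python sketch of a standard two-proportion z-test on upsell conversion (offers accepted out of offers shown). The figures are invented for illustration; substitute the numbers from your own analytics, and note that your reporting dashboard may compute significance differently.

    from statistics import NormalDist

    # Illustrative two-proportion z-test on upsell conversion
    # (offers accepted / offers shown). All numbers here are made up;
    # substitute the figures from your own analytics.
    def compare(accepted_a, shown_a, accepted_b, shown_b):
        p_a, p_b = accepted_a / shown_a, accepted_b / shown_b
        pooled = (accepted_a + accepted_b) / (shown_a + shown_b)
        se = (pooled * (1 - pooled) * (1 / shown_a + 1 / shown_b)) ** 0.5
        z = (p_a - p_b) / se
        p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-tailed
        return p_a, p_b, p_value

    p_a, p_b, p_value = compare(accepted_a=120, shown_a=2000,
                                accepted_b=155, shown_b=2000)
    print(f"1 min timer: {p_a:.1%} | 3 min timer: {p_b:.1%} | p = {p_value:.3f}")
    # A p-value below 0.05 is a common (if rough) threshold for concluding
    # the difference is real rather than noise.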

 

Step 4 - Set winner or iterate

Once you've settled on the winning variation, you can either:

  • Stop the underperforming variations so the winning variation is shown to 100% of the traffic.
  • Keep refining! Continuously fine-tune your post-purchase experiences by experimenting with various strategies or designs:
    • Revise the underperforming experience with a fresh strategy or design.
    • Publish both experiences again at 100% allocation to begin a new test with fresh data.