A/B Tests

Overview

A/B tests help you understand what works best in your marketing campaigns. Ortto offers customizable settings to help you run effective tests and boost engagement. This article shows you how to make the most of these features.


Creating an A/B test campaign

You can run A/B tests in both standard email campaigns and email messages within journey or playbook campaigns. Simply enable the A/B test option in the Setup step of the email editor.

Example of how the A/B testing option will appear in the Setup step.

What would you like to test?

Choose which variable you want to test in your campaign. Ortto supports the following options:

  • Subject line: Test two subject lines with a portion of your contacts. The winning one is sent to the rest.
  • From name & email: Test two variations of the "From" name and email. The best-performing combination is sent to the remaining contacts.
  • Content: Test two versions of your email content. The most effective version is sent to the rest.
  • Send time: Test two send times to find when your contacts are most responsive. Use this data to optimize future campaigns.

    TIP: You can enable the A/B test delivery optimization to send emails based on the contact's local timezone, if available.


How should the winner be decided?

Select the metric to determine the winner at the end of your A/B test. Ortto supports these options:

  • Open rate: Selects the variant with the highest open rate.
  • Click rate: Selects the variant with the highest click rate.
  • Total attributed revenue: Selects the variant with the highest total revenue.
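To make the decision rule concrete, here is a minimal sketch of how a winner could be chosen between two variants once the chosen metric has been measured. This is an illustration only, not Ortto's implementation; the variant structure and metric field names are assumptions made for the example.

```python
# Illustration only - not Ortto's implementation.
# Picks the winner by comparing the metric chosen in the A/B test settings.

def pick_winner(variant_a: dict, variant_b: dict, metric: str) -> dict:
    """Return the variant with the higher value for `metric`.

    `metric` is one of "open_rate", "click_rate" or "attributed_revenue"
    (hypothetical field names used for this example).
    """
    return variant_a if variant_a[metric] >= variant_b[metric] else variant_b

# Example: variant B has the higher open rate, so it would be sent to the rest.
a = {"name": "A", "open_rate": 0.21, "click_rate": 0.034, "attributed_revenue": 420.0}
b = {"name": "B", "open_rate": 0.26, "click_rate": 0.029, "attributed_revenue": 390.0}
print(pick_winner(a, b, "open_rate")["name"])  # -> B
```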

Select a size for your test

Choose what percentage of recipients will receive the test variants; the remaining recipients receive the winning variant.

NOTE: The test size option isn't available for the Send time variable, as this test evenly splits all recipients across the two timeframes.

In journey and playbook campaigns, A/B tests use a set number of contacts (e.g., 1,000, 2,000, or 3,000) to evaluate variants, because contacts enter the journey over time based on dynamic filters. This differs from standard email campaigns, where the test size is a percentage of the total recipients.
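As a rough illustration of how a percentage-based test size works in a standard email campaign, the sketch below splits a hypothetical audience of 10,000 recipients with a 20% test size. The numbers are assumptions for the example, and the exact split Ortto applies may differ.

```python
# Illustration only - hypothetical numbers, not Ortto's exact behaviour.
recipients = 10_000
test_size = 0.20                            # percentage chosen in the test size setting

test_group = int(recipients * test_size)    # 2,000 contacts take part in the test
variant_a = test_group // 2                 # 1,000 receive variant A
variant_b = test_group - variant_a          # 1,000 receive variant B
winner_group = recipients - test_group      # 8,000 receive the winning variant

print(variant_a, variant_b, winner_group)   # 1000 1000 8000
```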


Advanced options

The Advanced options menu, located at the bottom of the settings pop-up, lets you further customize your A/B test.

  • Evaluation time: This setting determines how long the test runs before a winner is selected based on your chosen metric.
  • Delivery options during evaluation: For journey and playbook campaigns, delivery during the test period is based on a set number of contacts (e.g., 1,000, 2,000, or 3,000). After reaching this sample size, the winning variant is sent to the remaining contacts.

You can choose how to deliver emails during the test:

  1. Continue sending equal mix: Both variants are sent equally until the evaluation period ends. Afterward, the winning variant is sent to all remaining contacts.
  2. Pause sending until a winner is chosen: No more emails are sent once the test size is reached. After the evaluation period, the winning variant is sent to the remaining contacts.

Either option keeps your test data consistent while contacts continue to flow into the journey, so the winning variant is chosen from reliable results.
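The difference between the two delivery options can be sketched as follows. This is a simplified model, not Ortto's internals; the mode names, sample size, and random assignment are assumptions for illustration.

```python
# Illustration only - a simplified model of delivery while an A/B test runs
# in a journey or playbook campaign.
import random

SAMPLE_SIZE = 1_000  # e.g. the set number of contacts used to evaluate variants

def email_for_new_contact(sent_so_far, evaluation_over, winner, mode):
    """Return which email a contact entering the journey receives, or None to hold sending.

    mode is "equal_mix" (option 1: keep sending both variants until the
    evaluation period ends) or "pause" (option 2: stop sending once the
    test size is reached).
    """
    if evaluation_over and winner:
        return winner                        # winner goes to all remaining contacts
    if sent_so_far < SAMPLE_SIZE:
        return random.choice(["A", "B"])     # still filling the test sample
    if mode == "equal_mix":
        return random.choice(["A", "B"])     # keep the even split until a winner is chosen
    return None                              # "pause": hold until the winner is decided

# A contact entering after the sample is full, before a winner is chosen:
print(email_for_new_contact(1_500, evaluation_over=False, winner=None, mode="pause"))  # None
```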


Troubleshooting A/B test results

If your A/B test results seem off, here are a few things to check:

  • Winning variant shows lower metrics: This often happens if the evaluation period was too short. The winner is determined based on data from the evaluation timeframe. However, activities after this period may still show up in the final report, which can make it seem like the wrong variant won.

What to do:

  1. Review the bar chart in the campaign report to see the metrics at the end of the evaluation period, when the winner was chosen.
  2. Ensure your test has a long enough evaluation time and sample size to get accurate results.
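To see why this happens, here is a small sketch of the timing issue: the winner is chosen only from activity inside the evaluation window, while the final report also counts later activity. The numbers and event structure are made up for illustration.

```python
# Illustration only - why the final report can differ from the data used to pick the winner.
EVALUATION_HOURS = 4  # hypothetical evaluation time

# Open events, expressed as hours after the test emails were sent.
opens = {
    "A": [1, 2, 3, 6, 8, 12],  # several opens arrive after the evaluation window
    "B": [1, 2, 3, 4],
}

def opens_within(variant, cutoff):
    return sum(1 for t in opens[variant] if t <= cutoff)

# The winner is decided on the data available at the cutoff...
print(opens_within("A", EVALUATION_HOURS), opens_within("B", EVALUATION_HOURS))  # 3 4 -> B wins
# ...but the final report counts all activity, so A looks stronger afterwards.
print(len(opens["A"]), len(opens["B"]))                                          # 6 4
```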