
How Headline Testing works

Parse.ly’s Headline Testing uses a real-time optimization technique based on a machine learning method called multi-armed bandit testing. Specifically, it uses the Thompson sampling algorithm, which dynamically allocates more traffic to better-performing variants over time. This means that if one headline performs well early on, more users will see it, while poorly performing variants are shown less frequently.

You can create tests with up to 10 headline variants per piece of content. One of these is the original (the control), and the others are alternatives to evaluate. As users visit the page, the tool automatically displays different variants to different visitors and collects data on how often each one is seen (impressions) and how often each is clicked. Based on this data, the algorithm adjusts which headlines are prioritized, sending more traffic to those that perform well.
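As a rough illustration, the state such a test needs to track can be pictured like this. The type and field names below are ours, chosen for illustration; they are not Parse.ly's actual data model:

```typescript
// Hypothetical shape of one headline test's state. The names are
// illustrative only, not Parse.ly's actual data model.
interface HeadlineVariant {
  text: string;        // the headline shown to visitors
  isControl: boolean;  // true for the original headline
  impressions: number; // unique visitors who have seen this variant
  clicks: number;      // unique visitors who have clicked it
}

interface HeadlineTest {
  articleUrl: string;
  variants: HeadlineVariant[]; // up to 10, including the control
}
```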

This approach stands in contrast to traditional A/B testing, where traffic is split evenly between all variants for the entire duration of the test. In a bandit-based system like Parse.ly’s, the testing process begins with equal distribution, but quickly shifts traffic toward better-performing options as more data becomes available.

Details on the Headline Testing process

  1. First, a separate JavaScript script must be installed on your website. It is intentionally lightweight and independent of the main Parse.ly script, so it loads very quickly, before any page content is displayed.
  2. The script comes preloaded with the list of active headline experiments. The system attempts to locate the control headline attached to each article link on the page.
  3. If the control headline is found, the script replaces it with the variant headline assigned to that specific visitor (a browser-side sketch of steps 2 through 6 appears below).
  4. The first time the variant headline is displayed to a visitor, it is counted as an impression.
  5. The first time that visitor clicks the headline, the click is recorded as a success for that variant.
  6. The selected headline variant is stored in the visitor’s cookies. This ensures consistency so that the same visitor will continue to see the same headline variant on subsequent visits.
  7. The system uses a multi-armed bandit approach, specifically Thompson sampling with a beta distribution, to determine which headline should be shown more frequently. For each variant, the algorithm models the probability of success based on observed data (a runnable sketch follows this list):
    • Alpha (α) represents the number of unique clicks on a headline variant under test.
    • Beta (β) represents the number of visitors who were shown the headline (impressions) but did not click.
  8. Over time, the algorithm dynamically adjusts the probability of showing each variant, favoring headlines that perform better while exploring other options to continuously learn.
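The core of steps 7 and 8 fits in a few lines. Here is a minimal, runnable sketch of one Thompson-sampling round, not Parse.ly's actual code; the order-statistic Beta sampler is chosen for brevity (a production system would use a faster gamma-based sampler):

```typescript
interface VariantStats {
  headline: string;
  clicks: number;    // α: unique clicks observed so far
  nonClicks: number; // β: impressions that did not result in a click
}

// For positive integers a and b, the a-th smallest of (a + b - 1)
// independent Uniform(0, 1) draws follows a Beta(a, b) distribution.
// Simple and exact, though O(n log n) per draw; fine for a sketch.
function sampleBeta(a: number, b: number): number {
  const u = Array.from({ length: a + b - 1 }, Math.random);
  u.sort((x, y) => x - y);
  return u[a - 1];
}

// One round of Thompson sampling: draw a plausible click-through rate
// for each variant from its posterior Beta(clicks + 1, nonClicks + 1),
// then show the variant with the highest draw.
function chooseVariant(variants: VariantStats[]): VariantStats {
  let best = variants[0];
  let bestDraw = -1;
  for (const v of variants) {
    const draw = sampleBeta(v.clicks + 1, v.nonClicks + 1);
    if (draw > bestDraw) {
      bestDraw = draw;
      best = v;
    }
  }
  return best;
}
```

Because each variant's draw is a random sample from its posterior, variants with little data still win some rounds (exploration), while variants with strong click evidence win most rounds (exploitation).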

This entire process happens automatically and quickly, enabling real-time optimization without requiring any manual analysis or adjustments.
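For concreteness, the browser-side flow of steps 2 through 6 might look roughly like the sketch below. The selector, cookie name, variant-assignment rule, and event beacon are all assumptions made for illustration; they are not Parse.ly's actual implementation:

```typescript
const COOKIE_NAME = "hl_variant"; // hypothetical cookie key

function getAssignedVariant(variants: string[]): string {
  // Step 6: reuse a prior assignment from the cookie so the same
  // visitor keeps seeing the same headline on subsequent visits.
  const match = document.cookie.match(new RegExp(`${COOKIE_NAME}=(\\d+)`));
  let index = match ? Number(match[1]) : -1;
  if (index < 0 || index >= variants.length) {
    // New visitor: pick a variant. In the real system this choice is
    // weighted by the bandit algorithm, not uniformly random.
    index = Math.floor(Math.random() * variants.length);
    document.cookie = `${COOKIE_NAME}=${index}; path=/; max-age=2592000`;
  }
  return variants[index];
}

function applyHeadlineTest(articleUrl: string, variants: string[]): void {
  // Step 2: locate the control headline attached to the article link.
  const link = document.querySelector<HTMLAnchorElement>(`a[href="${articleUrl}"]`);
  if (!link) return; // control headline not found; leave the page untouched

  // Step 3: swap in this visitor's assigned variant.
  link.textContent = getAssignedVariant(variants);

  // Step 4: count the display as an impression (per-visitor deduplication omitted).
  recordEvent("impression", articleUrl);

  // Step 5: record at most one click per visitor for this headline.
  link.addEventListener("click", () => recordEvent("click", articleUrl), { once: true });
}

// Placeholder for whatever beacon the real script sends.
function recordEvent(kind: "impression" | "click", articleUrl: string): void {
  console.log(`${kind} recorded for ${articleUrl}`);
}
```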

A/B testing vs. multi-armed bandit (MAB)

Parse.ly’s Headline Testing uses a multi-armed bandit (MAB) algorithm rather than traditional A/B testing. Here’s how the two approaches differ, and why MAB is better suited for headline optimization:

Aspect                 | A/B testing                             | Multi-armed bandit (MAB)
Traffic allocation     | Fixed (e.g., 50/50)                     | Dynamic, based on variant performance
Optimization timing    | After the test concludes                | Ongoing during the test
Handling poor variants | Continues showing all variants equally  | Gradually reduces traffic to underperforming variants

Multi-armed bandit algorithms do more than just test headline variants; they optimize traffic distribution and adapt in real time.

As the system collects performance data, it dynamically adjusts how traffic is allocated, sending more visitors to the better-performing headlines as soon as there is enough evidence to support the shift. This lets the best headline reach a larger share of the audience sooner, while weaker variants still receive just enough exposure for the algorithm to keep learning.
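To watch this reallocation happen, here is a small self-contained simulation with two invented click-through rates, 5% and 10%; the rates, visitor count, and sampler are ours, chosen purely for illustration:

```typescript
// Two headlines with true click-through rates of 5% and 10%. Thompson
// sampling shifts impressions toward the better one as evidence accrues.

// a-th smallest of (a + b - 1) uniforms ~ Beta(a, b), for integer a, b.
function sampleBeta(a: number, b: number): number {
  const u = Array.from({ length: a + b - 1 }, Math.random);
  u.sort((x, y) => x - y);
  return u[a - 1];
}

const trueRates = [0.05, 0.1]; // invented for the demo
const clicks = [0, 0];
const misses = [0, 0];

for (let visitor = 0; visitor < 2000; visitor++) {
  // One Thompson draw per variant; show the variant with the best draw.
  const draws = trueRates.map((_, i) => sampleBeta(clicks[i] + 1, misses[i] + 1));
  const shown = draws[0] > draws[1] ? 0 : 1;
  if (Math.random() < trueRates[shown]) clicks[shown]++;
  else misses[shown]++;
}

trueRates.forEach((_, i) => {
  const impressions = clicks[i] + misses[i];
  console.log(`variant ${i}: ${impressions} impressions, ${clicks[i]} clicks`);
});
```

Run repeatedly, this typically ends with the 10% variant receiving the large majority of the 2,000 impressions, which is exactly the reallocation described above.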

The multi-armed bandit approach is especially useful in environments where content has a short lifecycle or where fast optimization is required.

Last updated: June 16, 2025