Marfeel Experiences: A/B Testing and Experiment Groups

Every experience format supports A/B testing. Experimentation is a foundational capability of Experience Manager, not an add-on.

The cost of experimentation determines the rate of innovation. When experiments are expensive — requiring engineering resources, long implementation cycles, or executive approval — teams run fewer of them. Fewer experiments means slower learning.

Experience Manager makes experimentation nearly free: any experience can become a test in two clicks, with no code changes and no coordination overhead. Everything is instrumented out of the box — every impression, click, close, and conversion is tracked automatically. No tagging, no analytics setup, no measurement headaches. You set up the test and the data is already there.

You can A/B test adding images to a recommender, changing a CTA color, or rearranging an entire page layout — and know within hours whether it actually works.

The experimentation panel

The A/B Test panel is always visible on the right side of Experience Manager, next to the live preview.

When A/B testing is disabled, two shortcut buttons appear below the toggle:

  • Original (star icon): Shows the behavior currently in production.
  • Experience (lightning icon): Enables the experience on the preview so you can see how it looks before publishing.

These buttons let you quickly toggle between the current production state and the experience you’re building, without enabling a formal test.

Enabling A/B testing

Toggle A/B Test on to start an experiment. You choose between two modes:

Standalone test

A standalone test isolates a single experience. It splits traffic between the original behavior (control) and the experience (variant).

  1. Toggle A/B Test on and select Standalone Test.
  2. Set the Variant Assignment:
     Option        Behavior
     Per Request   Users may see a different variant on each request
     Per User      A user always sees the same variant
  3. Adjust the traffic split between Control (original version) and Variant (this experience). For example, expose 60% of users to the experience and keep 40% as a control audience.
  4. Click Save.
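The steps above can be illustrated with a minimal sketch of Per User assignment. Hashing a stable user ID together with the experiment gives every user a fixed bucket, so they see the same variant on every request. The function name, hashing scheme, and IDs here are illustrative assumptions, not Marfeel internals:

```python
import hashlib

def assign_variant(user_id: str, experiment_id: str, variant_share: float) -> str:
    """Deterministically bucket a user into 'variant' or 'control'.

    Sketch of Per User assignment: the same (user, experiment) pair
    always hashes to the same bucket, so assignment is sticky.
    """
    digest = hashlib.sha256(f"{experiment_id}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # uniform value in [0, 1]
    return "variant" if bucket < variant_share else "control"

# A 60/40 split: roughly 60% of users land in the variant group.
print(assign_variant("user-123", "recommender-test", 0.60))
```

Per Request assignment would simply replace the user ID with a per-request value (or a random draw), so the bucket can change on every page load.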

Example 1: You want to test whether a recommender below the article increases recirculation. Set up a standalone test with 80% variant and 20% control. The 80% sees the recommender; the 20% sees the page as it was before. Compare CTR and session depth in Explore.

Example 2: You have a Flowcard that promotes a specific article to funnel traffic into it. Set up a standalone test to measure the impact on session behavior. Users in the variant group see the Flowcard and get directed to the promoted article; users in the control group don’t. Then compare session length and depth between both groups in Explore — you’ll see exactly how much the Flowcard contributes to engagement beyond the promoted pageview itself.
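The comparison in Example 2 boils down to aggregating a session metric per group and looking at the difference. A toy sketch with made-up session depths (the numbers and group labels are illustrative only):

```python
# Hypothetical per-user session depths (pages per session), split by
# the test group each user was assigned to.
sessions = {
    "variant": [4, 6, 5, 7],   # saw the Flowcard
    "control": [3, 4, 3, 4],   # did not
}

# Mean session depth per group, and the lift attributable to the Flowcard.
avg = {group: sum(depths) / len(depths) for group, depths in sessions.items()}
lift = avg["variant"] - avg["control"]
print(avg, lift)  # variant 5.5 vs control 3.5 -> lift of 2.0 pages
```

In practice Explore does this aggregation for you; the point is that the control group gives you a baseline, so the lift isolates the Flowcard's contribution beyond the promoted pageview itself.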

Test group

Test groups are experiments that involve more than one experience. They let you bundle multiple experiences into the same test so they’re evaluated together.

Use test groups when you’re testing a combination of changes. For example, if you’re adding three recommender experiences to a page, you can A/B test all three together against no recommenders at all. Or if you’re combining a CTA change with a color modification — two different experiences — you can put them under the same test group to measure their combined effect.

  1. Toggle A/B Test on and select Test Group.
  2. Create a new test group or attach the experience to an existing one.
  3. Assign the experience to a specific variation within the test group.

Each experience in the group is assigned to a variation, and users are split across variations consistently.
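The consistency property can be sketched as follows: bucketing on the test group ID (rather than the individual experience) means every experience attached to the same variation is shown or hidden together for a given user. Again, names and the hashing scheme are assumptions for illustration, not Marfeel's implementation:

```python
import hashlib

def assign_group_variation(user_id: str, group_id: str, variations: list[str]) -> str:
    """Pick one variation of a test group for a user.

    Because the hash depends only on the group ID, all experiences
    assigned to the chosen variation render together consistently.
    """
    digest = hashlib.sha256(f"{group_id}:{user_id}".encode()).hexdigest()
    return variations[int(digest[:8], 16) % len(variations)]

# All recommenders attached to "all-recommenders" appear for the same
# users; everyone else sees the "no-recommenders" baseline.
variation = assign_group_variation("user-123", "homepage-test",
                                   ["no-recommenders", "all-recommenders"])
```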

Analyzing results

Results are available through two complementary paths:

  • Recirculation reports include a Test Variation dimension. Filter or break down recirculation metrics by variation to compare how each variant performs — CTR, session depth, impressions, or any other metric.
  • Every test automatically sets a user variable that is inspectable through Explore. This lets you go beyond recirculation: segment any Explore metric by that variable to understand how a module performs for each variant a user belongs to.
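Conceptually, breaking a metric down by the Test Variation dimension amounts to grouping tracked events by variation before computing the metric. A small sketch with hypothetical event rows (the schema is illustrative, not Marfeel's actual event format):

```python
from collections import defaultdict

# Hypothetical tracked events: each impression or click is tagged with
# the test variation of the user who generated it.
events = [
    {"variation": "variant", "type": "impression"},
    {"variation": "variant", "type": "click"},
    {"variation": "variant", "type": "impression"},
    {"variation": "control", "type": "impression"},
    {"variation": "control", "type": "impression"},
]

def ctr_by_variation(rows):
    """Aggregate clicks / impressions per variation (CTR)."""
    impressions = defaultdict(int)
    clicks = defaultdict(int)
    for row in rows:
        if row["type"] == "impression":
            impressions[row["variation"]] += 1
        elif row["type"] == "click":
            clicks[row["variation"]] += 1
    return {v: clicks[v] / impressions[v] for v in impressions}

print(ctr_by_variation(events))
# variant: 1 click / 2 impressions = 0.5; control: 0 / 2 = 0.0
```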

Going deeper