Unlocking $2 Billion in Added Revenue: Our Experimentation Methodology in Action
Since launching in 2007, our experimentation methodology has helped generate over $2 billion in additional revenue for our clients.
Until now, we’ve only shared fragments of this methodology publicly—concepts like the Levers Framework, Mixed Methods Experimentation, and our Experiment Repository.
Today, for the first time, we’re revealing how these pieces come together into a systematic approach that consistently drives value.
To illustrate this, we’ll walk through an internal training case study featuring a series of experiments we ran for a leading iGaming company aiming to boost sign-ups for its subscription offering.
By blending theory with hands-on application, we hope to demystify our methodology—showing not just how it works for us and our clients, but how it can power your experimentation program too.
🔍 A Note on Confidentiality: This case study dives deeper than our typical content. To protect client confidentiality, some superficial details have been altered, but our approach and insights remain unchanged.
Research & Initial Insights
Like most experiments we run, our first test in this sequence was informed by extensive research, uncovering key friction points in the user journey:
📌 Exit Intent Survey Findings
- The primary reason users abandoned the sign-up process? Lack of information about how the subscription worked.
- The client’s subscription model involved users being entered into various prize draws, but our survey revealed that users didn’t fully understand the mechanics.
📌 Learnings from Past Tests
- A previous experiment introducing a ‘How It Works’ explainer video on the homepage had significantly improved engagement.

When collecting research data, we categorize individual observations into broader insights—overarching themes that guide our strategy.
In this case, all signs pointed to one core insight:
🛠️ Insight: Users lack clarity about how the product works.
From there, we aligned this insight with a Lever from our Levers Framework—a structured system that categorizes UX elements impacting user behavior.

Choosing the Right Lever
The relationship between Insights and Levers is straightforward:
- Insight = The problem identified through research.
- Lever = The UX element we optimize to solve it.
For this case study, we asked:
👉 Which lever should we pull to address users’ lack of understanding about the product?
The answer? Comprehension—specifically, Product Understanding.

If users better understood the subscription mechanics, we hypothesized that sign-ups would increase.
📌 Hypothesis: By improving Product Understanding, we can positively impact our primary KPI: sign-ups.
(💡 We follow a structured hypothesis framework closely tied to our Levers methodology—learn more here.)
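To make the chain from research observation to testable hypothesis concrete, here is a minimal sketch of how the pieces relate. The class and field names are our own illustration for this article, not the actual schema behind the Levers Framework:

```python
from dataclasses import dataclass, field

@dataclass
class Insight:
    """An overarching theme distilled from individual research observations."""
    statement: str
    observations: list[str] = field(default_factory=list)

@dataclass
class Hypothesis:
    """A testable statement linking a Lever to a primary KPI."""
    insight: Insight
    lever: str        # e.g. "Comprehension / Product Understanding"
    primary_kpi: str  # e.g. "sign-ups"

    def __str__(self) -> str:
        return f"By improving {self.lever}, we can positively impact {self.primary_kpi}."

insight = Insight(
    statement="Users lack clarity about how the product works.",
    observations=[
        "Exit survey: top abandonment reason was lack of information",
        "Past test: 'How It Works' video lifted homepage engagement",
    ],
)
print(Hypothesis(insight=insight, lever="Product Understanding", primary_kpi="sign-ups"))
```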
From Hypothesis to Execution: Finding the Right Approach
A hypothesis is not an experiment—it’s a theory we test. The experiment execution is how we bring the hypothesis to life.
To refine our approach, we conducted a competitor analysis, examining how industry leaders educate users about their products.
🔍 Key Finding:
Many competitors used ‘How It Works’ interstitials at the start of their sign-up funnels to convey product details.
While we never blindly copy competitors, we do combine their strategies with broader research insights to craft data-backed solutions.
At this stage, we wanted to validate how effective interstitials really are—and we turned to our Experiment Repository for answers.
Leveraging Our Experiment Repository
With a database of 10,000+ experiments, our centralized Experiment Repository allows us to identify high-impact trends across industries.

📊 Key Data Point:
Across industries, interstitials have a 45% win rate (n=45)—significantly higher than average for UI components.
This evidence strengthened our case for introducing a Product Understanding interstitial at the start of the sign-up funnel.
Additionally, our data dispelled a common client concern:
🚀 Does adding an extra step hurt conversions?
Not necessarily. When executed well, comprehension-focused interstitials help filter out low-intent traffic, ultimately boosting overall conversions.
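To put that 45% figure in perspective, here is a minimal sketch (illustrative, not our actual repository tooling) of how you might attach a confidence interval to a win rate. At n=45, the observed rate is still compatible with a fairly wide range:

```python
import math

def wilson_interval(wins: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a binomial proportion (the win rate)."""
    p = wins / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - margin, centre + margin

# A 45% win rate across n=45 experiments implies roughly 20 wins
low, high = wilson_interval(wins=20, n=45)
print(f"95% CI for the win rate: {low:.0%} to {high:.0%}")  # about 31% to 59%
```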
The Road Ahead
This is just one example of how our Levers Framework, Experiment Repository, and research-driven methodology come together to drive measurable growth.
By strategically aligning research insights with the right UX optimizations, we consistently deliver high-impact experimentation—and so can you.
The rest of this article goes behind the scenes of each experiment in that sequence, showing how we drive results at scale.
Interstitial Experiment #1: Testing Product Understanding
Based on our research and supporting evidence, we launched an experiment to test the effectiveness of a ‘How It Works’ interstitial placed at the beginning of the client’s sign-up funnel.
🛠️ Experiment Setup:
- A simple, educational interstitial was introduced before users reached the sign-up form.
- It provided clear, concise content explaining how the subscription and prize draws worked.
The goal? Improve user comprehension and, in turn, increase sign-up conversions by reducing hesitation and uncertainty.

Refining the Interstitial: Understanding the Why Behind the Inconclusive Result
When we ran the test, the results were inconclusive: sign-ups, our primary KPI, showed a positive trend, but the uplift did not reach statistical significance.
This outcome was unexpected, given the strong evidence supporting the concept. However, we weren’t ready to abandon this Lever just yet.
Why Can a Test Fail or Be Inconclusive?
Generally, there are two key reasons why an experiment may not yield a clear result:
1️⃣ The Lever itself is ineffective.
2️⃣ The Lever is correct, but the execution is suboptimal.
Before disregarding a Lever altogether, we always run multiple experiments to determine whether the issue lies with the underlying principle or simply the way it was implemented.
In this case, we weren’t fully convinced by the initial outcome. To get deeper insights, we turned to one of the most powerful tools in our arsenal:
Mixed Methods Experimentation
Mixed Methods Experimentation is a core pillar of our approach at Conversion.
We won’t dive into the full theory here (you can read more here), but the essence is simple:
No single research method is perfect.
Each method has strengths and weaknesses, and by combining multiple approaches, we can gain insights that wouldn’t be possible using any one method alone.
📌 In this case, our experiment told us what happened (the result was inconclusive), but not why it happened.
To bridge this gap, we supplemented the quantitative data from our test with qualitative user research—conducting in-depth user studies to better understand where the friction lay.
Key Insights from User Research
🔹 Users understood that the subscription included prize draws—the interstitial effectively communicated that part.
🔹 However, even after 15 minutes on the site, many users were still confused about:
- Which specific prizes they could win
- Which draws they were entered into
- Whether they needed to pay per draw or if all draws were included in the subscription
Here’s what some users had to say:
💬 “I would be very unlikely to buy due to cost and lack of understanding on how it all works. Do I have to pay per draw? Which prizes are tied to each draw?”
💬 “The ‘How It Works’ section is good, but it still doesn’t make a lot of sense to me right now.”
💬 “I don’t want to sit through a video unless I’m really, really interested. I’d prefer some graphics or bullet points to explain the key details.”
Why Did the First Interstitial Fall Short?
Armed with this qualitative data, we now had a clear reason why the first interstitial didn’t produce a strong result:
✅ The concept (Product Understanding) was correct, but the execution was too generic.
❌ The interstitial answered the wrong questions, failing to address users’ primary concerns.
By integrating quantitative test results with qualitative research, we uncovered the missing link—and put ourselves in a much stronger position to develop the next test iteration.
Next up: refining our execution to directly address the right user questions. 🚀
Interstitial Experiment #2: Refining Execution for Clearer Product Understanding
For our second experiment on the Product Understanding Lever, we retained the core concept of the initial interstitial but refined its execution using key insights from our user testing research.
Key Improvements Based on User Feedback
📌 More precise copywriting – We rewrote the interstitial’s content to directly address the most common user concerns identified in our research.
📌 Clarity on subscription mechanics – We clearly explained how the subscription works, ensuring users no longer had unanswered questions about payments or participation.
📌 Specificity on prize draws – We explicitly outlined which draws users would be entered into upon signing up.
📌 Transparent prize details – We removed ambiguity by specifying which prizes were tied to each draw.
With these improvements in place, we were eager to re-run the experiment in the sign-up funnel. However, the relevant swimlanes were blocked (that section of the site was already reserved for other concurrent tests), so testing in that location was temporarily unavailable.
Adapting Strategy: Testing on the ‘How It Works’ Page
Rather than wait a month for the swimlanes to clear, we leaned on our agile testing philosophy, which prioritizes rapid experimentation to uncover high-value levers as quickly and efficiently as possible.
During user testing, we noticed another pain point:
🔹 Users disliked the format of the existing ‘How It Works’ page, particularly the reliance on video content.
🔹 Many users preferred a concise text-and-icon format rather than watching an explainer video.
With this insight, we decided to deploy our revised interstitial on the ‘How It Works’ page instead.
Test Setup
📌 When users landed on the ‘How It Works’ page, they were presented with a full-page pop-up featuring our newly designed interstitial.
📌 Users then had two options:
✔️ Enter the sign-up flow directly.
✔️ Continue to the main ‘How It Works’ page as usual.
Although this page had lower traffic and higher Minimum Detectable Effects (MDEs) than the sign-up funnel, we saw this as a valuable opportunity to gauge relative performance—giving us early signals on the effectiveness of the revised interstitial before rolling it out on higher-traffic pages.
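To see why lower traffic pushes the MDE up, here is a minimal sketch of the standard two-proportion sample-size calculation. The 5% baseline conversion rate and the effect sizes are illustrative, not the client's actual numbers:

```python
import math

def sample_size_per_arm(base_rate: float, mde_rel: float) -> int:
    """Visitors per arm needed to detect a relative lift `mde_rel` over
    `base_rate` in a two-sided two-proportion z-test (alpha=0.05, power=0.80)."""
    z_alpha, z_beta = 1.96, 0.84
    p1, p2 = base_rate, base_rate * (1 + mde_rel)
    p_bar = (p1 + p2) / 2
    num = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)

# The smaller the lift you need to detect, the more traffic you need;
# on a low-traffic page, only larger effects are detectable in a sane timeframe.
for mde in (0.05, 0.10, 0.20):
    print(f"MDE {mde:.0%}: {sample_size_per_arm(0.05, mde):,} visitors per arm")
```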

We tested our new interstitial against the standard ‘How It Works’ page (which did not include an interstitial), and the results were compelling:
📊 Statistically significant +5.5% increase in Average Order Value (AOV)
While our primary sign-up KPI had yet to reach statistical significance, it showed a strong upward trend—a promising sign that we were moving in the right direction.
But beyond just the raw numbers, this test surfaced a novel insight:
👉 By improving upfront clarity around subscription options (a key confusion point identified in user testing), users were willing to spend more.
Though sign-up conversion data remained inconclusive, these findings further validated that the Product Understanding Lever was driving meaningful improvements.
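Mechanically, the sign-up read comes down to a two-proportion z-test. Here is a minimal sketch with made-up numbers (not the client's data) showing how an apparent lift can trend positive yet fall short of significance:

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> tuple[float, float]:
    """z statistic and two-sided p-value for control (a) vs variant (b)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return z, math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

# Made-up numbers: a +5% relative lift on modest traffic
z, p = two_proportion_z(conv_a=400, n_a=8000, conv_b=420, n_b=8000)
print(f"z = {z:.2f}, p = {p:.3f}")  # p well above 0.05: a trend, not a win yet
```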
Armed with these insights, we were ready for the next step: testing the interstitial directly within the sign-up flow itself.
Interstitial Experiment #3: Deploying in the Sign-Up Funnel
Given all our prior research and the positive signals from Experiment #2, we had growing confidence that our revised interstitial was an effective Product Understanding execution.
So as soon as swimlanes were cleared, we deployed it where it mattered most—directly in the sign-up funnel.
📌 Test Setup:
- Users entering the sign-up funnel were shown the updated How It Works interstitial before proceeding to the first step of the form.
- The new interstitial retained the data-backed refinements from previous iterations, ensuring it directly addressed the most common user pain points.
With the interstitial now positioned at the most critical decision-making moment, we were eager to see if this execution would finally move the needle on sign-ups and revenue. 🚀

When we tested the new interstitial in the sign-up funnel against the original version (which had no interstitial), the results spoke for themselves:
📈 3.7% increase in sign-ups – our primary KPI
With this result, Product Understanding was now fully validated as a high-impact Lever for this client.
Next Steps: Scaling Product Understanding for Maximum Impact
While these results were a major win, our philosophy at Conversion is about continuous iteration—not just isolated successes.
When we find something that works, we double down—aggressively.
(For perspective: with one client, we’ve tested 46 iterations on a single Lever, and it continues to drive gains.)
With this mindset, we moved on to further exploitation of the Product Understanding Lever:
📌 Expanding to Additional Funnels
- We tested the optimized interstitial in another sign-up funnel on the site.
- ✅ Result: 5.4% additional uplift in sign-ups.
📌 Enhancing Product Understanding Within the Funnel
- Using insights from our user testing research, we introduced extra contextual information within the sign-up flow itself.
- ✅ Result: 4% increase in sign-ups.
📌 Future Experiments in Development
- We are currently testing a complete redesign of the ‘How It Works’ page, integrating all the insights gained from user testing and experiment data.
- Additional Product Understanding experiments are in various stages of development.
And we will keep pulling this Lever until it stops delivering results.
The Bigger Picture: Lever Optimization vs. Exploration
At this point, you might think our strategy is simply about identifying one successful Lever and maximizing it indefinitely.
That’s not quite the case.
Yes, we double down on high-performing Levers—but we also allocate substantial energy to exploring new Levers that could unlock even greater value in the future.
For a visual breakdown of how this balancing act works in practice, check out the diagram below. 🔍📊

While we were optimizing Product Understanding, we were also running experiments on other Levers in parallel—including the Value Statement Lever.
📌 Why? Because no single Lever should ever be the sole focus of an experimentation strategy.
Discovering the Impact of Value Statements
Through rigorous testing, we found that Value Statements were another high-impact Lever for this client. Later, we combined Product Understanding with Value Statements to create a new iteration of the interstitial—one that resulted in an even greater uplift in sign-ups.
By taking a multi-lever approach, we were able to compound our gains rather than relying too heavily on any single optimization.
Balancing Exploration & Exploitation for Sustainable Growth
🔹 Exploitation: When a Lever works, we double down and extract maximum value.
🔹 Exploration: At the same time, we experiment with new Levers to unlock additional growth opportunities.
By diversifying our experiments across different Levers, we ensure that we’re never overly dependent on one strategy.
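One simple way to picture this balance is an epsilon-greedy allocation of experiment slots across Levers: most capacity exploits the current best performer, while a fixed share keeps exploring. This is a deliberately simplified sketch with hypothetical win rates; our actual roadmap planning weighs far more factors:

```python
import random

# Hypothetical observed win rates per Lever, updated as experiments conclude
win_rates = {"Product Understanding": 0.55, "Value Statement": 0.40, "Urgency": 0.25}

def pick_lever(epsilon: float = 0.2) -> str:
    """Epsilon-greedy: exploit the best Lever ~80% of the time, explore otherwise."""
    if random.random() < epsilon:
        return random.choice(list(win_rates))   # explore: any Lever at random
    return max(win_rates, key=win_rates.get)    # exploit: current best Lever

random.seed(7)
print([pick_lever() for _ in range(10)])  # allocation of the next 10 experiment slots
```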
This approach allows us to maintain the optimal balance between short-term wins and long-term innovation, ensuring that we continuously deliver measurable results for our clients—both now and in the future. 🚀