Marketing Case – Data Analyst Technical Interview

By: Roman Myskin - Dec. 22, 2025


A manager comes to you and says that they are launching 100 CRM campaigns every month and want to measure how efficient these campaigns are - specifically, whether they attract new users. They also want to understand whether they should launch more campaigns or fewer. Every campaign includes A/B testing with incrementality testing.

Question: How would you measure the success of the CRM channel?

Competencies:


My answer during the meeting

From my perspective, channel efficiency can only be evaluated relative to other channels. The idea is simple: you usually have a limited resource, and you want to allocate it properly, so efficiency lies in resource or budget allocation.

But this is CRM (Customer Relationship Management) - so it’s… free?

That was something I wasn’t prepared for, and interestingly, the interviewer (a Product Analyst, not a Marketing Analyst) didn’t have a clear answer either. We spent some time trying to define what a CRM campaign actually means. I suggested HubSpot, since it integrates with external advertising platforms like Google Ads and Meta Ads, but eventually we agreed that the most realistic example would be push notifications.

Another challenge was A/B testing. I initially struggled to understand how A/B testing could measure the overall CRM channel performance, since A/B tests usually optimize individual campaigns, not the channel as a whole.

At that point, I got stuck because there were no obvious resource constraints. I proposed KPI metrics like conversions and retention rate, but these are absolute metrics - they show levels, not efficiency.

In the end, I started grasping at straws.

I’m walking through my thinking process because a story without mistakes teaches you nothing. I made several:

  1. Stakeholder Management. I didn’t fully consider that I was talking to a Product Analyst. I applied marketing-specific concepts (HubSpot, Google Ads, Meta Ads) instead of focusing on features, experimentation, and statistics, which would have been more appropriate for the audience.
  2. Lack of Clarification. I didn’t understand how to apply A/B testing because I skipped incrementality. Incrementality is the core of this case.
  3. Loss of Focus. In my notebook, I was confidently writing a solution around data-driven attribution, but I failed to focus on the actual problem details: new users, incrementality, and the definition of success.


Incrementality Testing (The Core Concept)

Incrementality testing is a scientific marketing method that uses controlled experiments (test vs. control groups) to measure the true causal impact of a specific campaign or tactic.

It answers the question: “Would this conversion have happened anyway?”

By comparing an exposed group (treatment) with an unexposed group (holdout), we calculate incremental lift - the additional conversions or revenue generated only because of the campaign. This allows us to separate conversions the campaign actually caused from conversions that would have happened anyway.
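
Here is a minimal sketch of that calculation in Python; the group sizes and conversion counts are made up purely for illustration.

```python
# Minimal sketch: incremental lift from a treatment vs. holdout split.
# All numbers below are hypothetical.

n_treatment, conv_treatment = 50_000, 1_100   # users who received the campaign
n_holdout, conv_holdout = 50_000, 1_000       # users deliberately not exposed

rate_treatment = conv_treatment / n_treatment
rate_holdout = conv_holdout / n_holdout

absolute_lift = rate_treatment - rate_holdout           # extra conversions per exposed user
relative_lift = absolute_lift / rate_holdout            # lift as a share of the baseline
incremental_conversions = absolute_lift * n_treatment   # conversions caused by the campaign

print(f"Relative lift: {relative_lift:.1%}, incremental conversions: {incremental_conversions:.0f}")
```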


Applying Incrementality to the CRM Case

The logic is actually very simple.

We already have incrementality testing: every campaign runs an A/B test with an exposed (treatment) group and an unexposed (holdout) group, as described in the case.

So we just need to compare the results of each campaign’s A/B test and determine whether the campaign had a statistically significant incremental impact.


Statistical Framework

A/B testing is based on comparing two groups and relies on the Central Limit Theorem.

In the frequentist paradigm, we state a null hypothesis (the campaign made no difference to the conversion rate) and an alternative hypothesis (it did). We then calculate a p-value, which is the probability of observing our data (or more extreme results) if the null hypothesis were true.

In our scenario, conversion to a new user is binary, so we use a two-proportion Z-test.


Two-Proportion Z-Test

The two-proportion Z-test evaluates whether the difference between two conversion rates is statistically meaningful rather than random.

Common use cases include comparing conversion rates between a treatment and a control group, comparing click-through rates of two message variants, or - as in our case - comparing the share of new users between the exposed and holdout groups. A sketch of the test in Python follows.
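
As a sketch, assuming a single campaign’s results are available as raw counts, the test could be run with statsmodels (the counts below are hypothetical):

```python
# Two-proportion Z-test on hypothetical campaign results.
# Requires: pip install statsmodels
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical counts: new users acquired and group sizes for treatment vs. holdout.
new_users = [1_150, 1_000]       # conversions in treatment, holdout
group_sizes = [50_000, 50_000]   # users in treatment, holdout

z_stat, p_value = proportions_ztest(count=new_users, nobs=group_sizes)

alpha = 0.05
print(f"z = {z_stat:.2f}, p-value = {p_value:.4f}")
if p_value < alpha:
    print("The difference in conversion rates is statistically significant.")
else:
    print("No statistically detectable difference; treat the campaign as non-incremental.")
```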


Examples

1) No Statistical Difference (Non-Incremental)

Relative lift: +0.6%, p-value ≈ 0.67

Result: No statistically detectable impact. The observed lift is indistinguishable from noise.


2) Statistically Significant, but Not Meaningful (~1% Lift)

Relative lift: +1.0%, p-value ≈ 0.04

Result: Statistically significant, but incrementality is marginal. A classic example of “significant ≠ valuable.”


3) Statistically Significant and Clearly Incremental

Relative lift: +12.5%, p-value < 0.0001

Result: Strong incremental impact. A clear candidate for scaling.


Aggregating Campaign Results

Each campaign can be labeled as incremental (a statistically significant and meaningful lift in new users) or non-incremental, based on its A/B test result.

Since campaigns run continuously, we can aggregate these labels by month - for example, counting how many of the 100 campaigns launched in a given month were incremental, as shown in the sketch below.
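
A sketch of this labeling and aggregation in pandas; the column names, thresholds, and numbers (taken from the examples above, plus one extra campaign) are assumptions about how per-campaign results might be stored:

```python
# Sketch: label each campaign from its A/B test result and aggregate by month.
# Column names and thresholds are assumptions, not a prescribed schema.
import pandas as pd

results = pd.DataFrame({
    "campaign_id": [1, 2, 3, 4],
    "month": ["2025-01", "2025-01", "2025-02", "2025-02"],
    "relative_lift": [0.006, 0.010, 0.125, -0.002],
    "p_value": [0.67, 0.04, 0.0001, 0.80],
})

ALPHA = 0.05      # significance threshold
MIN_LIFT = 0.02   # minimum relative lift considered practically meaningful

results["incremental"] = (results["p_value"] < ALPHA) & (results["relative_lift"] >= MIN_LIFT)

monthly = (
    results.groupby("month")["incremental"]
    .agg(total_campaigns="count", incremental_campaigns="sum")
    .reset_index()
)
print(monthly)
```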


Distribution Analysis

At this point, we can look at how the number of incremental campaigns per month is distributed across our history of launches.

If the distribution is approximately normal, we can estimate an expected average number of successful campaigns per month (N).
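
A minimal sketch of that step, assuming the monthly counts from the aggregation above are already available (the values here are made up):

```python
# Sketch: distribution of incremental campaigns per month.
import numpy as np
from scipy import stats

# Hypothetical monthly counts of incremental campaigns (out of 100 launched).
monthly_incremental = np.array([18, 22, 15, 20, 19, 24, 17, 21, 16, 23, 20, 18])

mean_n = monthly_incremental.mean()
std_n = monthly_incremental.std(ddof=1)

# Shapiro-Wilk test as a rough check that normality is plausible.
shapiro_stat, shapiro_p = stats.shapiro(monthly_incremental)

print(f"Expected incremental campaigns per month (N): {mean_n:.1f} ± {std_n:.1f}")
print(f"Shapiro-Wilk p-value: {shapiro_p:.3f} (p > 0.05 means normality is not rejected)")
```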


Why This Is Not Enough

As my physics teacher used to say:

“A negative result is still a result.”

If we decide to launch N campaigns next month, their efficiency may actually be compromised - because successful campaigns often rely on the presence of unsuccessful ones (random variation, exploration, audience overlap).

Also, the fact that the company launches exactly 100 campaigns every month is bad news for analysts: with no variation in campaign volume, historical data alone cannot tell us how results would change if we launched more or fewer campaigns.

So the real question becomes:

Can we grow? Or should we reduce campaign volume?


Using MMM to Answer the Scaling Question

This is where Marketing Mix Modeling (MMM) helps.

MMM is a statistical technique that uses historical data to measure how different marketing activities drive business outcomes and how they scale.

In our case, the marketing activity is the monthly set of CRM campaigns (push notifications) and the business outcome is incremental new users.

Even though we don’t have monetary costs, user exposure volume functions as a resource.


What MMM Answers Here

MMM helps answer two critical questions:

  1. Do CRM campaigns have a measurable impact overall?
  2. Is the channel saturated?

Saturation captures diminishing returns: beyond a certain volume, each additional campaign (or each additional exposed user) yields fewer incremental new users. A sketch of how to check for it follows.
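
A minimal sketch under strong simplifying assumptions: fit a saturating response curve of incremental new users against monthly exposure volume. A real MMM would include other channels, seasonality, and controls; the data and the Hill-type curve here are purely illustrative.

```python
# Minimal saturation sketch: diminishing returns of exposure volume.
# Synthetic data; a real MMM would model many more factors.
import numpy as np
from scipy.optimize import curve_fit

def hill(x, vmax, k):
    """Saturating response: approaches vmax as exposure x grows."""
    return vmax * x / (k + x)

exposure = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])             # millions of pushes sent per month
new_users = np.array([900, 1_600, 2_100, 2_400, 2_600, 2_700])  # incremental new users that month

(vmax, k), _ = curve_fit(hill, exposure, new_users, p0=[3_000.0, 2.0])

# Marginal return of one more million pushes at the current volume (6M).
marginal_gain = hill(7.0, vmax, k) - hill(6.0, vmax, k)
print(f"Estimated ceiling: {vmax:.0f} users; gain from the 7th million pushes: {marginal_gain:.0f}")
```

If the fitted curve is already flat at the current volume, launching more campaigns mainly adds noise; if it is still steep, there is room to grow.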


I can’t guarantee that this is a fully correct answer to the case study, but I followed the given hypothesis and data. In the real world, there are usually more specific questions about campaign performance - no CRM campaign exists in a vacuum. Deeper and more targeted questions are always required. In our scenario, we were given an abstract set of 100 campaigns, and simply determining the “correct” number of campaigns does not automatically translate into actionable decisions.

In this article, I aimed to demonstrate the theoretical framework that a data analyst should be able to present during an interview.

It’s also important to note that there is rarely a single correct answer to a case study - this is just one example of a structured thinking process. In some situations, it’s essential to actively communicate with the interviewer, as they may have a specific answer or direction in mind and can help guide you if you get stuck.


