Search for “campaign effectiveness” in Google, and you’ll see that a lot of people are asking what campaign effectiveness means and how to measure it:

It’s a question that a lot of people ask because the default metrics alone aren’t enough.
Campaign Metrics
Before we go into how to measure campaign effectiveness, let’s review the standard and advanced relevant metrics.
Name Acquisition
The most rudimentary CRM setup typically accommodates new name acquisition or leads created from a campaign. The easiest variation of this involves setting up a form in a marketing automation platform that passes the new contact information through a program that associates the person record with a campaign record and updates the lead source to the campaign type.
Pro Tips:
- Don’t overwrite the Lead Source if there is already a value.
- Preferably set up a requirement that the Lead Source is selected when the record is created to avoid blanks.
- We recommend including Partner, Inside Sales, Field Sales, and Customer Success as Lead Source values.
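The first two tips above boil down to a simple rule: only write the Lead Source when it's blank. A minimal sketch of that rule (field names here are illustrative, not any specific CRM's API):

```python
def set_lead_source(record: dict, campaign_type: str) -> dict:
    """Set the Lead Source from the campaign type only when it's blank.

    Never overwrite an existing value -- the original source is the one
    worth preserving for reporting.
    """
    if not record.get("lead_source"):
        record["lead_source"] = campaign_type
    return record
```

In practice, the "require a value at creation" tip is enforced as a validation rule in the CRM itself; the sketch above covers the overwrite-protection half.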
What It Means
Many campaigns are intended to draw in more prospects to your database and continue to build awareness. A name acquisition metric helps you track which sources generate more names for your marketing automation platform. These include list purchases and sales prospecting.
Campaigns with high name acquisition stats are likely to be trade shows (pre-2020), in-person events, online events, and gated content. Demo requests are less likely to be name acquisition heavy, but that doesn't mean demo requests are bad; it means the person requesting a demo probably heard about you at an event or interacted with gated content first. The demo request is a great example of why name acquisition should not be the only metric you use to judge a campaign.
Lead/Account Funnel
CRMs and marketing automation platforms have an inherently person-centric view. This means those of us selling B2B struggle to aggregate data at the account level when it’s split across multiple objects (see the most common data gaps in marketing analytics here).
Pro Tip: If at all possible, avoid using the Lead object in your CRM. It may work best to have the object available for form fills only and then use an automated process to merge duplicates, find account matches, and convert the record into a contact and (when appropriate) account. Restrict access for all other data entry sources (including humans).
Marketing Qualified Account
Ideally, marketers have a customer data platform that automatically assigns person and account records a universal ID that can be used to properly aggregate data. A big bonus to using CaliberMind's platform is machine learning, which flags accounts for sales that look like accounts that have evolved into an opportunity. This is great news for B2B sellers that have a long sales cycle involving multiple people on a buyer committee. In our beta, we've seen accounts land with sales 3-4 weeks earlier than with the standard demand generation waterfall.

Marketing Accepted Lead
This is simply a person in your marketing automation platform database who has not yet progressed further along the funnel.
Marketing Qualified Lead
If a robust platform does not automatically clean your data, you can still find useful information in the traditional demand generation waterfall. A marketing qualified lead is a person who meets a threshold that indicates they are ready to talk to sales. In our experience, content syndication and gated content are great ways to increase your database size, but those leads typically aren't ready for sales. Push them into a nurture campaign and wait for them to engage in more meaningful ways. If someone attends a webinar, in-person event, or (even better) requests a demo, then they can be considered a marketing qualified lead and passed to sales.
Pro Tip: Be careful when using your ideal customer profile (ICP) to gate marketing qualified leads. If your database is missing key information due to a lack of enrichment/data collection, you may inadvertently gate an interested contact that is a fit. Always account for null (blank) values.
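The null-handling tip above is easy to get wrong: a naive equality check treats a missing firmographic field as a mismatch and silently filters out a good-fit contact. One way to account for blanks (field names and the ICP structure here are our own illustration):

```python
def passes_icp_gate(contact: dict, icp: dict) -> bool:
    """Check a contact against ICP criteria, treating missing values as unknown.

    A None/missing field means "we don't know yet" (un-enriched data),
    so it should not disqualify the contact. Only a known, mismatched
    value fails the gate.
    """
    for field, expected in icp.items():
        value = contact.get(field)
        if value is None:
            continue  # unknown: don't disqualify on missing data
        if value != expected:
            return False
    return True
```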
Sales Accepted Lead
In organizations that have an established lead workflow, you may be able to capture when sales accepts the lead. This may involve a “Working” status on the lead object to indicate it’s a good fit and sales is trying to follow up. Alternatively, you may assume a lead reaches this stage if it’s converted into an opportunity or activity is logged on the record.
Pro Tip: Establishing a “Disqualified” status on the lead record is a good idea, but sales should be trained on what it means. We recommend only using the status when the data is clearly fake (e.g., Donald Duck). “Not ICP” is a great status to use for sales to indicate the prospect doesn’t fit the buying profile today. These will need to be revisited if you add additional product lines or features that expand your target audience.
Inside Sales Meeting Set
Organizations that utilize an SDR team should leverage a status prior to Sales Qualified Opportunity to indicate when the SDR believes they have a qualified meeting set. This will help you determine whether your teams are aligned when you start to look at conversion rates.
Sales Qualified Opportunity
Sales qualified opportunities are exactly what they sound like: leads that have progressed into opportunities.
What It Means
While these metrics can help determine one leg of campaign effectiveness (lead generation), the real advantage is in monitoring your workflows. The conversion points, irrespective of the campaign, can also help uncover hand-off issues between teams. For example, if your Inside Sales Meeting Set to Sales Qualified Opportunity conversion is low, you may have a misalignment between inside and field sales. Similarly, if the Marketing Qualified Lead to Sales Accepted Lead conversion rate is low, you may need to revisit your Marketing Qualified Lead definition.
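Spotting those hand-off issues is just a matter of computing stage-to-stage conversion rates across the funnel. A small sketch (stage names and counts are hypothetical):

```python
def funnel_conversion(stage_counts):
    """Compute stage-to-stage conversion rates from an ordered funnel.

    stage_counts: ordered list of (stage_name, count) pairs, top of
    funnel first. Returns a dict of "Stage A -> Stage B" rates so a
    weak hand-off between two teams stands out.
    """
    rates = {}
    for (name_a, count_a), (name_b, count_b) in zip(stage_counts, stage_counts[1:]):
        rates[f"{name_a} -> {name_b}"] = count_b / count_a if count_a else 0.0
    return rates
```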
Campaign Engagement
Engagement scoring can be tricky to aggregate at the campaign level and snapshot for a given point in time. Our data experts have figured it out, and we use it to gauge campaign interaction without bias for revenue. Because the engagement score isn't weighted by deal size (just the quality of engagement), we can determine how much activity a campaign generated across accounts.
While many of us want metrics tied to revenue, that isn't appropriate in the days or weeks after kicking off a campaign. We need to use earlier measurements to determine effectiveness.
We can also flip the axis and look at campaign engagement by account. If we wanted to see the buyer journey for a promising opportunity, this would be a great way to figure out how to replicate the journey for look-alike accounts.

Primary Campaign
Primary campaign can be calculated in a number of ways. The most common is grabbing the last campaign to take place on the primary contact prior to opportunity creation. Some tools are able to automatically calculate the last touch across the account and append the record to the opportunity as the primary campaign. Others force the salesperson to manually select a campaign (such as Inside Sales, Field Sales, or Partner) if one hasn’t been automatically appended.
Be careful of “active status” dependent campaign selection. In tools using status-dependent selection, the salesperson must update the contact or lead record’s status to indicate they’ve accepted the lead, are working the lead, and ultimately converted the lead. If they don’t follow the process and accidentally “close” out the campaign member record, you’ve lost your primary campaign.
In our opinion, thoughtfully constructed data rules to calculate the primary campaign passively in the background are best. Be sure to think through whether you want to incorporate partner and sales activity when defining your rules (particularly if zero marketing campaign activity took place immediately prior to the opportunity being created). Note that you may want to reserve Primary Campaign for marketing generated activity and save Opportunity Source (up next) for calculating department contribution.
Pro Tip: Think carefully about how you determine whether or not a campaign should be considered primary. Consider putting a time limit on the campaign. If the interaction happened a year before the opportunity was created, it probably wasn’t responsible for generating the opportunity. If the campaign interaction happened weeks before, it was probably meaningful.
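Putting the "calculate it passively with a time limit" advice together, a primary campaign rule might look something like this sketch (the 90-day window and data shapes are our own assumptions, not a recommendation for every business):

```python
from datetime import datetime, timedelta

def pick_primary_campaign(touches, opp_created, window_days=90):
    """Pick the last campaign touch within a window before opportunity creation.

    touches: list of (campaign_name, touch_datetime) pairs across the account.
    Touches older than `window_days` probably didn't generate the opportunity,
    so they're excluded. Returns None when nothing qualifies.
    """
    cutoff = opp_created - timedelta(days=window_days)
    eligible = [(c, d) for c, d in touches if cutoff <= d <= opp_created]
    if not eligible:
        return None  # no recent marketing touch; leave primary campaign blank
    return max(eligible, key=lambda t: t[1])[0]
```

A rule like this runs in the background, so nothing is lost when a salesperson forgets to update a campaign member status.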
What It Means
Whereas name acquisition measured how many people a campaign added to your database, this indicator measures how effective a campaign is at encouraging people to commit to the buying process.
Opportunity Source
In theory, Opportunity Source flags which department generated the opportunity. In its most rudimentary form, sales teams are required to indicate which department sourced the opportunity. This becomes very problematic when departments are compensated on opportunity creation stats (because they inevitably pressure sales to promote their contribution). At its most complex, a model looks across the entire account and prioritizes activities that take place within a specific window of time prior to opportunity creation.
While we understand the intent of Opportunity Source, opportunities are usually a result of multiple departments working together to generate interest.
What It Means
The theory is that this is a metric to help the business determine whether each department is pulling its weight. In the case of organizations that leverage partner programs, the goal is often ⅓ distribution across marketing, sales, and partner departments. In other cases, the goal is 50/50 marketing and sales.
Pro Tip: If you rely on user entry and use the metric as a compensation variable, you would be wise to be very suspicious of this stat's accuracy.
Campaign Attribution
Campaign attribution comes in many different flavors. Here’s the short rundown of different types of campaign attribution.
Single-Touch models operate on the assumption that a single activity is pivotal in generating an opportunity.
Single-Touch: First Touch
First touch attribution indicates the first campaign that inspired interaction with a prospect. This is not equivalent to name acquisition. You may acquire a name through a list purchase, and the first touch may be a gated content download after you nurture them via an email program.
Single Touch: Last Touch
The first touch is the first proactive engagement with an account, but in B2B marketing, there are likely a number of interactions that take place between the first touch and the opportunity creation date. Last touch snapshots the last campaign interaction to take place prior to opportunity creation.
Multi-Touch models acknowledge that multiple touch points are required to generate an opportunity and attempt to distribute pipeline/revenue credit in some proportion across these touches. They consider all touches in a designated time frame: X months before the opportunity is created to the date the opportunity is closed. Less common is X months before opportunity creation to the date the opportunity is created.
Pro Tip: Considering partner and sales activity will add legitimacy to your model if you’re attempting to use it in pipeline/revenue contribution calculations. If you want to do more than calculate contribution to creation (in other words, calculate contribution to revenue), consider using a date range that extends to the opportunity close date.
Multi-Touch: Linear
Linear attribution applies an even weight across all campaign interactions in a designated time frame.
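As a concrete illustration, an even split means each touch gets amount ÷ number-of-touches, with a campaign's credit accumulating if it appears more than once in the window (a simplified sketch, ignoring time-frame filtering):

```python
def linear_attribution(touches, amount):
    """Evenly distribute pipeline/revenue credit across all touches.

    touches: list of campaign names, one entry per touch in the window.
    A campaign touched twice earns two equal shares.
    """
    credit = {}
    share = amount / len(touches)
    for campaign in touches:
        credit[campaign] = credit.get(campaign, 0.0) + share
    return credit
```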
Multi-Touch: Time Decay
Time decay assumes that the further an interaction is from a significant event (commonly the last touch), the less weight it carries. With CaliberMind, the significant point dynamically updates depending on how far the engagement progresses. These points of significance are the point of MQA, Opportunity Conversion, or Closed Won date.
Multi-Touch: W-Shaped
W-Shaped heavily weighs three significant touch points (first touch, marketing qualification, and opportunity creation being most common) and then distributes the remaining percentage points across all other touches.
Multi-Touch: U-Shaped
U-Shaped heavily weighs two significant touch points (first touch and qualification being most common) and then distributes the remaining percentage points across all other touches.
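To make the shape concrete, here is a U-shaped sketch using the commonly cited 40/40/20 split (the percentages and edge-case handling are illustrative assumptions; vendors vary). W-shaped works the same way with a third 30% anchor at opportunity creation:

```python
def u_shaped_weights(n, qual_index):
    """U-shaped attribution weights for n touches.

    40% to the first touch, 40% to the qualification touch at
    qual_index, and the remaining 20% split evenly across the rest.
    """
    weights = [0.0] * n
    weights[0] += 0.4
    weights[qual_index] += 0.4
    others = [i for i in range(n) if i not in (0, qual_index)]
    if others:
        for i in others:
            weights[i] = 0.2 / len(others)
    else:
        # Too few touches to spread the remainder; fold it into the anchor.
        weights[qual_index] += 0.2
    return weights
```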
Multi-Touch: Chain-Based
This model leverages machine learning to analyze winning opportunity buyer journeys and weight touch points according to influence. Learn more about this model here.
Pro Tip: Note that some multi-touch attribution vendors can further split out attribution into funnel stages or customized stages. For example, at CaliberMind we use “Source” attribution to indicate touch points that took place prior to opportunity creation and “Influence” attribution to indicate touch points that took place after opportunity creation.
What It Means
Single point attribution gives 100% of your pipeline and revenue credit to a single touchpoint. Multi-touch attribution splits credit across touch points in an attempt to demonstrate the value of each touchpoint in relation to pipeline and/or revenue. This includes awareness/early-stage campaigns and late-stage campaigns meant to help sales maintain momentum. These campaigns are typically not captured by the single-point metrics listed earlier in this article.
Campaign Effectiveness
Ideally, businesses have the technology necessary for a robust multi-touch attribution model that can incorporate marketing, sales, and partner activity to determine contribution. This makes budget exercises and determining campaign ROI possible.
Even if a company has multi-touch attribution, there are times when additional campaign metrics are not only useful but necessary. Attribution takes time to accumulate, especially when sales cycles are long. Many of us need to stay on top of campaign ad performance well before opportunities are converted.
We'll look at how CaliberMind uses our own technology to evaluate campaign effectiveness, and then we'll look at another company's use case.
Real-World Example 1
CaliberMind creates guides for marketers on a regular basis, and some are more effective than others. Sometimes we like to dive into the weeds on an interesting (to us) topic when people really just want to know how to identify engaged accounts or how to design an ICP model.
In this case, we released a guide on Chain-Based Insights, the next step in marketing analytics after Chain-Based Attribution. It highlighted use cases beyond simple attribution where people find value in Chain-Based logic, such as likely lost opportunities or measuring the removal effect (calculating the opportunity cost of not doing a campaign).
To determine early campaign performance, we looked at:
- LinkedIn Cost Per Click
- LinkedIn Result Volume
- Landing Page Conversion
- Marketing Qualified Accounts
- Engagement
Landing Page Metrics:
We compared these statistics to past guide campaigns that followed a similar playbook. We discovered two things:
- LinkedIn had changed its algorithms and campaign types. We selected Website Visits, and although we got a high volume of visits, very few people filled out the form. We should have chosen the Website Conversion campaign type.
- The accounts downloading the guide weren’t continuing to engage with our site. We had very few demo requests generated from this campaign.
Based on a hunch that people would find our Marketing Qualified Account discovery useful, we released the MQA vs. Traditional Lead Scoring guide. We looked at the same metrics over the same time period:
- LinkedIn Cost Per Click
- LinkedIn Result Volume
- Landing Page Conversion
- Marketing Qualified Accounts
- Engagement
Landing Page Metrics:

Looking at First Touch:

Looking at Engagement stats side-by-side:

While we’d love to say there was a giant improvement between the two campaigns, we can only say we generated the same number of engaged accounts in a shorter timeframe. Page conversions increased and bounce rate marginally decreased. Anecdotal feedback from sales was more positive. We shall see how multi-touch attribution numbers progress.
Real-World Example 2
A technology startup in Seattle selling advanced network analytics capabilities had a problem. They were attending a large volume of in-person events, but they weren’t sure the spend was justified. At an aggregate level, in-person events didn’t generate the same conversion rates as digital, and the anecdotal feedback from the sales team was that a portion of the events were a waste of time (wrong audience).
Unfortunately, people couldn’t agree on which events were not worth doing. The finance team pulled together a list of stats that included:
- Name acquisition
- Marketing Qualified Leads
- Inside Sales Meetings Set
- Opportunities Created (Last Touch)
- Pipeline Created (Last Touch)
- Revenue Won (Last Touch)
A couple of the trade shows most loved by sales were amongst the worst in terms of revenue return.
To make sense of the conflicting information, Anna Mowry, VP of Finance & Operations at Igneous, developed a more comprehensive campaign review (to be clear, the company we’re referencing is not Igneous). She added in multi-touch attribution statistics and met with the marketing and sales leadership teams on a quarterly basis to review the campaigns that looked the worst on paper.
Her new spreadsheet included:
- Name acquisition
- MQL
- Meetings Set
- Opportunities Created (Last Touch)
- Pipeline Created (Last Touch)
- Revenue Won (Last Touch)
- Pipeline Created (Multi-Touch)
- Revenue Won (Multi-Touch)
- Anecdotal feedback
“From a financial perspective, I want to see a campaign is generating revenue–either directly or by influencing opportunities–before we sign up for it again next year. When only looking at primary campaigns, we had quite a bit of conflict between the anecdotal feedback we received from sales management and the numbers. Adding in influence helped us be more confident in whether or not a campaign made sense, and we could point to the results when a single person had a contrasting opinion.” — Anna Mowry, VP of Finance & Operations at Igneous
Finance found that some campaigns did have a greater impact on late-funnel momentum (user groups in particular) and was able to justify continuing some of the spend. The team was also equipped to push back on campaigns that had abysmal results across the board.
On Measuring Campaign Effectiveness
It’s helpful to think about where your campaign is most likely to move the needle (which stage in the buyer journey) and compare similar campaigns to gauge effectiveness. For example:
- Gated content and content syndication are likely to be first touch or awareness campaigns
- Demo requests and in-person meetings are likely to be last-touch campaigns
- Some online and in-person events (like user groups) will be more geared towards late funnel or in-flight opportunities
Accordingly, you would expect:
- Awareness campaigns to perform well in terms of name acquisition and early influence
- Demo Requests to perform well with marketing qualified account and pipeline creation
- Late funnel activities to show a reasonable contribution to pipeline and revenue influence using Multi-touch Attribution
Measuring campaigns according to what they are intended to do is the smartest way to gauge effectiveness.
Have questions or need help? We’re here to answer your questions.