
DIY B2B Attribution: When to Connect Tools & How

Posted June 20, 2023

We’ve had many conversations around requirements gathering and how you need to set attribution up differently based on what you’re trying to do. We’ve also explored data hygiene and best practices. The next stage involves discussing where we need to pull data from and where we’ll be pushing it. Misha Salkinder, Director of Customer Data Strategy at CaliberMind, shares his insights on data connectivity in multi-touch attribution for B2B.

You can envision a situation where data sits in the CRM, and there are attribution solutions that can plug into that. But that can evolve into much more complex touchpoints coming from various sources. Data can be sitting in a data warehouse both in its raw form and in a post-processed, normalized dataset, too. To figure out which configuration is right for your company, ask yourself “What sorts of signs will let me know this doesn’t make sense in the CRM anymore?”

This decision isn’t always driven by the end users. What I’ve seen more often is that other teams have said, “Guess what? We need a unified dataset, and marketing can utilize it if they like.” In my experience, data gets dumped into a data lake without a clear understanding of how it will be used later.

If the decision to leverage a data warehouse was driven by the marketing team, it would align more with the maturity of the marketing organization. Perhaps they realized campaign member data alone isn’t enough to understand the buyer journey. Perhaps they want to incorporate product signals and sales activity.

If you want to incorporate different types of interactions (outside of marketing) and have a more realistic representation of the buyer journey, you need to start collecting data from different sources. Something like a data lake would be more fitting in this scenario.

Salesforce Has Its Limits, Too

As a longtime Salesforce admin, I was often asked to do things that made sense to the requester but exceeded what was possible in a CRM. Take the following into consideration when relying on a CRM for attribution:

  • Is all of the data we need housed in our CRM? Does it make sense to add what’s missing?
  • Which objects do we want to leverage vs. what is out-of-the-box?
  • How many joins do we need vs. what is possible in our CRM?
  • Do we want to backfill or supplement existing data?
  • How much processing and data storage can we borrow from Salesforce? 

If the answers to any of these questions are out of alignment with what we know is achievable in our CRM, we’ve exceeded the capabilities of the tools we have. I’ve seen admins replicate sales activity into campaign members because the tool they used for attribution only looked at campaign data. We’ve also seen them consider pushing web activity into those objects.

The devil is always in the details.

When we backfill campaign data, the default response date is the day you upload the list. You must build custom fields and flows that update the response date to an override date. This is just one example of a nuanced issue that’s introduced when you change how data is stored across objects in the system. When indicators or dates are set incorrectly, things go wrong quickly. There’s flexibility in a CRM, and we’ve seen events and tasks successfully incorporated as campaign members to enable attribution. However, it’s very easy for things to go wrong.
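
To make the override pattern concrete, here is a minimal Python sketch that prepares backfilled campaign member payloads and carries the historical response date into a custom override field. The field name Original_Response_Date__c, the CSV columns, and the campaign ID are assumptions for illustration; the actual upload (via your ETL tool or an API client) and the flow that copies the override into the standard response date are left out.

```python
import csv
from datetime import date

# "Original_Response_Date__c" is a hypothetical custom field -- it assumes your
# org has added an override field that a flow copies into the standard
# response date after insert.
OVERRIDE_FIELD = "Original_Response_Date__c"

def build_campaign_members(csv_path, campaign_id):
    """Turn a historical response list into CampaignMember payloads.

    Without the override field, every backfilled record would default to
    the upload date as its response date.
    """
    records = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            records.append({
                "CampaignId": campaign_id,
                "ContactId": row["contact_id"],
                "Status": "Responded",
                OVERRIDE_FIELD: row.get("responded_on") or date.today().isoformat(),
            })
    return records

if __name__ == "__main__":
    payloads = build_campaign_members("historical_responses.csv", "701XXXXXXXXXXXX")
    print(f"Prepared {len(payloads)} campaign member payloads")
```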

Also consider data limits in your CRM. Will storing all web interactions impact your limits? How granular do you want this data to be?

Another Argument for a Data Warehouse: Normalization & Data Management

Normalization of data is a very, very big part of having useful and meaningful reports. For example, the title field on the lead and contact objects is often used by many departments and even populated by different enrichment sources. Analyzing thousands of job title variations offers little value compared to being able to group the data by job function and seniority level. Other examples of fields that are more meaningfully analyzed with normalized data include Region, Industry, Sub-Industry, and Technologies Owned. 
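As a sketch of what that grouping can look like, here is a minimal Python example that buckets free-text titles into seniority and function. The keyword rules are illustrative assumptions, not a complete mapping.

```python
import re

# Illustrative keyword rules -- a real mapping would be far larger and is
# usually maintained as reference data rather than code.
SENIORITY_RULES = [
    (r"\b(chief|ceo|cmo|cfo|cro|cxo)\b", "C-Level"),
    (r"\b(vp|vice president)\b", "VP"),
    (r"\bdirector\b", "Director"),
    (r"\b(manager|lead)\b", "Manager"),
]
FUNCTION_RULES = [
    (r"\b(marketing|demand gen|growth)", "Marketing"),
    (r"\b(sales|account executive|sdr|bdr)\b", "Sales"),
    (r"\b(engineer|developer|data|analytics)", "Technical"),
]

def normalize_title(title: str) -> dict:
    """Map a free-text job title to seniority and function buckets."""
    t = title.lower()
    seniority = next((label for pattern, label in SENIORITY_RULES if re.search(pattern, t)), "Other")
    function = next((label for pattern, label in FUNCTION_RULES if re.search(pattern, t)), "Other")
    return {"seniority": seniority, "function": function}

print(normalize_title("Sr. Director, Demand Generation"))
# {'seniority': 'Director', 'function': 'Marketing'}
```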

Another great example of the importance of normalizing data is domain information. There are many variations of how people input website information. If we can standardize that data and use a domain, then reference that domain when analyzing email addresses, we have an opportunity to map back orphaned contact and lead records.
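A minimal Python sketch of that standardization might look like the following; real-world cleanup handles many more edge cases, such as subdomains and free email providers.

```python
from urllib.parse import urlparse

def normalize_domain(raw: str) -> str:
    """Reduce a website or email input to a bare, comparable domain."""
    value = raw.strip().lower()
    if "@" in value:                      # email address -> keep only the domain part
        value = value.split("@", 1)[1]
    if "://" not in value:                # urlparse needs a scheme to find the host
        value = "https://" + value
    host = urlparse(value).netloc or value
    return host.removeprefix("www.")

examples = ["https://www.Example.com/pricing", "WWW.example.com", "jane.doe@example.com"]
print({e: normalize_domain(e) for e in examples})
# All three resolve to 'example.com', so orphaned leads and contacts can be
# mapped back to the same account.
```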

It’s possible to normalize some of this data using formula fields in your CRM, but you’ll run into text limits very quickly. This is another moment we recommend asking yourself, “Is there an external resource we can utilize for this?”

What Technology (Outside of a Warehouse) is Needed to Support DIY Attribution?

Today, there are tools that make transporting data much easier, such as ETL and ELT point solutions. Before tools like Fivetran or Boomi existed, a separate API connection had to be developed for each integration point. While these data transport solutions can be expensive, they reduce the development time and resources needed to build and maintain integrations.

However, even when working with such tools, if you sync data without vetting and selecting which objects come through, you can end up with a costly data ballooning issue.
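
One simple guardrail is an explicit allowlist of objects and columns applied before anything syncs. The sketch below is generic Python, not any particular ETL vendor's configuration format, and the object and field choices are only examples.

```python
# A generic sketch of an object/column allowlist applied before sync.
SYNC_ALLOWLIST = {
    "Account":        ["Id", "Name", "Website", "Industry"],
    "Opportunity":    ["Id", "AccountId", "StageName", "Amount", "CloseDate"],
    "CampaignMember": ["Id", "CampaignId", "ContactId", "Status", "FirstRespondedDate"],
}

def filter_payload(object_name: str, record: dict) -> dict | None:
    """Drop objects that aren't allowlisted and strip columns nobody asked for."""
    columns = SYNC_ALLOWLIST.get(object_name)
    if columns is None:
        return None  # object not vetted -- don't let it balloon the warehouse
    return {k: v for k, v in record.items() if k in columns}

print(filter_payload("Opportunity", {"Id": "006XX", "Amount": 5000, "Description": "very long text"}))
# {'Id': '006XX', 'Amount': 5000}
```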

A way to potentially reduce data connection costs is to leverage free connectors offered by your CRM or marketing automation system. If you can get the data into your CRM, it’s one less place you’re connecting to. 

Unfortunately, marketing data often sits in spreadsheets, and you have to find ways of incorporating that as well. Remember that data meaningful to your team’s reporting may also be sitting on an SFTP server or a shared drive somewhere, and you’ll want to incorporate it too.

Which Skillsets Do You Need to Support Attribution?

Attribution configuration should change depending on what end users want to accomplish. There needs to be a person who can take requirements from marketers and other executives and translate them into what has to happen technically to support those asks. We’ve seen scenarios where there were a lot of marketing end users – and a lot of requirements for what they wanted to see – but there wasn’t a technical person on staff who could normalize the data or create unique keys across tables to prevent duplicate data and nonsensical reporting. Those projects always fail.

On the other hand, you must also have engaged end users to produce something people want to use. These are people who ultimately want to make sense of this data. You don’t want to create reports that aren’t very meaningful to the board or the CMO. If you design these reports in a silo and focus only on the technical “correct-ness” of your models, you won’t see adoption of those models. Adoption is entirely dependent on delivering an end product that people want to use.

Unfortunately, what people ask for isn’t always what they want. It takes a lot of skill to wade through those requirements and pull out what people actually want to use the report for.

When we hear, “We actually have a centralized data warehouse. It’s all in there, but we can’t show the CMO anything of value,” we know that the main problem is a lack of communication and ability to translate asks into what people really want. It’s vital to get an understanding of what is available, what’s possible, and even where data might be duplicated. 

For example, if you’re not familiar with how campaign data flows between systems, it’s easy to accidentally duplicate campaign data when merging records from a system like Salesforce with data from Marketo.
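
As a rough illustration of the guard that prevents this, assume each Marketo touch carries a reference to its synced Salesforce campaign. The field names below, such as sfdc_campaign_id, are hypothetical.

```python
def merge_campaign_touches(sfdc_touches, marketo_touches):
    """Merge touch records from both systems, skipping Marketo rows that
    mirror a campaign membership already present from Salesforce."""
    seen = {(t["email"], t["campaign_id"]) for t in sfdc_touches}
    merged = list(sfdc_touches)
    for t in marketo_touches:
        key = (t["email"], t.get("sfdc_campaign_id"))
        if key in seen:
            continue  # already represented by the synced Salesforce copy
        merged.append(t)
    return merged

sfdc = [{"email": "jane@example.com", "campaign_id": "701A"}]
mkto = [{"email": "jane@example.com", "sfdc_campaign_id": "701A", "program": "Webinar Q2"}]
print(merge_campaign_touches(sfdc, mkto))  # the Marketo copy is skipped
```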

Determine whether you’ll need a developer resource for data transportation, a database manager to keep all of the connections running and tables structured, an analyst to translate requirements into technical instructions, and someone who can spend time with the end users and train them to use the system. These skillsets rarely exist in a single person.

Better Migrations with Better Data

I’m a big believer in having the raw data stored in your system, then processing it into more normalized and standardized subsequent tables. Storing the raw data gives you the ability to revert to clean information or have a copy should you want to migrate to a new marketing automation platform or CRM. 
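
A minimal pandas sketch of that layering, with hypothetical file paths and column names, keeps the raw export untouched and does all cleanup in a derived table.

```python
import pandas as pd

# Raw layer: loaded exactly as exported and never modified in place.
# "raw/crm_contacts_export.csv" and the "email" column are assumed for illustration.
raw_contacts = pd.read_csv("raw/crm_contacts_export.csv")

# Derived layer: normalization happens here, so the raw copy survives a future
# migration or a change in cleaning rules.
contacts_clean = raw_contacts.copy()
contacts_clean["email"] = contacts_clean["email"].str.strip().str.lower()
contacts_clean["domain"] = contacts_clean["email"].str.split("@").str[-1]

contacts_clean.to_csv("staged/contacts_normalized.csv", index=False)
```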

Plus, typically with marketing reports, you want to know how trends are forming over time. It’s important to have access to historical data and understand how far back you’ll want to look in future years.

Another major benefit of having a data warehouse is that it lets you connect previous raw data to new data in a new tool. It’s easy to decide which data you want to include, only to discover you forgot a critical segment of contacts or accounts. Having standardized information helps you maintain the data points your business finds important, which can inform new tool setup and make change management easier.

How Can We Avoid Mistakes?

When it comes to a project like building your own attribution solution, I highly recommend speaking to organizations that have gone through the process, or speaking to us at CaliberMind. We’re always happy to share which pitfalls can be easily avoided with the right planning.

When it comes to building out attribution, proceed with caution. Think through what a reporting infrastructure would look like. Have internal conversations to gather requirements and understand what it will take for your solution to be adopted. It will save you a whole lot of time and headaches on the back end. 

If you can figure out upfront what all of those people want to use attribution for, you can get ahead of the need for different models based on the questions they’re trying to answer, or course-correct development that isn’t going to solve a specific problem.

I would also recommend incorporating one new data source at a time. Incorporating many data sources simultaneously in addition to building out a model can make it much more difficult to troubleshoot. A time-cost analysis has to be taken into consideration. It’s easy to spend many cycles testing data, and it gets complicated quickly as you layer on more data sources.

Would you like a review of your data stack by an attribution expert? Contact us here.