We’ve had many conversations around requirements gathering and how to set up attribution differently based on your goals. The next stage is deciding where to pull data from and where to push it. Below, experts share insights on data connectivity in multi-touch attribution for B2B.
Picture this: your data starts in your CRM with some basic attribution tools. As your needs grow, you might find yourself dealing with data from all over the place. You could end up with a mix of raw and cleaned-up data in a data warehouse. To figure out if you’ve outgrown your CRM, ask yourself, “What signs tell me it’s time to move beyond the CRM?”
Often, it’s not the marketing team driving this change. I’ve seen other departments say, “Hey, we need all our data in one place. Marketing can use it too if they want.” The problem? Data often gets dumped into a data lake without a clear plan for using it.
If marketing is behind the move to a data warehouse, it usually means they’re leveling up. They might realize that just tracking campaigns isn’t enough to understand the customer journey. They might want to include product usage data or sales activities too.
Bottom line: if you want a fuller picture of how customers interact with your business beyond just marketing, you need to pull in data from different sources. That’s when a data lake starts to make more sense.
CRM vs. Data Warehouse: Choosing the Right Configuration
In a simple setup, data sits in the CRM and attribution solutions plug directly into it. Over time, touchpoints multiply across sources, and data also ends up in a data warehouse, both in raw form and as post-processed, normalized datasets.
To determine the right configuration for your company, keep asking the question above: what signs indicate this no longer makes sense in the CRM?
When the move to a data warehouse is driven by the marketing team, it often reflects the organization’s maturity. They realize that campaign member data alone isn’t sufficient to understand the buyer journey and want to incorporate product signals and sales activity.
Salesforce Limitations and Considerations
As a longtime Salesforce admin, I often hear requests that exceed CRM capabilities. Consider the following when relying on a CRM for attribution:
- Is all necessary data housed in the CRM? Does adding missing data make sense?
- Which objects would we need to build or customize vs. what’s available out-of-the-box?
- How many joins do we need vs. what’s possible in our CRM?
- Do we want to backfill or supplement existing data?
- How much processing and data storage would we consume in Salesforce?
If the answers to any of these questions don’t align with your CRM’s capabilities, you’ve exceeded the tool’s limits.
The devil is always in the details.
When we backfill campaign data, the default response date is the day you upload the list. You must build custom fields and flows that update the response date to an override date. This is just one example of a nuanced data issue introduced when you change how data is stored across different objects in the system. When indicators or dates are set incorrectly, things go wrong quickly. There’s flexibility in a CRM, and we’ve seen events and tasks successfully incorporated as campaign members to enable attribution. However, it’s very easy for things to go wrong.
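For illustration only, here’s a minimal sketch in Python of the kind of logic an attribution pipeline needs once an override date exists. The field names are assumptions (an `Override_Response_Date__c` custom field populated during backfill, alongside the standard response date), not a prescribed schema:

```python
from datetime import date

def effective_response_date(campaign_member: dict) -> date | None:
    """Prefer a backfilled override date over the default response date.

    Assumes a custom override field is populated by a flow whenever
    historical responses are uploaded; otherwise the standard response
    date (which defaults to the upload date) is used.
    """
    return (
        campaign_member.get("Override_Response_Date__c")  # hypothetical custom field
        or campaign_member.get("FirstRespondedDate")
    )

# A member uploaded in March 2024 whose real response happened in June 2023:
member = {
    "FirstRespondedDate": date(2024, 3, 1),          # defaulted to the upload date
    "Override_Response_Date__c": date(2023, 6, 15),  # true historical response date
}
print(effective_response_date(member))  # 2023-06-15
```

Without the override, every backfilled touchpoint looks like it happened on upload day, which skews any time-based attribution model.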
Also consider data limits in your CRM. Will storing all web interactions impact your limits? How granular do you want this data to be?
Data Warehouse: Normalization and Data Management
Normalization is crucial for useful and meaningful reports. For example, analyzing thousands of job title variations offers little value compared to grouping data by job function and seniority level.
Another example is standardizing domain information, which helps map orphaned contact and lead records back to the right accounts.
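As a rough illustration of both ideas, here’s a sketch in Python with made-up mapping rules (not CaliberMind’s actual normalization logic):

```python
import re

# Illustrative buckets; real rules would be far more extensive.
SENIORITY_PATTERNS = [
    (r"\b(chief|ceo|cfo|cmo|cro)\b", "C-level"),
    (r"\b(vp|vice president)\b", "VP"),
    (r"\bdirector\b", "Director"),
    (r"\bmanager\b", "Manager"),
]
FUNCTION_PATTERNS = [
    (r"market|demand", "Marketing"),
    (r"sales|revenue", "Sales"),
    (r"engineer|developer|technolog", "Engineering"),
]

def normalize_title(raw_title: str) -> dict:
    """Collapse a free-text job title into job function and seniority level."""
    title = raw_title.lower()
    seniority = next((label for pattern, label in SENIORITY_PATTERNS
                      if re.search(pattern, title)), "Individual Contributor")
    function = next((label for pattern, label in FUNCTION_PATTERNS
                     if re.search(pattern, title)), "Other")
    return {"seniority": seniority, "function": function}

def normalize_domain(email_or_url: str) -> str:
    """Standardize a domain so orphaned leads can be matched back to accounts."""
    value = email_or_url.strip().lower()
    value = value.split("@")[-1]              # drop the mailbox part of an email
    value = re.sub(r"^https?://", "", value)  # drop the protocol from a URL
    value = value.split("/")[0]               # drop any path
    return value.removeprefix("www.")

print(normalize_title("Sr. Director of Demand Generation"))
# {'seniority': 'Director', 'function': 'Marketing'}
print(normalize_domain("Jane.Doe@Example.COM"))  # example.com
```

Grouping on the derived columns then yields reports by function and seniority instead of thousands of raw title variants.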
Technology and Skillsets for DIY Attribution
Modern ETL and ELT point solutions make moving data easier. While they can be expensive, they reduce development time and resource requirements.
To support attribution, you need:
- A person who can translate requirements from marketers and executives into technical specifications
- Engaged end users who want to make sense of the data
- A developer for data transportation
- A database manager to maintain connections and table structures
- An analyst to translate requirements into technical instructions
- Someone to train end users on system usage
Incorporating Third-Party Data and Intent
We have learned that third-party data, including intent data, can be valuable for understanding the buyer journey and engagement. However, caution is advised when incorporating it into attribution models.
Intent data can have a broad or narrow scope. For example, G2 might be likened to a garden hose, while Bombora is more like a fire hydrant in terms of data volume and scope.
When considering intent data for attribution, especially for ROI calculations, there are concerns:
- Intent data is often at the organizational level, not individual
- It may not necessarily constitute a buying signal
- There should be a high threshold for including intent data as an event in attribution models
Intent data is particularly useful for understanding engagement and can be a critical component of the buyer journey. However, it should be evaluated critically, and only specific events that clear a high threshold should be included in attribution models.
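To make “high threshold” concrete, here is a hedged sketch in Python (hypothetical field names and score scale) of filtering account-level intent signals before they are allowed to become attribution events:

```python
from dataclasses import dataclass

@dataclass
class IntentSignal:
    account_id: str
    topic: str
    score: float         # provider-specific score; a 0-100 scale is assumed here
    weeks_surging: int    # consecutive weeks the topic has been surging

def qualifies_for_attribution(signal: IntentSignal,
                              min_score: float = 85,
                              min_weeks: int = 3) -> bool:
    """Only let strong, sustained surges through to the attribution model.

    The thresholds are illustrative. The point is that most intent signals
    are account-level research behavior rather than buying signals, so the
    bar for counting one as a touchpoint should be high.
    """
    return signal.score >= min_score and signal.weeks_surging >= min_weeks

signals = [
    IntentSignal("acct-1", "attribution software", score=92, weeks_surging=4),
    IntentSignal("acct-2", "attribution software", score=61, weeks_surging=1),
]
attribution_events = [s for s in signals if qualifies_for_attribution(s)]
print([s.account_id for s in attribution_events])  # ['acct-1']
```

Everything below the threshold can still feed engagement and journey reporting; it just doesn’t earn revenue credit.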
Better Migrations with Better Data
I’m a big believer in storing the raw data in your system, then processing it into more normalized and standardized downstream tables. Keeping the raw data means you can always reprocess it into clean information, and you have a copy on hand should you want to migrate to a new marketing automation platform or CRM.
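A minimal sketch of that layering, with made-up table contents: the raw data is treated as append-only and the normalized table is rebuilt from it.

```python
# Raw rows land exactly as the source system sent them and are never edited.
raw_contacts = [
    {"email": " Jane.Doe@Example.com ", "title": "SR DIR, DEMAND GEN", "source": "legacy_map_export"},
]

def build_normalized_contacts(raw_rows: list[dict]) -> list[dict]:
    """Derive a cleaned, standardized table from raw data without mutating it."""
    return [
        {
            "email": row["email"].strip().lower(),
            "title": row["title"].strip().title(),
            "source": row["source"],
        }
        for row in raw_rows
    ]

normalized_contacts = build_normalized_contacts(raw_contacts)
print(normalized_contacts[0]["email"])  # jane.doe@example.com

# If cleansing rules change, or you migrate platforms, rebuild from raw;
# the original records are still there, untouched.
```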
Plus, typically with marketing reports, you want to know how trends are forming over time. It’s important to have access to historical data and understand how far back you’ll want to look in future years.
Another major benefit of a data warehouse is that it lets you connect historical raw data to the data generated in a new tool. It’s easy to decide which data to bring over, only to discover you forgot a critical segment of contacts or accounts. Standardized information helps you preserve the data points your business cares about, which can inform new tool setup and make change management easier.
Best Practices and Avoiding Mistakes
When building your own attribution solution, consider these key tips:
- Speak with experienced organizations or experts like us at CaliberMind
- Think through what a reporting infrastructure would look like
- Gather requirements and understand adoption factors
- Incorporate one new data source at a time to ease troubleshooting
By carefully considering these aspects, you can create a more effective and useful attribution model that aligns with your organization’s needs and capabilities. Remember to proceed deliberately and have thorough discussions to ensure your solution addresses the right problems and meets user requirements.
Implementing one data source at a time is crucial. This simplifies troubleshooting and allows for better management of time and resources. As you add more data sources, complexity increases rapidly, so a measured approach is beneficial.
Want an expert review of your data stack to optimize your attribution model? Contact us today for a personalized consultation.