How Lead Scoring Has Evolved

Scott is the Senior Manager of Marketing Ops at Dataminr, a Marketo Fearless 50 alumnus, and has a strong background in growth and revenue marketing. Needless to say, lead scoring is a topic very close to his heart. Scott shared his insights on lead scoring's evolution.

"It's something that has definitely evolved quite a bit in the recent past. Historically, it started at a very basic level—maybe scoring a few marketing interactions. Then we added in demographic traits like job titles or functions and tied those to a number. What's happened in the marketing ops space is that a simple scoring model doesn't cut it anymore. You need more variables to get a true representation of the quality of a lead. We're seeing an evolution that incorporates many more data points and gives a far more precise representation of what quality is."

In earlier tools like Marketo, users could only score demographics and engagement. Now, however, there are applications that also take sales interactions (like email replies and meetings) or intent data into consideration.

Machine learning can add value to a lead scoring model, but many marketers have learned to be cautious of AI, especially at small companies with small data sets. Scott has seen this at play and offered his opinion.

"I've actually run into a minimum data set limit myself. But I believe there's a place for AI; I think it's where the industry is heading. Nevertheless, it should be incorporated at a reasonable level. For example, I've seen it used where it was 10%–20% of the weighting in the total score. AI doesn't determine the entire score—it's a small weight from an ABM platform like Terminus, DemandBase, or even 6sense. I've found that their intent scoring has been fairly reliable."

Generally speaking, when you've only got a total of 150 opportunities in your system, you won't meet the minimum data set for machine learning to work well.
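The weighting Scott describes, where an AI or intent signal contributes only a bounded share of the total, can be sketched as a simple blend. Everything here is illustrative: the field names, the 50/50 split between demographic and behavioral scores, and the 15% intent weight are assumptions for the example, not Scott's actual model.

```python
# Hypothetical sketch: blend rule-based scores with an ML/intent signal
# that is capped at a small share (here 15%) of the final score.
# All weights and inputs are assumed to be illustrative, on a 0-100 scale.

def composite_score(demographic: float, behavioral: float, intent: float,
                    intent_weight: float = 0.15) -> float:
    """Combine traditional scoring with a bounded intent component."""
    rule_based = 0.5 * demographic + 0.5 * behavioral  # the classic model
    # The intent signal (e.g. from an ABM platform) never dominates:
    return round((1 - intent_weight) * rule_based + intent_weight * intent, 1)

score = composite_score(demographic=80, behavioral=60, intent=90)
```

Capping `intent_weight` this way lets a team experiment with a vendor's intent data while the familiar rule-based model still drives most of the score.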
The more data, the better the model. If you want to incorporate machine learning into your scoring model, look for a vendor that is transparent about the minimum data set needed to produce meaningful scores.

"You also have to give AI enough time to learn. I wouldn't recommend integrating any kind of AI without it being thoroughly vetted. You also need to make sure the data being used for your lead scoring model is reliable."

Companies that don't yet meet the minimum data set to train a machine learning model against won opportunities may want to consider training the model on a point earlier in the buyer journey. Ultimately, optimizing for revenue should be the goal, but that doesn't mean you can't get value from a machine learning model in the shorter term.

"In my experience, we usually tie a machine learning model back to the won-opportunity level. However, in my conversations with sales leaders, meetings come up quite a bit. It's an element worth considering. One key takeaway I'd like to express is to always think outside the box. There's no one-size-fits-all solution that works for every organization. If something like meetings is a solid signal for your business, pursue it. You'll find what the primary focus should be by having conversations with sales leadership and marketing leadership and by hosting simple focus groups. That way, you can understand how any model is landing with the frontline teams who deal directly with prospects on a day-to-day basis."

A word of caution when it comes to optimizing for meetings: if you pair SDRs with account executives, always check that their definitions of a qualified opportunity are in alignment. Any disagreement could signal that optimizing for meetings will not lead to an increase in revenue.
Common Modeling Missteps (& How to Avoid Them)

"A lot of the missteps I've seen stem from a lack of communication between the teams involved. Marketers get an image of the ideal lead in their heads. Too often, that doesn't line up with what the sales team thinks. Marketers are frequently in a position where they're throwing leads over the wall and assuming they'll stick because the leads make sense to them. Solid relationships and a clear line of communication with sales leaders are essential to avoid misalignment.

"One major piece of advice I have for marketing ops professionals is to build strong relationships with their SDR teams. The best feedback, in my opinion, comes from your SDRs or BDRs. They're on the frontlines. They're the ones calling all these people.

"Another great tip is to always make data-driven decisions. We may think we know best, but when we actually pull the data, we can see a completely different story. It's important to perform a full analysis, such as comparing title data against closed opportunities. A deep dive will help you understand what is or isn't working. Don't assume it. Confirm it."

Scott also shared how he differentiates between a scoring model being wrong and a message being poorly executed.

"You really have to take a holistic approach when you're analyzing that type of thing. Again, data is very important. You can always tie results back to personas if you structure the data properly. You can see what's resonating and what isn't.

"It takes looking at multiple factors to get a proper understanding. You have to look at what you're actually offering, who's using it, get feedback from sales teams, and then analyze the data. Only then can you make your decision.

"I recommend taking it a step further and reviewing some of the sales interactions to see how those conversations go.
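The "don't assume it, confirm it" analysis Scott recommends, comparing title data against closed opportunities, could be sketched like this. The lead records and job titles below are invented for illustration; a real analysis would pull them from the CRM.

```python
# Minimal sketch of a persona conversion analysis: for each job title,
# compute the share of leads that became closed-won opportunities.
# The records below are hypothetical stand-ins for CRM data.
from collections import defaultdict

leads = [
    {"title": "VP Marketing",          "closed_won": True},
    {"title": "VP Marketing",          "closed_won": False},
    {"title": "Marketing Ops Manager", "closed_won": True},
    {"title": "Marketing Ops Manager", "closed_won": True},
    {"title": "Intern",                "closed_won": False},
]

totals = defaultdict(int)  # leads seen per title
wins = defaultdict(int)    # closed-won per title
for lead in leads:
    totals[lead["title"]] += 1
    if lead["closed_won"]:
        wins[lead["title"]] += 1

conversion = {title: wins[title] / totals[title] for title in totals}
```

Even a simple table like `conversion` can contradict gut feel: the title marketing assumed was ideal may convert worse than one the model currently underscores.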
When you do this, you can get a very good gauge of how well your messaging works from the prospect's response."

Another missed opportunity is failing to communicate wins and proactively advocate for your scoring model.

"If you take a data-driven approach—which I think is absolutely essential today—you can come to a leadership meeting with concrete information on what converts and what doesn't. That has worked very well for me in the past."
Lead Scoring Should Be a Science

Besides conversion rates, there are other data layers that can be factored in when assigning lead scores. Scott offered his insights on taking a more scientific approach to lead scoring.

"It takes a real understanding of the full interaction. It's no longer okay to simply say, 'This person downloaded an e-book, so let's give them a specific amount of points.' You need additional intelligence to properly score leads.

"For example, someone can download content and not actually read it. The same goes for event and webinar attendees. Historically, we've given everyone the maximum amount of points. Having a system that monitors engagement—whether it's a PDF tracker or webinar engagement data—is hugely beneficial. You can adjust the lead score according to the person's rate of consumption.

"Don't just give everyone a flat score for attending an event. Make it proportionate to what they actually did. That's taking it to a more scientific level because you're analyzing their actual participation."

Scott also talked about layering account engagement on top of a lead engagement score.

"I recently rolled out a lead scoring model with demographic, firmographic, and behavioral elements. I think it's important to have a concurrent account scoring model, and it's easy to tap into that as well. I've found salespeople are very happy when you give them that full package. It's an easy sell when you're talking about a lead scoring model that incorporates demographic, firmographic, and behavioral elements. The response has been amazing when I start talking about who the person is, where they're from, and what they're working with, in addition to how they interacted with our brand."
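Scoring by rate of consumption rather than mere attendance could be sketched as scaling a flat score by engagement. The 50-point base and the watch fractions below are assumptions for the example, not values from Scott's model.

```python
# Hypothetical sketch: scale a flat event score by how much of the
# content the lead actually consumed (e.g. fraction of a webinar watched),
# instead of giving every attendee the maximum points.

def webinar_score(base_points: int, watched_fraction: float) -> int:
    """Award points in proportion to actual engagement."""
    # Clamp to [0, 1] so bad tracking data can't over- or under-score.
    watched_fraction = max(0.0, min(1.0, watched_fraction))
    return round(base_points * watched_fraction)

full_viewer = webinar_score(50, 1.0)   # stayed for the whole webinar
drive_by = webinar_score(50, 0.2)      # dipped in briefly, then left
```

The same idea applies to PDF trackers: replace `watched_fraction` with the share of pages read, and the flat "downloaded an e-book" score becomes a measure of real participation.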
For more best practices on engagement scoring, listen to the full Revenue Marketing Report episode at the top of the article or wherever you listen to podcasts.