We recently asked top industry experts the following question:
As we enter the second half of 2015, have companies made the adjustments necessary to make lead scoring work, or is the status quo killing results?
Why did we ask? Because it has become clear that marketing automation is making it easier than ever to generate poor quality leads. And sales is sick of it.
Here's the problem. Lead scoring models are:
- Based on assumptions.
- Built with inadequate sales input.
- Overly weighted toward arbitrary behavioral signals.
Furthermore, lead scoring teams frequently neglect to establish a baseline or make ongoing adjustments based on feedback and results.
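To make the critique concrete, here is a minimal sketch of the kind of points-based model the experts are describing. Every signal name and point value below is hypothetical, and that's exactly the problem: in practice these weights are set by assumption, not derived from data.

```python
# A typical points-based lead scoring model. The weights are guesses --
# which is the flaw the experts above are calling out.
ASSUMED_WEIGHTS = {
    "opened_email": 5,           # arbitrary behavioral signal
    "visited_pricing_page": 20,  # assumed to indicate buying intent
    "downloaded_whitepaper": 10,
    "title_is_vp_or_above": 25,  # guessed firmographic fit
}

def score_lead(lead: dict) -> int:
    """Sum the points for every signal this lead has triggered."""
    return sum(points for signal, points in ASSUMED_WEIGHTS.items()
               if lead.get(signal))

lead = {"opened_email": True, "visited_pricing_page": True}
print(score_lead(lead))  # 25
```

Nothing in this model establishes a baseline or adjusts the weights based on which leads actually closed, so it will confidently produce scores whether or not they predict anything.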
So, we compiled the experts' responses and wrote three blogs that summarize what they have to say.
Here's part two.
| Part 1 | Part 2 | Part 3 |
| --- | --- | --- |
| Trish Bertuzzi | Kyle Porter | Matt Heinz |
| Ardath Albee | Todd Schnick | Lori Richardson |
| Tony Jaros | Pam Hege | Jamie Turner |
| Amanda Kahlow | Rafe VanDenBerg | Chad Burmeister |
The Sales Development Rep Funnel (Fishing with a Spear) Approach
Kyle Porter of the red-hot company SalesLoft (which just welcomed Derek Grant of Pardot fame to the team) says:
Our clients don't rank leads.
They rely on Sales Development Reps (SDRs) to qualify and book appointments with SQLs. Think of it as putting humans between inbound leads and the sales executives. It works like this:
- Inbound SDRs individually process as many as 800 leads per month.
- They create a single workflow using a top-of-the-funnel product that combines their emails, calls, and social touch points, and even builds in accountability.
- They contact, qualify, and convert an average of 12% of leads to SQLs (demos).
- Finally, these SQLs are passed to account executives to close.
I recently read a quote by Jon Miller (Marketo founder and now the founder of the new company to watch called Engagio): "Demand generation [via marketing automation] is a highly efficient model for certain kinds (emphasis added) of businesses." Miller compares marketing automation to fishing with a net and account based marketing automation to fishing with a spear. Funnels, in Miller's opinion, should not exist when the target market is finite because you cannot afford to lose ANY prospects.
I think that is a part of what Kyle is saying as well.
The Relying on Algorithms Approach
Intrepid's Todd Schnick says this:
I think people rely too much on technology and algorithms to determine their sales path forward, especially when that technology solution is below par.
The focus needs to be on building friendly relationships. It's much easier to pitch a close colleague than to push a "name in a field" from some outdated or inaccurate database.
The Pretending It Is Working Approach
Another new contributor, Pam Hege, Managing Partner at Homeport Marketing, had quite a lot of on-point observations about the topic:
In my experience, lead scoring remains just another marketing 'to do' that gets checked off as completed with little follow-up on whether or not it is working.
Three Reasons Why This is the Case:
Marketing automation vendors have over-simplified lead scoring.
Most vendors position lead scoring as part of the platform set-up completed through the following 3-step process:
- Determine your ideal target.
- Align sales and marketing objectives.
- Select the scoring criteria.
The truth is that successful lead scoring systems are created with a strategic and data-driven approach regardless of the technology used to implement them.
Lead scoring requires an understanding of what success looks like, and at most companies that remains unknown.
- Most companies are not using any form of predictive analysis to build their lead scoring; it is all gut instinct.
- The result: Poorly designed lead scoring systems based on intuition that just don't work.
Taking the guesswork out of lead scoring will not happen until marketing automation systems have predictive engines built into them.
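As a hedged sketch of what "predictive" could mean here in its simplest form: instead of assigning point values by gut instinct, derive each signal's weight from its historical conversion rate. The signal names and outcome data below are entirely hypothetical.

```python
# Replace guessed point values with weights learned from outcomes.
from collections import defaultdict

# Hypothetical history: (signals the lead showed, did it become a customer?)
history = [
    ({"visited_pricing_page"}, True),
    ({"opened_email"}, False),
    ({"visited_pricing_page", "opened_email"}, True),
    ({"opened_email"}, False),
]

shown = defaultdict(int)      # how often each signal appeared
converted = defaultdict(int)  # how often it appeared on a won lead

for signals, won in history:
    for s in signals:
        shown[s] += 1
        if won:
            converted[s] += 1

# Empirical conversion rate per signal replaces intuition-based scoring.
weights = {s: converted[s] / shown[s] for s in shown}
print(weights)  # pricing-page visits convert far better than email opens
```

Even this crude baseline does what Pam argues most systems don't: it starts from what actually happened and can be re-run as new results come in.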
Companies do not recognize that their lead scoring systems are broken.
Marketing flips the switch and lets it run, and the sales team ignores the scores.
Few companies are going to blame poor revenue performance on their lead scoring system, and so the problems will not be diagnosed or repaired.
I am hopeful the recent buyer interaction findings from SiriusDecisions’ 2015 B2B Buyer Study give marketing and sales leadership a legitimate reason to step back and assess their entire lead-to-revenue process.
If they do, I think they’ll find and address the gaps in their lead scoring systems. Without that end-to-end assessment, lead scoring’s impact on revenue performance will remain minimal.
The Guesswork Approach
Rafe VanDenBerg, editor in chief at MindBrew, adds his perspective:
Assumptions, opinions, and guesswork are still the basis for far too many decisions in business today.
- Most sales and marketing teams just assume that certain prospects are good targets.
- They just guess about their prospects' hot-buttons and priorities.
- And of course, everyone has an opinion about why things aren't working the way they want them to.
So, why would lead scoring algorithms and designs be any different?
- They're often established with very few facts and little data to begin with.
- Then, imbued with this "data-driven" air of accuracy, they somehow become locked-in and entrenched.
That said, some teams are indeed making adjustments. Which ones? The teams whose only assumption is that whatever they start with will be wrong and will require ongoing adjustment based on in-market feedback and performance. That's my two cents.
The common themes are:
- You can't rely on technology. Unless you sell a low-ticket commodity, we recommend limiting your marketing automation strategy to the bottom 35-40% of the market (the low end).
- Marketing automation is never checked off the to-do list.
- Build relationships, not databases of useless names.
Coming Up Next
In the next blog in this series, you’ll hear from the following experts:
Matt Heinz talks about how easy yet dangerous it is to stick with the status quo. His daring advice: "Admit you were wrong in the first place."
Lori Richardson believes more and more sales and marketing teams are reaching agreement on the basic definition of a good lead.
Jamie Turner talks about the fact that models will never completely reflect reality but that companies are finding ways to adapt.
Chad Burmeister is excited about predictive analytics, despite what he sees as their shortcomings.
Rich Wilson offers a lot of advice about lead scoring, particularly the importance of collaboration.