The title of this post is overly complicated, but I promise the content of the presentation is beautifully easy to understand (drop a comment if it isn’t).
Glenn Fiedler wrote an insightful post on the cheats and player-exploitable glitches in the Ubisoft video game “Tom Clancy’s The Division”.
He says the game appears to use a trusted-client networking model rather than the more secure server-authoritative model. After reading his post, I had to sit and think for a few minutes to actually understand what he meant.
I’m not a noob to gaming… I created DFBHD maps back in 2004 and have tinkered quite a bit with map-making for Counter-Strike 1.6.
So I decided to go ahead and explain these two concepts in what I hope is an easier way for the average gamer to understand. If the Google Presentation fails to load, here’s the direct link.
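The difference between the two models can be sketched in a few lines of code. This is a hypothetical illustration (all names, values, and packet fields are made up for the example), not how The Division is actually implemented:

```python
# Trusted client vs. server-authoritative movement, in miniature.
# All names and values here are illustrative assumptions.

MAX_SPEED = 10.0  # server's movement rule: units per second


def trusted_client_update(state, client_packet):
    # Trusted client: the server believes whatever position the
    # client reports. A hacked client can report any position at
    # all -- instant teleport and speed cheats.
    state["pos"] = client_packet["pos"]
    return state


def server_authoritative_update(state, client_packet, dt):
    # Server-authoritative: the client only sends *inputs* (e.g. a
    # movement direction); the server runs the simulation itself,
    # so no client can move faster than the game's rules allow.
    move = max(-1.0, min(1.0, client_packet["move_x"]))  # clamp input
    state["pos"] += move * MAX_SPEED * dt
    return state


# A cheater sends an absurd packet to both servers:
print(trusted_client_update({"pos": 0.0}, {"pos": 9999.0}))
print(server_authoritative_update({"pos": 0.0}, {"move_x": 9999.0}, 0.1))
```

The trusted server teleports the cheater to position 9999; the authoritative server clamps the input and moves them one unit, exactly as far as a legitimate player could go in that tick.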
When deciding on a lead scoring model for your enterprise B2B SaaS product, base your scoring on the following points:
- The lever on which your pricing is based. For example, at VWO our pricing depends on the website traffic that you want to A/B test. So this becomes an important question for us to ask on any free-trial or ‘Request a Demo’ form.
- How similar are they to the kind of people who normally buy your product? Here you’ll have rules like [+10 points because the title contains ‘Director’] AND [+20 because the industry is ‘Ecommerce’], and so on.
- The number of people from the same company that have signed up for your free trial. The more the better.
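The three factors above translate directly into a simple rule-based scoring function. Here's a minimal sketch; the thresholds and point values are made up for illustration (they are not VWO's real model), and the field names are assumptions:

```python
# Rule-based lead scoring sketch. Point values and thresholds are
# illustrative assumptions, not a real production model.

def score_lead(lead):
    score = 0

    # 1. The pricing lever: monthly traffic the lead wants to test.
    if lead.get("monthly_traffic", 0) >= 1_000_000:
        score += 30

    # 2. Fit with the typical buyer profile.
    if "Director" in lead.get("title", ""):
        score += 10
    if lead.get("industry") == "Ecommerce":
        score += 20

    # 3. Colleagues from the same company already on a free trial.
    score += 5 * lead.get("signups_from_company", 0)

    return score


score_lead({
    "monthly_traffic": 2_000_000,
    "title": "Director of Marketing",
    "industry": "Ecommerce",
    "signups_from_company": 3,
})  # 30 + 10 + 20 + 15 = 75
```

The point of a model like this isn't statistical rigor; it's that every rule maps to a question your sales team can sanity-check by eye.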
Most people in SaaS expect (and hope?) that their lead-scoring model will be super scientific and involve lots of data… and they scoff at the notion of marketing just walking up to the sales team and asking them which factors should earn points.
So here’s the deal: if you have a LOT of sanitized and well-maintained data, then your data-scientist-delivered lead scoring model will be awesome. If you don’t, like almost every other SaaS in the world, then the data scientist will come back and say they’ve found no real correlations.
In that case, it’s better to just ask your experienced sales colleagues for the factors they consider important, and run a quick analysis of your Google Analytics + marketing automation data to compare converters vs. non-converters on pages per session, time on site, new vs. returning visitors, number of forms filled, etc.
We did this, and the resulting lead score was pretty damn well correlated with whether a lead became an enterprise customer or not.
Again, lots of people might laugh at you, but those people have no context of the [quality + quantity] of data that’s required to build a real, ‘scientific’ lead scoring model.
P.S. — Did you notice that I left in-app activity out of the lead scoring model? Well, here’s where ‘context’ comes in. In-app activity matters when your SaaS is mostly self-service and the majority of your MRR comes from customers adding their credit card details and paying monthly. These are usually apps focused on SMBs.
Enterprise SaaS doesn’t work that way yet. For us, the number of users in the company joining the product demo is a better indicator than their actions inside the app.