When launching or even revising a scoring model, the temptation is to solve it in one go: to spend all your effort up front defining the perfect set of elements to include, and how best to weight them.
There’s nothing wrong with methodical consideration here, but it tends to drag on too long, and the fact is, it’s hard to know how the model will really perform until you push it live.
Better to build a first stab at the model, and iterate as real data shows you how it performs.
The best thing about this approach is that some elements of a scoring model are always trickier to implement than others. Product usage, for instance, is usually a tricky variable to score on, at least compared to something like job titles. It’s absolutely a variable to consider including in your model, but it isn’t necessarily required for day one. You can get the easy stuff live first, and add the more in-depth measurements later.
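To make that concrete, here’s a minimal sketch of what a “day one” model might look like, scoring only on the cheap-to-wire-up variables and leaving a clear seam for product usage later. The field names (job_title, email), the keyword list, and the point values are all hypothetical, not recommendations:

```python
# Illustrative point values for a few easy, day-one variables.
TITLE_SCORES = {
    "vp": 20,
    "director": 15,
    "manager": 10,
}

FREE_EMAIL_DOMAINS = {"gmail.com", "yahoo.com", "hotmail.com"}


def score_lead(lead: dict) -> int:
    """Score a lead on the variables that are cheap to implement first."""
    score = 0

    # Job title: simple keyword match against the title string.
    title = lead.get("job_title", "").lower()
    for keyword, points in TITLE_SCORES.items():
        if keyword in title:
            score += points
            break

    # Business email vs. free email provider.
    domain = lead.get("email", "").split("@")[-1].lower()
    if domain and domain not in FREE_EMAIL_DOMAINS:
        score += 5

    # Product usage is deliberately deferred: add it here once the usage
    # data is available, rather than holding the whole model for it.
    return score


if __name__ == "__main__":
    print(score_lead({"job_title": "VP of Marketing", "email": "ana@acme.com"}))  # 25
```

The point isn’t the specific rules; it’s that a model this simple can go live now, and the product-usage piece slots in later without reworking anything.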
Most of all, once real data starts coming in, you’ll realize there were variables you didn’t even consider before that now need to be added. No amount of pre-work will make those easier to identify ahead of time.