Part 4: Want Forecast Accuracy? Get to Know Your Sales Forecasting Process.
In this fourth and final installment in our series on forecast accuracy, we’ll examine the forecast process itself to identify ways in which that process can foster, or hinder, forecast accuracy.
If you’ve done the work to understand your clients and their buying process, to design a sales process that is responsive to that buying process, and to deploy a sales information system that captures essential data related to that process, you’re ready to forecast. Over time, you should be well-positioned to prepare forecasts that are accurate predictors of actual results. However, we have seen many sales organizations fail to achieve forecast accuracy even after they’ve built their processes and systems following the guidelines we’ve provided throughout this series.
Why does this happen, and, after all this work, how do you correct it?
In many of our engagements, we’ve seen truly inspiring work done to lay the foundation for accurate forecasting, only to have it all go wrong because leadership failed to ensure organization-wide clarity on their expectations about the output of the forecasting process. Often this lack of clarity revolves around very fundamental issues that can be easily corrected through preparation and training:
- What time intervals are covered?
- What’s the expected level of accuracy for each time interval?
- What level of discretion does middle-management have in adjusting their team’s forecasts before they are submitted to executive leadership?
- What are the implications for each sales rep regarding his or her forecast accuracy over the long run?
Leaders will do well to consider the responses to these questions carefully, because each impacts the others in supporting (or obstructing) forecast accuracy. For example, we recently worked with a client who sold professional services with highly variable average sales values to clients who operated in a highly reactive buying environment. Despite this, sales leadership was expected to deliver rolling quarterly forecasts covering the current quarter and the next two quarters, for a potential forecast horizon of up to 9 months. Once a quarter was included in the forecast, the forecast for that quarter was considered the sales team’s “commit” value for that period. Meanwhile, high-performing sales reps submitted lowball forecasts so as not to call attention to themselves throughout the quarter, while underperforming reps submitted forecasts they had little confidence in achieving, just so they had something on the board.
Executive management’s forecast requirements created conflicting motivations throughout the sales organization, as sales leadership had to ensure that each quarter was achievable over an extremely long forecast horizon with highly variable sales opportunity values, both on the high and low side.
One way to improve this situation is to apply the concepts we discussed in part 3 of this series to corroborate each rep’s forecast values. This company’s sales were relationship-driven. By tracking the status of the sales reps’ relationships with key buyer stakeholders, we were able to provide a second “proof point” for each sales opportunity in addition to sales stage, based on the correlation between relationship development and sales stage movement. We designed a process of reconciling each forecast with the actual results for the same period in order to highlight for further examination any opportunities whose forecast value had been off (high or low) by a target percentage and/or dollar value. Not only did this information give our client a place to begin their quality assurance process for the next period’s forecast, it also created a culture of accountability down to the rep level.
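To make the reconciliation step concrete, here is a minimal sketch of the kind of variance check described above: compare each opportunity’s forecast value to its actual result and flag any miss that exceeds a percentage or dollar threshold. The field names, thresholds, and data shape are illustrative assumptions, not our client’s actual system.

```python
# Hypothetical sketch of forecast-vs-actual reconciliation: flag any
# opportunity whose forecast missed actuals by more than a percentage
# or dollar threshold. All field names and thresholds are illustrative.

def flag_variances(opportunities, pct_threshold=0.20, dollar_threshold=25_000):
    """Return opportunities whose forecast missed actuals beyond either threshold."""
    flagged = []
    for opp in opportunities:
        forecast = opp["forecast_value"]
        actual = opp["actual_value"]
        variance = actual - forecast
        # Guard against division by zero for zero-value forecasts.
        pct_variance = abs(variance) / forecast if forecast else float("inf")
        if abs(variance) >= dollar_threshold or pct_variance >= pct_threshold:
            flagged.append({**opp, "variance": variance, "pct_variance": pct_variance})
    return flagged

opps = [
    {"id": "A-101", "forecast_value": 100_000, "actual_value": 98_000},  # within tolerance
    {"id": "A-102", "forecast_value": 50_000,  "actual_value": 90_000},  # closed well above forecast
    {"id": "A-103", "forecast_value": 120_000, "actual_value": 0},       # slipped entirely
]
for opp in flag_variances(opps):
    print(opp["id"], f"{opp['variance']:+,}")  # A-102 +40,000 / A-103 -120,000
```

A list like this becomes the starting agenda for the next period’s forecast quality review: each flagged opportunity gets examined for what the original forecast missed.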
In order to address the highly unpredictable nature of close dates, we created an algorithm that combined time horizon and opportunity stage to remove from the forecast all revenue for those opportunities that had not achieved a minimum sales stage in each forecast period. After a bit of tweaking over a few periods, this algorithm provided an objective, repeatable way of adjusting the risk out of individual rep forecast submissions.
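The stage-gating rule can be sketched along these lines: for each forecast period, require a minimum sales stage before an opportunity’s revenue counts, with the bar set higher the nearer the period. The stage numbers, horizon buckets, and minimums below are hypothetical placeholders for values a team would tune over a few periods, as described above.

```python
# Illustrative sketch of the stage-gating rule: drop from the forecast any
# opportunity that has not reached the minimum sales stage required for
# its forecast period. Stage numbers and minimums are hypothetical.

# Minimum stage required for inclusion, keyed by how many quarters out
# the opportunity's expected close date falls (0 = current quarter).
MIN_STAGE_BY_QUARTER = {0: 4, 1: 3, 2: 2}

def gated_forecast(opportunities):
    """Sum forecastable revenue, excluding opportunities below the stage gate."""
    total = 0
    for opp in opportunities:
        min_stage = MIN_STAGE_BY_QUARTER.get(opp["quarters_out"])
        if min_stage is None:
            continue  # beyond the forecast horizon entirely
        if opp["stage"] >= min_stage:
            total += opp["value"]
    return total

pipeline = [
    {"value": 80_000, "stage": 5, "quarters_out": 0},  # late-stage, current quarter: included
    {"value": 60_000, "stage": 2, "quarters_out": 0},  # too early-stage for this quarter: gated out
    {"value": 40_000, "stage": 3, "quarters_out": 1},  # meets next-quarter bar: included
    {"value": 30_000, "stage": 2, "quarters_out": 3},  # beyond horizon: excluded
]
print(gated_forecast(pipeline))  # prints 120000
```

Because the rule is just a lookup table plus a comparison, it is objective and repeatable: tuning it over a few periods means adjusting the minimums, not re-litigating individual deals.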
It’s easy to think of forecasting as a purely administrative process that simply compiles information against a pre-defined set of criteria. But, as we saw in this engagement and we continue to see in others, if you want your forecasting process to yield consistently accurate results, you must approach forecasting as an organic process that considers a variety of perspectives and motivations and utilizes a number of approaches and data points to ensure the ultimate quality of its output.
To get you started on the path to more accurate forecasting, we close this blog series with a few Do’s and Don’ts that will sharpen your efforts and lay a well-thought-out foundation upon which you can build your own forecasting process.
DO begin with organization-wide agreement on how the forecast will be used. Will it be used as the basis for corporate planning? Over what forecast horizon? Will it be submitted to the board essentially unfiltered, or will executive management modify it? These factors may influence the level of accuracy you should plan for. You may also want to consider deploying several types of forecasts that allow you to use the same or similar forecast information for different purposes. For example, you might use a revenue plan as a static prediction of full-year sales, a revenue outlook as a rolling long-term (say, longer than 2 quarters) prediction of sales, and a revenue forecast as a “take it to the bank” prediction of the next 3 to 6 months’ sales. Using different forecast types such as these would allow you to apply different accuracy expectations to each.
DO consider carefully all the factors that can cause your team’s overall sales results to fluctuate as you determine the level of forecast accuracy you expect. If the products and services in your average sales transaction are subject to highly variable sales values, if client demand is in any way not fully controlled by the client (a phenomenon often seen in the legal services industry, for example), or if your sales are highly concentrated among a small number of clients, you should expect your forecast accuracy to vary from period to period.
DO develop risk mitigation techniques as part of your forecast quality assurance procedures. As we stated in our previous point, the more variable your overall sales results may be, the more important it is that you create techniques to remove some level of risk from your forecast. Better to beat forecast than to constantly be exposed to material shortfall.
DO develop secondary data points to substantiate the accuracy of forecast timing. You must get into the habit of consistently questioning your reps about key aspects of their opportunities that affect forecast accuracy. If an opportunity has been in the pipeline longer than your average sales cycle, what makes the rep believe the client is ready to move now? What is the financial case for the client to buy from your company versus from a competitor? Have all demos, business case presentations, pilot tests, proofs of concept, and any other pre-close activities typical to your industry been completed and their milestones successfully passed?
DON’T use the forecast as a motivational tool for your sales reps. The sales forecast is simply not the right tool for this purpose. Remember the old saying: “Be careful what you wish for, because you may get it.” There’s no faster way to produce unreliable, overly optimistic forecasts than to tell your reps their job security depends on showing consistently higher forecast values month after month.