Posts Tagged ‘modelling’
A great example of a historically grounded forecasting methodology is PECOTA. PECOTA forecasts a player's future performance by first going back in time and finding historical players who are “similar”. It then uses information about how those historical players' careers evolved to project how the current player will evolve. This approach remains about as good as it gets for forecasting player performance.
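As a toy illustration of this similar-comparators idea (not PECOTA's actual algorithm, which is proprietary and far more sophisticated), the sketch below finds the k historical players closest to a current player's stat line and averages their year-over-year changes to form a projection. All player names and numbers here are invented, and a real system would normalize the stats so no single one dominates the similarity measure.

```python
# Toy sketch of similarity-based forecasting in the spirit of PECOTA.
# All players and statistics are invented for illustration only.
import math

# Each historical record: (stats at age 27, stats at age 28),
# where a stat line is (batting average, home runs) -- hypothetical values.
history = {
    "Player A": ((0.280, 25), (0.275, 22)),
    "Player B": ((0.300, 30), (0.310, 33)),
    "Player C": ((0.285, 24), (0.270, 20)),
    "Player D": ((0.260, 15), (0.250, 12)),
}

def distance(a, b):
    """Euclidean distance between two stat lines.
    A real system would normalize each stat first, so that home run
    counts do not swamp batting average."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def project(current_stats, k=2):
    """Project next-season stats by averaging the observed year-over-year
    changes of the k most similar historical players."""
    comps = sorted(history.values(),
                   key=lambda rec: distance(rec[0], current_stats))[:k]
    deltas = [[after_s - before_s for before_s, after_s in zip(before, after)]
              for before, after in comps]
    avg_delta = [sum(ds) / k for ds in zip(*deltas)]
    return tuple(round(s + d, 3) for s, d in zip(current_stats, avg_delta))

# A current 27-year-old hitting .290 with 26 home runs:
print(project((0.290, 26)))  # projects a mild decline, like his comparators
```

The quantitative half of the forecast is the distance calculation and the averaged trajectories; the qualitative half is everything the sketch leaves out — deciding which stats belong in the comparison, and judging when a player's situation is different enough that no historical comparator applies.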
Not all forecasts are this explicitly historical, but every forecast is, at bottom, an exercise in the intelligent selection of historical comparators.
This key relationship explains why forecasts will always need both quantitative and qualitative components. Quantitative components, drawn from numerical data describing the past, are key to anchoring estimates of magnitude in an objective way. Even quantitative measures, though, are susceptible to subjective interpretation and to biases that influence how the data were selected. Used wisely, quantitative evaluation nevertheless provides a degree of dispassion.

Qualitative components are necessary to adjust for limitations in the data and to accommodate the possibility that this time “really is different”. Historical data frequently exists because of convenience or some other business purpose; rarely is it directly applicable to the current problem. As is perhaps obvious, a significant degree of wisdom is required to judge when “this time really is different”.

This post may read like a truism, but I've found that errors in model interpretation and forecasting frequently stem from a failure to appreciate the relationship between history and prediction. Investigating the past comprehensively takes curiosity, energy, and time. It is difficult, even in retrospect, to identify the key causes of historical events; it is exponentially more difficult to select and measure which of those causal relationships will be the key drivers of the future.

Companies would do well to remember that forecasts are as much about the past as they are about the future. The better you know where you've been, and why, the better you will be able to navigate where you're going.
“The Machine Knows” is a classic Office episode that teaches us about actuarial modeling in the following clip:
There are many lessons embedded in these two minutes.
1) Be careful with model interpretation. Michael interpreted the result literally; be warned that if you do the same, you may get very wet.
2) Models are only guides. The Map is Not the Territory; the model is not reality. From the clip, it seems possible that the GPS system (aka the Map, aka the Model) was simply wrong and Michael was following it into disaster (like these individuals did in real life). Even if the GPS was correct, the clip illustrates that reality itself demands attention, regardless of what you believe any model says.
2-alternate) Michael fell prey to the Reification Fallacy, one of the most prevalent and powerful modeling fallacies.
3) If you can’t understand a model, be warned that disastrous results may follow. Models can erroneously embolden you. Always be humble when interpreting model results, and stay open to contrary evidence. All models are wrong; some are useful.
4) Don’t be a passenger in a car driven by someone who takes his models literally. Unless you make your living in disaster recovery.
Tomorrow I’ll post on the analysis of risk, and how that could be applied to this video.