In case you have not heard of Nate Silver, here is the bluffer’s guide. “Nate Silver is the co-founder of FiveThirtyEight, a massively popular data-focused blog that gained fame for the accuracy of its predictions in the 2008 U.S. elections. Silver generates predictions using a clever poll-aggregation technique that accounts for biases, such as pollsters who only call people with landlines.”

The other protagonist in our story, Nassim Taleb, is a rockstar author, a successful options trader and generally a smart, outspoken guy.

The twist in the tale is that Taleb has announced that FiveThirtyEight does not know how to forecast elections. This announcement has triggered a two-year Twitter war between Silver and Taleb. We can learn a lot about forecasting from this debate.

“The primary source of controversy and confusion surrounding FiveThirtyEight’s predictions is that they are ‘probabilistic.’ Practically, this means they do not predict a winner or loser but instead report a likelihood. Further complicating the issue, these predictions are reported as point estimates (sometimes with model-implied error), well in advance of the event…

Their forecasting process is to build a quantitative replica of a system using expert knowledge (elections, sporting events, etc.), then run a Monte Carlo simulation. If the model closely represents the real world, the simulation averages can reliably be used for probabilistic statements. So what FiveThirtyEight is actually saying is:

x% of the time our Monte Carlo simulation resulted in this particular outcome
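
To make the mechanics concrete, here is a minimal sketch, assuming an invented three-state race with made-up poll margins and polling errors, of how a Monte Carlo forecast turns poll averages into a headline win probability. It is not FiveThirtyEight’s actual model; every number and parameter is hypothetical.

```python
# Minimal Monte Carlo election sketch (illustrative; not FiveThirtyEight's model).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical poll-average margins (candidate A minus candidate B) and
# electoral votes for three invented swing states; all other states assumed decided.
poll_margin = np.array([0.02, -0.01, 0.03])   # +2%, -1%, +3%
poll_error  = np.array([0.04, 0.04, 0.05])    # assumed polling error (standard deviation)
elec_votes  = np.array([20, 16, 29])
base_votes_a, votes_needed = 232, 270         # electoral votes assumed "safe" for A

n_sims = 100_000
# Draw a simulated "true" margin for each state in each simulated election.
sim_margins = rng.normal(poll_margin, poll_error, size=(n_sims, 3))
# Add up the electoral votes A wins in each simulated election.
sim_votes_a = base_votes_a + ((sim_margins > 0) * elec_votes).sum(axis=1)

win_prob = (sim_votes_a >= votes_needed).mean()
print(f"Candidate A wins in {win_prob:.0%} of simulations")
```

The headline number is exactly the quoted statement: the share of simulated elections in which that particular outcome occurred.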

The problem is that models are not perfect replicas of the real world and are, as a matter of fact, always wrong in some way. This type of model building allows for some amount of subjectivity in construction. For example, Silver has said on numerous occasions that competing models do not correctly incorporate correlation. When describing modeling approaches, he also makes clear that they tune outcomes (like artificially increasing variance based on the time until an event, or similar adjustments). This creates an infinitely recursive debate as to whose model is the ‘best’ or most like the real world. Of course, to judge this, you could look at who performed better in the long run. This is where things go off the rails a bit.
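
The correlation point is easy to see in a toy simulation. The sketch below, with invented numbers, compares treating state-level polling errors as independent versus letting them share a common national component; the aggregate win probability changes even though each state’s individual odds stay the same.

```python
# Illustrative only: why correlation between state-level polling errors matters.
# Three invented states each lean +2% toward candidate A with a 4% polling error;
# A needs to carry at least two of them.
import numpy as np

rng = np.random.default_rng(1)
n_sims, margin, err = 200_000, 0.02, 0.04

# Case 1: independent polling errors in each state.
indep = rng.normal(margin, err, size=(n_sims, 3))
p_indep = ((indep > 0).sum(axis=1) >= 2).mean()

# Case 2: a shared national error plus smaller state-specific noise
# (same total variance per state, but the errors now move together).
shared = rng.normal(0, err * 0.8, size=(n_sims, 1))
local  = rng.normal(0, err * 0.6, size=(n_sims, 3))
corr_margins = margin + shared + local
p_corr = ((corr_margins > 0).sum(axis=1) >= 2).mean()

print(f"Win probability, independent errors: {p_indep:.1%}")
print(f"Win probability, correlated errors:  {p_corr:.1%}")
```

With correlated errors, a single bad polling miss hits every state at once, so the favourite’s aggregate win probability gets pulled back toward 50% — the kind of adjustment Silver argues competing models get wrong.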

Because FiveThirtyEight only predicts probabilities, they never take an absolute stand on an outcome: no ‘skin in the game’, as Taleb would say. Their readers, however, do not follow suit. In the public eye, FiveThirtyEight is judged on how many events with forecast probabilities above 50% happened and how many below 50% did not (in a binary setting)…

The public can be excused for applying the 50% rule unprompted. For example, in supervised machine learning, a classification model must have a characteristic called a ‘decision boundary.’ This is often decided a priori and is a fundamental part of assessing the quality of the model after it is trained. Above this boundary, the machine believes one thing and below it the opposite (in the binary case)…
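
For illustration, here is a tiny sketch, with invented probabilities and outcomes, of how a decision boundary turns the same probabilistic forecasts into hard calls, and how the implied accuracy changes with the threshold chosen.

```python
# Toy decision-boundary example (all probabilities and outcomes are invented).
import numpy as np

forecast_prob = np.array([0.71, 0.55, 0.48, 0.90, 0.52, 0.58])  # stated P(event happens)
outcome       = np.array([1,    1,    1,    1,    0,    1])     # what actually happened

for threshold in (0.5, 0.6):
    calls = (forecast_prob >= threshold).astype(int)   # above the boundary: call it a "yes"
    accuracy = (calls == outcome).mean()
    print(f"threshold {threshold:.1f}: calls = {calls.tolist()}, accuracy = {accuracy:.0%}")
```

Without a stated threshold, readers have to pick one for themselves, and 50% is the obvious default.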

If FiveThirtyEight has no stated decision boundary, it can be difficult to know how good their model actually is. The confusion is compounded when they are crowned with plaudits of crystal-ball-like precision in 2008 and 2012 (and gladly accept them), thanks to that implied decision boundary. However, when they are accused of being wrong, they fall back on a simple quip: you just don’t understand math and probability…

What is not clear is that there is a factor hidden from the FiveThirtyEight reader. Predictions have two types of uncertainty: aleatory and epistemic. Aleatory uncertainty concerns the randomness within a known system (the probability of rolling a six on a standard die). Epistemic uncertainty concerns uncertainty about the system itself (how many sides does the die have, and so what is the probability of rolling a six?). With the latter, you have to guess both the game and the outcome, like an election!
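
The die example translates directly into a few lines of code. The 50/50 prior over die types below is an arbitrary assumption, used only to show how the two kinds of uncertainty combine.

```python
# Aleatory vs. epistemic uncertainty with the die example.
# (The 50/50 prior over die types is an arbitrary assumption for illustration.)

# Aleatory: the system is fully known; only the roll itself is random.
p_six_known_d6 = 1 / 6

# Epistemic: we are not even sure which die we are holding.
# Suppose we believe there is a 50% chance it is 6-sided and 50% it is 20-sided.
prior = {"d6": 0.5, "d20": 0.5}
p_six_given_die = {"d6": 1 / 6, "d20": 1 / 20}

# The headline "probability of a six" now mixes both kinds of uncertainty,
# and it is only as good as the guess about which game is being played.
p_six = sum(prior[d] * p_six_given_die[d] for d in prior)

print(f"P(six), die known:   {p_six_known_d6:.3f}")
print(f"P(six), die unknown: {p_six:.3f}")
```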

Bespoke models, like FiveThirtyEight’s, only report to the public the aleatory uncertainty of their statistical outputs (inference by Monte Carlo in this case). The trouble is that epistemic uncertainty is very difficult (sometimes impossible) to estimate. For example, why didn’t FiveThirtyEight’s model incorporate, before it happened, the chance that Comey would re-open his investigation into Clinton’s emails? Instead, that event seems to have caused a massive spike in the variance of the prediction, likely because it was impossible to forecast…

I think this is what has Taleb up in arms. The blog feels more like a slick sales pitch, complete with quantitative buzzwords, than unbiased analysis (though it may very well be). If a prediction does not obey some fundamental characteristics, it should not be marketed as a probability. More importantly, a prediction should be judged from the time it is given to the public, not just from the moment before the event. A forecaster should be held responsible for both aleatory and epistemic uncertainty.
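
One standard way to act on that last point, judging a forecast over its whole published life rather than only on the eve of the event, is to average a proper scoring rule such as the Brier score across every day the forecast was public. The passage does not prescribe this method and the daily probabilities below are invented; the sketch only shows how the two ways of grading can diverge.

```python
# Scoring a forecast over its whole path vs. only its final value, using the
# Brier score (squared error between the stated probability and the 0/1 outcome).
# The daily probabilities are invented for illustration.
import numpy as np

daily_prob = np.array([0.85, 0.90, 0.80, 0.88, 0.75, 0.60, 0.55])  # P(candidate wins), day by day
outcome = 0                                                        # the candidate lost

brier_by_day = (daily_prob - outcome) ** 2
print(f"Brier score, final day only:     {brier_by_day[-1]:.3f}")
print(f"Brier score, averaged over path: {brier_by_day.mean():.3f}")
```

Graded only on the final day, this invented forecast looks merely mediocre; graded over its whole life, it looks far worse, which is the sense in which a forecaster is answerable for the entire path.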

When viewed this way, it is clear that FiveThirtyEight reports too much noise leading up to an event and not enough signal. This is great for driving users to read long series of related articles on the same topic, but not rigorous enough to bet your fortune on.”

If you want to read our other published material, please visit https://marcellus.in/blog/

Note: the above material is neither investment research, nor financial advice. Marcellus does not seek payment for or business from this publication in any shape or form. Marcellus Investment Managers is regulated by the Securities and Exchange Board of India as a provider of Portfolio Management Services. Marcellus Investment Managers is also regulated in the United States as an Investment Advisor.

Copyright © 2022 Marcellus Investment Managers Pvt Ltd, All rights reserved.


