Natalie Jackson: “Forecasts based on polls and election ‘fundamentals,’ like what FiveThirtyEight, The Economist, and Nate Silver produce, are more intriguing from an empirical standpoint. The statistical machinery is genuinely challenging and interesting to work on: combining national and state-level polls, economic factors, incumbency factors, and vote history, and then spinning it all up into a state-by-state prediction that can be used to simulate presidential elections. We’re talking thousands and thousands of lines of code.”
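To make the simulation step concrete, here is a minimal sketch of the final stage of such a model: Monte Carlo simulation of the Electoral College from state-level win probabilities. All numbers below are hypothetical placeholders, not anyone's actual forecast, and the sketch treats states as independent coin flips purely for simplicity.

```python
import random

# Hypothetical per-state win probabilities for one candidate, paired
# with the electoral votes at stake -- illustrative values only.
STATES = {
    "Pennsylvania": (0.55, 19),
    "Georgia": (0.48, 16),
    "Arizona": (0.47, 11),
    "Wisconsin": (0.53, 10),
    "Nevada": (0.50, 6),
}
SAFE_EV = 226  # electoral votes assumed already locked in (hypothetical)

def simulate_election() -> int:
    """Draw one election outcome: flip a weighted coin in each state."""
    ev = SAFE_EV
    for win_prob, votes in STATES.values():
        if random.random() < win_prob:
            ev += votes
    return ev

def win_probability(trials: int = 100_000) -> float:
    """Fraction of simulated elections reaching 270 electoral votes."""
    wins = sum(simulate_election() >= 270 for _ in range(trials))
    return wins / trials

if __name__ == "__main__":
    print(f"Simulated win probability: {win_probability():.1%}")
```

Real forecasting models draw correlated errors across states rather than flipping independent coins, which is part of why their headline probabilities are less extreme than an independence assumption would produce.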
“Sounds pretty empirical, right? It is, in the sense that you build a big model that takes in a ton of data and puts out a ton of data. But every decision about what goes into the model is subjective: The modeler decides what polls are used, whether they are adjusted for the quality or past accuracy of the pollster, how much any individual poll is able to move the trend, which economic indicators to use, which political factors are important, and how all those are coded in. Make a different decision at any step, and you change the model’s predictions…”
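To illustrate how a single subjective decision shifts the output, here is a small sketch comparing two defensible poll-weighting rules applied to the same hypothetical polls. Neither the weighting rules nor the numbers come from any actual forecaster; the quality grades are invented for the example.

```python
# Hypothetical polls: (candidate_share, pollster_grade), where grade is
# a made-up 0-to-1 quality score assigned by the modeler.
polls = [
    (0.52, 0.9),  # high-quality pollster
    (0.48, 0.4),  # low-quality pollster
    (0.47, 0.3),
    (0.51, 0.8),
]

def equal_weight_average(polls):
    """Treat every poll the same."""
    return sum(share for share, _ in polls) / len(polls)

def quality_weighted_average(polls):
    """Weight each poll by its (subjectively assigned) quality grade."""
    total_weight = sum(grade for _, grade in polls)
    return sum(share * grade for share, grade in polls) / total_weight

print(f"Equal weights:   {equal_weight_average(polls):.3f}")
print(f"Quality weights: {quality_weighted_average(polls):.3f}")
```

With these numbers, equal weighting puts the candidate behind at 49.5 percent while quality weighting puts them ahead at 50.4 percent: same data, different subjective choice, different story.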
“We can calculate probabilities all day long, but we have no idea how accurate they are. Polls have margins of error, plus additional error that those margins don’t capture. Models have error of their own. The judgment calls made by those constructing the models introduce error. We don’t know how big all of that cumulative error is.”
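As a rough illustration of why the cumulative error exceeds any reported figure, here is a sketch that stacks hypothetical non-sampling error terms on top of a poll's stated margin of error. The extra standard deviations are placeholders, since, as Jackson notes, their true sizes are unknown.

```python
import math

# A poll's reported margin of error covers sampling error only. Total
# error also includes components we can't observe directly; the sizes
# below are hypothetical stand-ins, not measured quantities.
sampling_moe = 3.0     # reported +/- margin of error (points)
house_effect_sd = 1.5  # hypothetical pollster-specific bias spread
model_choice_sd = 2.0  # hypothetical spread across defensible model specs

# Treat the components as independent, so their variances add.
sampling_sd = sampling_moe / 1.96  # convert a 95% MOE to a std dev
total_sd = math.sqrt(sampling_sd**2 + house_effect_sd**2 + model_choice_sd**2)
total_moe = 1.96 * total_sd

print(f"Reported margin of error:           +/-{sampling_moe:.1f} points")
print(f"Margin including extra error terms: +/-{total_moe:.1f} points")
```

Even with these modest placeholder values, the effective margin nearly doubles, from 3.0 to about 5.7 points, and nothing in the code can tell us whether the placeholders themselves are right.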