A new episode of Bad Takes is out today, all about the debate over Louisa May Alcott’s gender identity.
Each December here at Slow Boring, I like to end the year with both a list of probabilistic predictions for the year ahead and a look back at the previous December’s predictions. According to the research presented by Philip Tetlock and Dan Gardner in their book “Superforecasting,” practicing predictions in this way can help you get better at predicting things over time.
And I sure hope that’s true because last December, when I did my first-ever look back, my results were terrible and I’ve been dreading this post all year.
The good news is that this year my forecasts did in fact get better — 75 percent of the things I said would happen with 90 percent confidence actually happened, as did 83 percent of the things I had 80 percent confidence in, 70 percent of the things I had 70 percent confidence in, and 50 percent of the things I had 60 percent confidence in.
Remember, the goal here is not to get all the predictions right (which would mean only predicting very boring things) but to really nail the calibration rate. If you say 10 different things each have a 70 percent chance of happening, a good calibration rate would mean you’d see seven of them happen. And since some error is inevitable, what you’d really like to see (and here I failed) is error happening symmetrically in both directions. You can see I suffered here from systematic overconfidence, the columnist’s cardinal sin.
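The bucketing described above is mechanical enough to sketch in a few lines of code. This is just an illustration of the idea, not how these columns are actually scored, and the sample forecasts below are hypothetical:

```python
# A minimal sketch of a calibration check: group predictions by
# stated confidence, then compare each bucket's actual hit rate to
# the confidence level. Well-calibrated buckets show hit rates close
# to their stated confidence.
from collections import defaultdict

def calibration_report(predictions):
    """predictions: list of (stated_confidence, came_true) pairs.
    Returns {confidence: observed hit rate} for each bucket."""
    buckets = defaultdict(list)
    for confidence, came_true in predictions:
        buckets[confidence].append(came_true)
    return {conf: sum(outcomes) / len(outcomes)
            for conf, outcomes in sorted(buckets.items())}

# Hypothetical forecasts: (stated confidence, did it happen?)
sample = [(0.9, True), (0.9, True), (0.9, True), (0.9, False),
          (0.7, True), (0.7, False), (0.7, True), (0.7, True)]
print(calibration_report(sample))  # {0.7: 0.75, 0.9: 0.75}
```

Here the 70 percent bucket is close to calibrated, while the 90 percent bucket coming in at 75 percent is exactly the kind of overconfidence the exercise is meant to surface.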
And that’s the main point of the exercise: not that I per se want to become a superforecaster, but that writing these things down with odds attached is a good way of trying to beat back that overconfidence. “If I grab a pair of dice, I probably won’t roll snake eyes five times in a row” and “D.C. probably won’t cancel five days of school for snow this winter” are both predictions I would stand behind, but there’s actually an incredibly large gap between the probabilities associated with these two things. Casual writing tends to elide the difference between “this would be unusual” and “this is spectacularly unlikely.” It also often struggles with efforts to express an idea like “this probably won’t happen, but the odds of it happening aren’t tiny and are rising, and the consequences would be really bad so I’d like you to worry about it.” Attempting to quantify with exact numbers even occasionally is a way of trying to break bad habits.
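To put a number on the dice half of that comparison: snake eyes on one roll of two fair dice has probability 1/36, so five in a row is (1/36)^5 — on the order of one in 60 million, which is a very different kind of “probably won’t happen” than a snowless D.C. winter:

```python
# The gap the paragraph describes, made concrete: five consecutive
# snake eyes is astronomically unlikely, not merely unusual.
p_snake_eyes = 1 / 36          # probability of snake eyes on one roll
p_five_in_a_row = p_snake_eyes ** 5
print(f"{p_five_in_a_row:.2e}")  # 1.65e-08, about 1 in 60 million
```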
But to do that, it’s important to examine what went right and what went wrong.
Some predictions that I made about politics
I started off with a troika of predictions about the midterms, all made with high confidence, of which only one came true:
Democrats lose both houses of Congress (90%)
Democrats lose at least two Senate seats (80%)
Democrats lose fewer than six Senate seats (80%)
We’ve discussed the midterms a fair amount in previous columns, but looking back on this, the early overconfidence about Democratic Senate losses was based on crude extrapolation from the historical record. The president’s party almost always loses ground in the midterms. The 2022 map, while probably the friendliest Senate map Democrats can get, left them with a bunch of vulnerable seats and zero margin for error. We know that the thermostatic pattern is sometimes disrupted (in 2002, for example), but those disruptions are rare. So I basically reasoned that because thermostatic disruptions are rare, Dems’ odds of holding the Senate were extremely bad.
The right way to think this through would have been to be more specific. If you’d asked me a year ago whether the Supreme Court would strike down Roe v. Wade, my forecast would’ve been overwhelming odds of gutting its core protections (à la John Roberts’ proposal in Dobbs) and a greater than 50 percent chance that there would be five votes to rip off the band-aid and formally strike it down. And from there I could have reasoned that there were decent odds that overturning Roe would have a counter-thermostatic effect. I still would have ended up predicting Democrats lose the Senate — Dobbs explains a lot, but there was more to it, like good ads from Democrats and bad candidates from the GOP — but I should have been able to reason my way to a more restrained forecast here.
Some better political forecasts: