Hey folks, hope everyone had a Merry Christmas.
I realize that in the future I should probably be explicit about holidays and days off. So, for the record, there won’t be a New Year’s Day edition of the newsletter.
This hasn’t been the typical slow news week at the end of the year, but I’m not sure I really have a ton to add to the coverage of the Covid relief bill standoff. The bill is good, a bigger bill would be better, and the president is an immature jackass. I’m glad he signed it, though, which will be good for Americans in need and also open up space for some of the normal year-end reflections.
In terms of my personal growth, I think the most important book I read in 2020 was Philip Tetlock’s Superforecasting: The Art and Science of Prediction, which came out five years ago. I’d read his earlier book Expert Political Judgment: How Good Is It? How Can We Know? and liked it a lot, but somehow the title of Superforecasting turned me off of reading it when it first came out.
But it’s a really good book.
It’s got a lot to say about the people who are good at predicting things, which turns out to be a somewhat complicated subject. But to me the most gripping and broadly relevant insight of the book is pretty simple, so simple in fact that once it was pointed out to me, I couldn’t believe I hadn’t always known it.
It goes like this: people who are good at predicting things actually try to check whether or not their predictions are any good. It’s just like anything else in life. Unless you actually pay attention to what you’re doing, you won’t do it well. So how do you do that? Well, here’s how a disciplined predictor of things would behave:
If you’re inclined to offer a prediction, make sure to write it down. And be really specific about what you’re predicting.
And even though there’s necessarily going to be an air of false precision to it, give your prediction a numerical probability. When you say “X” is going to happen, does that mean you’re saying it’s a 99 percent near-certainty or a 51 percent more-likely-than-not?
Then go back periodically to check whether the things you predicted would happen did, in fact, happen.
The key thing is that “good predicting” doesn’t mean that everything you said would happen does in fact happen. Rather, good predicting means that if you predict 10 different things each with 70 percent confidence, then roughly 7 of them should happen. If all 10 happen, you’re being under-confident in your forecasts. If none of them happen, of course, your predictions are garbage.
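To make that concrete, here’s a minimal sketch of what such a calibration check might look like in code. The track record below is entirely made up for illustration:

```python
# A minimal calibration check: group your written-down predictions by
# the confidence you assigned them, then compare that stated confidence
# to how often those predictions actually came true.
from collections import defaultdict

# (stated probability, did it happen?) -- hypothetical data
track_record = [
    (0.7, True), (0.7, True), (0.7, False), (0.7, True), (0.7, True),
    (0.9, True), (0.9, True), (0.9, True), (0.9, False),
    (0.6, False), (0.6, True), (0.6, False),
]

buckets = defaultdict(list)
for prob, happened in track_record:
    buckets[prob].append(happened)

for prob in sorted(buckets):
    outcomes = buckets[prob]
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"said {prob:.0%}: came true {hit_rate:.0%} "
          f"({sum(outcomes)}/{len(outcomes)})")
```

If the hit rates roughly track the stated probabilities, you’re calibrated; if they run consistently above or below, you’re under- or over-confident.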
I’m not particularly interested in trying to train myself to become a superforecaster. But this is interesting nonetheless because basically nobody does it.
Twitter is full of bullshit
I’ve been thinking about Tetlock’s book over the past couple of months as I watched a bunch of people worry that Trump was going to pull a coup, then some say that his post-election antics were an attempted coup, while others dunked on the coup-worriers and called the whole thing hysterical, and still others dunked on the dunkers for being more worried about democratic vigilance than about Trump’s assaults on democracy.
Applying some Tetlock Thought to this whole discourse would have taken it in a very different direction:
First you define terms: “coup” is a colorful and not totally inappropriate term, but what we were discussing here was not a military seizure of power but a judicial effort to throw out absentee ballots and flip the election result.
Then you ask exactly what the prediction is. Ex ante, there was some chance that Trump would just win the election. There was also some chance of a huge Biden landslide. The coup prediction was that conditional on neither of those being the case, there was some chance of the Trump Coup happening (the toy calculation after this list shows how that conditioning works).
So then, what odds exactly? 90 percent chance of coup? 10 percent chance?
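To see how the conditional structure combines into a bottom-line number, here’s a toy calculation. The figures are invented purely for illustration, not anyone’s actual estimates:

```python
# Toy numbers, invented purely to illustrate the conditional structure.
p_contestable = 0.4             # chance the result is close enough to fight over
p_coup_given_contestable = 0.1  # chance the judicial gambit succeeds, given that

# The unconditional probability multiplies the two together.
p_coup = p_contestable * p_coup_given_contestable
print(f"unconditional chance of the Trump Coup: {p_coup:.0%}")  # 4%
```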
My strong sense is that a structured conversation along these lines would have led to a more restrained conversation in which almost everyone agreed that a Trump Coup was pretty unlikely but not so unlikely that one should dismiss all concern about it.
But nobody tried to engage in anything like that, because the people “predicting” things on Twitter are mostly just BSing. There’s a set of people who like to raise the alarm about GOP misconduct, and there’s another set of people who like to scold liberal hysterics.
I’ve been trying to discipline myself to actually say what I mean rather than toss off vague predictions — what’s true about this whole “stop the steal” mishegas is that the quantity of Republican Party politicians who’ve gone along with it is a disturbing portent, not that it ever seemed likely to succeed.
But that really is my main takeaway from the study of predictions: don’t predict so much stuff! Predictions are commonly used as one bad-faith rhetorical device or another in punditry. People predict doom for politicians as a way of saying they don’t like them, or predict failure of political tactics as a way of saying they don’t approve of them. Or they’ll issue dire prophecies as a way of saying they want to get people more concerned. This encourages sloppy thinking. And its alarm-raising form is particularly harmful. If you think back to January 2020, it was perfectly reasonable to think the new virus in Wuhan wouldn’t become a global pandemic. But a 15 percent chance of a global pandemic is really bad! We need people to be able to discuss moderately improbable bad events without sounding like the boy who cried wolf.
Trying my hand at some predictions for 2021
The flip side of resolving to do fewer tossed-off predictions is that I did think it would be instructive to try my hand at some rigorous predicting. As I sat down to do this, I immediately got apprehensive.
I have no real experience with trying to do proper forecasting, so the odds are that my skill level is low. I kept thinking this list stands a good chance of turning out to be really embarrassing. But that’s actually the point. It feels potentially embarrassing because writing down explicit predictions with odds means I’ll be held accountable if I turn out to be wrong. Yet that’s the discipline — it’s only by doing something with accountability that you get the possibility of improving.
So here goes, 25 predictions for 2021:
Jon Ossoff and Raphael Warnock win the Georgia Senate races (60%)
The same party wins both Senate races in Georgia (95%)
Joe Biden ends the year with his approval rating higher than his disapproval rating (70%)
Joe Biden ends the year with his approval rating above 50% (60%)
US GDP growth in 2021 is the fastest of any year of the 21st century (80%)
The year-end unemployment rate is below 5 percent (80%)
The year-end unemployment rate is above 4 percent (80%)
Lakers win the NBA championship (25%)
Joe Biden ends the year as president (95%)
Nancy Pelosi sets a definitive retirement schedule (60%)
A vacancy arises on the Supreme Court (70%)
The EU ends the year with more confirmed Covid-19 deaths than the US (60%)
Substack will still be around (95%)
People will still be writing takes asking if Substack is really sustainable (80%)
Apple releases new iMacs powered by Apple silicon (90%)
Apple does not release a new Mac Pro powered by Apple silicon (70%)
Monthly year-on-year core CPI growth does not go above 2 percent (70%)
Monthly year-on-year core CPI growth does not go above 3 percent (90%)
Lloyd Austin is not confirmed as Defense Secretary (60%)
No federal tax increases are enacted (95%)
Biden administration unilaterally relieves some but not all student debt (80%)
United States rejoins JCPOA and Iran resumes compliance (80%)
Israel and Saudi Arabia establish official diplomatic relations (70%)
US and China reach agreement to lift Trump-era tariffs (70%)
Slow Boring will exceed 10,000 paid members (70%)
The basic picture, as you can see, is that I think the economic situation in 2021 will be good, that the Biden administration won’t have the congressional support needed to do much controversial stuff, and that as a result, Biden will be seen as a popular and successful leader. But the whole point of this is to make some specific predictions instead of that vague forecast.
We’ll check back in a year and see how I did.
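For anyone who wants to grade the list at year’s end, the standard tool in Tetlock’s forecasting tournaments is the Brier score: the average squared gap between the probability you assigned and what actually happened, where lower is better and saying 50 percent on everything scores 0.25. A rough sketch, with placeholder outcomes to be filled in for real at the end of 2021:

```python
# Year-end grading sketch: pair each stated probability with whether
# the event happened, then average the squared errors (Brier score).
# Lower is better; predicting 50% on everything scores 0.25.

# Placeholder outcomes -- to be filled in on December 31, 2021.
scored = [
    # (stated probability, happened?)
    (0.95, True),   # e.g. same party wins both Georgia races
    (0.25, False),  # e.g. Lakers win the NBA championship
    (0.70, True),
    (0.60, False),
]

brier = sum((p - (1 if happened else 0)) ** 2
            for p, happened in scored) / len(scored)
print(f"Brier score: {brier:.3f}")
```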
I worked on the Federal Reserve's staff macro forecast for a decade, starting in July 2008. I have learned how to be a very good forecaster. I am. I affirm many of the points you made. Let me suggest some baby steps into the craft:
1) start with your 'most likely' forecast, that is, the 50% or the median forecast. That's easier than thinking through the entire distribution. state the conditions on your forecast. good forecasts are ALWAYS conditional. tell the 'story.' go beyond the outcomes, tell us what gets us there.
2) when you are ready to move off the most likely, think through some 'upside' and 'downside' risks. then tell us are the risks tilted to the downside or the upside.
3) 100% agree, re-evaluate your forecast (most likely, risks, conditions) regularly. then UPDATE YOUR FORECAST. key is paying attention to ALL the data available. watch for blind spots.
4) Superforecasters are often 'non-experts.' why? one reason is that experts tend to have blind spots. they want so badly for their models to be right that they miss signs they are wrong. Fed staff grappled with that after missing the warning signs of the housing bubble. it's hard.
5) talk with non-experts, go out in the world. my walks and my conversations with non-economists since Covid arrived have taught me so much. actively seek it out.
6) finally, be humble, especially, when your forecast proves right. after a really big hit, a senior officer came by my office and told me, "do not gloat in our next meeting. it could have been luck." and he said with a smile, "we all know you nailed it. you don't have to say a word."
I was fortunate and learned from the very best. no one has a crystal ball, but some of them give damn good advice.
Scott Alexander has been doing this since 2014 I think? See if you can outscore him!