The economic research policymakers actually need
I was a senior administration official; here’s what was helpful
Slow Boring staff is on spring break this week, but we’re excited to share some fantastic content with you while we’re gone. Today’s guest post is from Jed Kolko, an economist who recently completed two years of service as the Under Secretary for Economic Affairs in the Department of Commerce.
I’ve spent the majority of my career as an economist in the private sector and at think tanks, producing research that I hoped would be useful for policymakers. But I recently completed two years at the Commerce Department, consuming research that could inform the work of the Biden-Harris Administration and the Department.
And having now seen this from the other side, more as a consumer than a producer of research, I can tell you that most academic research isn’t helpful for programmatic policymaking — and isn’t designed to be. I can, of course, only speak to the policy areas I worked on at Commerce, but I believe many policymakers would benefit enormously from research that addressed today’s most pressing policy problems.
But the structure of academia just isn’t set up to produce the kind of research many policymakers need. Instead, top academic journal editors and tenure committees reward research that pushes the boundaries of the discipline and makes new theoretical or empirical contributions. And most academic papers presume familiarity with the relevant academic literature, making it difficult for anyone outside of academia to make the best possible use of them.
The most useful research often came instead from regional Federal Reserve banks, non-partisan think tanks, the corporate sector, and academics who had the support, freedom, or job security to prioritize policy relevance. It generally fell into three categories:
New measures of the economy
Broad literature reviews
Analyses that directly quantify or simulate policy decisions
If you’re an economic researcher and you want to do work that is actually helpful for policymakers — and increases economists’ influence in government — aim for one of those three buckets.
New data and measures of the economy
The pandemic and its aftermath brought an urgent need for data at higher frequency, with greater geographic and sectoral detail, and about the ways the economy had suddenly changed. Some of the most useful research contributions during that period were new data and measures of the economy: they were valuable as ingredients rather than as recipes or finished meals. Here are some examples:
An analysis of which jobs could be done remotely. This was published in April 2020, near the start of the pandemic, and inspired much of the early understanding of the prevalence and inequities of remote work.
An estimate of how much the weather affects monthly employment changes. This is increasingly important for separating underlying economic trends from short-term swings from unseasonable or extreme weather.
A measure of supply chain conditions. This helped quantify the challenges of getting goods into the US and to their customers during the pandemic.
Job postings data from Indeed (where I worked as chief economist prior to my government service). These showed hiring needs more quickly and in more geographic and occupational detail than official government statistics.
Market-rent data from Zillow. This provided a useful leading indicator of the housing component of official inflation measures.
These data and measures were especially useful because the authors made the underlying numbers available for download. And most of them continue to be updated monthly, which means that, unlike analyses that are read once and then go stale, they remain fresh and can be incorporated into real-time analyses.
Broad overviews and literature reviews
Most academic journal articles introduce a new insight and assume familiarity with related academic work. But as a policymaker, I typically found it more useful to rely on overviews and reviews that summarized, organized, and framed a large academic literature. Given the breadth of Commerce’s responsibilities, we had to be on top of too many different economic and policy topics to be able to read and digest dozens of academic articles on every topic.
A great example of a broad overview is Katharine Abraham and Melissa Kearney’s analysis of the declining US employment rate in the two decades before the pandemic. Their paper incorporates results from a wide range of other academic research and quantifies how much different factors — like competition from Chinese imports and adoption of robots — contributed to the declining employment-population ratio. Helpfully, they quantify different effects to make an apples-to-apples comparison, and they note which explanations can’t be quantified because of limited evidence.
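To make that apples-to-apples framing concrete, here is a minimal sketch of the kind of decomposition their paper performs. The factor names echo the discussion above, but every number below is a hypothetical placeholder, not a figure from the paper.

```python
# A sketch of an apples-to-apples decomposition of a decline in the
# employment-population ratio. All numbers are hypothetical placeholders.

total_decline_pp = 4.0  # assumed total decline, in percentage points

# Assumed contributions of quantified factors, in percentage points:
contributions_pp = {
    "Competition from Chinese imports": 1.0,
    "Adoption of robots": 0.5,
    "Other quantified factors": 1.5,
}

explained_pp = sum(contributions_pp.values())
for factor, pp in sorted(contributions_pp.items(), key=lambda kv: -kv[1]):
    print(f"{factor}: {pp:.1f} pp ({pp / total_decline_pp:.0%} of decline)")
print(f"Unexplained residual: {total_decline_pp - explained_pp:.1f} pp")
```

Putting every factor in the same units (percentage points of the total decline) is what lets a policymaker compare explanations directly, and the explicit residual flags how much remains unexplained.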
Another great model for broad overviews is a 50-year review of industrial policy, published by the Peterson Institute for International Economics. This review shows that the US has long had policies designed to favor particular firms, industries, or sectors, and these policies have taken many forms, some more successful than others. Because the Commerce Department has been central to much of the Biden-Harris Administration’s industrial policy — such as for semiconductors and regional tech hubs — this and other overviews of industrial policy were essential for learning lessons from the past and developing measures of success.
Comprehensive, methodical overviews like these are often published by think tanks whose primary audience is policymakers. There are also two academic journals — the Journal of Economic Perspectives and the Journal of Economic Literature — that are broad and approachable enough to be the first (or even only) stop for policymakers needing the lay of the research land.
Analyses that directly quantify or simulate policy decisions
With the Administration’s focus on industrial policy and place-based economic development — and Commerce’s central role — I found research that quantified policy effects or simulated policy decisions in these areas especially useful.
One example was an estimate of job creation from the CHIPS Act. Importantly, this study quantified the foreign-born share of workers in key occupations in the semiconductor workforce: for instance, 22% of engineers and software developers in the U.S. semiconductor industry are not U.S. citizens. Combining that with other estimates and projections, the study estimated that at least 3,500 foreign-born workers would be needed to staff eight new semiconductor manufacturing facilities. Lots of assumptions go into estimates like this; many of these assumptions will turn out to be wrong. But it was invaluable to have a starting estimate — and a framework for how different assumptions could change the estimate — in developing workforce policy for CHIPS.
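To make the arithmetic concrete, here is a minimal sketch of how an estimate like this might be composed. Only the 22% share and the eight facilities come from the study cited above; the per-facility engineering headcount is a hypothetical placeholder, and the study's actual method combined many more inputs.

```python
# A sketch of how a workforce-gap estimate like this might be composed.
# Only the 22% share and the eight facilities come from the study above;
# the per-facility headcount is a hypothetical placeholder.

NEW_FACILITIES = 8
FOREIGN_BORN_SHARE = 0.22       # study: share of engineers/software developers
ENGINEERS_PER_FACILITY = 2_000  # assumed headcount per new fab (hypothetical)

needed = NEW_FACILITIES * ENGINEERS_PER_FACILITY * FOREIGN_BORN_SHARE
print(f"Estimated foreign-born workers needed: {needed:,.0f}")

# The framework matters as much as the point estimate: varying the
# assumptions shows how sensitive the headline number is.
for per_fab in (1_000, 2_000, 3_000):
    est = NEW_FACILITIES * per_fab * FOREIGN_BORN_SHARE
    print(f"  {per_fab:,} engineers/fab -> {est:,.0f} foreign-born workers")
```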
Another example is the work of Tim Bartik, a labor economist and expert on local economic development. In a short essay, he summarized a large academic literature and estimated how effective different local economic development policies are in terms of cost per job created. Cleaning up contaminated sites for redevelopment creates jobs at a much lower cost per job than job training, which in turn is much more cost-effective than giving businesses tax breaks or grants to create jobs. By comparing different policy options on the same metric, this analysis followed the form that policy implementation often takes: Congress states a goal and sets a budget, and departments like Commerce must then decide which policies or approaches will be most effective at achieving that goal within that budget.
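As a rough illustration of that common-metric comparison, the sketch below ranks policy options by cost per job created. The ordering mirrors Bartik's finding, but the dollar figures are invented placeholders, not his estimates.

```python
# A sketch of comparing policy options on a common metric: cost per job
# created. Dollar figures are invented placeholders, not Bartik's estimates.

policies = {
    "Cleaning up contaminated sites": {"cost": 50_000_000, "jobs": 2_500},
    "Job training": {"cost": 50_000_000, "jobs": 1_000},
    "Business tax breaks or grants": {"cost": 50_000_000, "jobs": 300},
}

# Rank options by cost per job, the metric a budget-constrained
# policymaker would use to allocate a fixed appropriation.
ranked = sorted(policies.items(), key=lambda kv: kv[1]["cost"] / kv[1]["jobs"])
for name, p in ranked:
    print(f"{name}: ${p['cost'] / p['jobs']:,.0f} per job")
```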
A final example comes from three analyses that ranked places as potential tech hubs, in anticipation of Commerce designating 31 places to invest in regional innovation and job creation as part of the CHIPS and Science Act. All three analyses laid out abstract criteria for which places would make the best tech hubs, such as local innovation capacity and economic development need; selected data to quantify those criteria, such as local workforce skills, research universities, and local cost of living; and then ranked places on how they scored across the combination of these measures. Who won? One analysis had Rochester, NY at the top; another crowned Greenville-Anderson, SC and Provo-Orem, UT; and the third honored Madison, WI. The rankings were entertaining (no one can resist a good top-ten list), but the real contribution of these analyses was the weighing of different abstract criteria for what makes a good tech hub, the translation of those criteria into quantifiable measures, and the detective work of finding good data sources.
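The mechanics behind rankings like these are straightforward to sketch. Below is a minimal weighted-criteria scorer; the criteria weights, place names, and scores are all invented for illustration and do not come from any of the three analyses.

```python
# A sketch of the weighted-criteria ranking these analyses used: translate
# abstract criteria into measures, normalize, weight, and rank. All weights,
# places, and scores below are invented for illustration.

WEIGHTS = {
    "innovation_capacity": 0.4,
    "workforce_skills": 0.3,
    "economic_need": 0.2,
    "cost_of_living": 0.1,
}

# Hypothetical scores, normalized to 0-1 where higher is better:
places = {
    "Metro A": {"innovation_capacity": 0.9, "workforce_skills": 0.7,
                "economic_need": 0.4, "cost_of_living": 0.6},
    "Metro B": {"innovation_capacity": 0.6, "workforce_skills": 0.8,
                "economic_need": 0.9, "cost_of_living": 0.8},
}

def composite(scores: dict) -> float:
    """Weighted sum of normalized criterion scores."""
    return sum(WEIGHTS[c] * s for c, s in scores.items())

for name, scores in sorted(places.items(), key=lambda kv: -composite(kv[1])):
    print(f"{name}: {composite(scores):.2f}")
```

Much of the real work in these exercises lies upstream of this loop: choosing the weights and finding defensible data for each criterion.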
How else can researchers help policymakers?
In addition to these three kinds of analyses, researchers who want to help policymakers can directly participate in policy and technical debates. How? One way is to respond to Federal Register Notices (FRNs). Government agencies ask for comments on all kinds of technical issues, such as statistical policy changes that researchers care a lot about. Agencies really do pay attention to comments submitted in response to FRNs; such comments end up being more effective than, say, social media outrage.
Another way is to serve on advisory committees. For example, the statistical agencies have multiple advisory bodies that weigh in and give feedback on technical issues and user needs. Calls for nominations happen frequently, and you can find them in Federal Register Notices. And finally, consider a tour of duty in government. Many of the economists I worked with in the Administration were on leave from an academic position, learning how policymaking actually works and bringing that knowledge back to make their future research more useful.
Jed Kolko is an economist who recently completed two years of service as the Under Secretary for Economic Affairs in the Department of Commerce. While there, he led a research team that advised Secretary Gina Raimondo on economic policy and the macroeconomy and advised the Department’s many bureaus on program implementation. He was previously chief economist at Indeed and Trulia.
I am curious whether the situation ever arises where research is modified to fit what the government wants to hear.
I am not thinking of situations where whole research documents are complete works of fiction, but rather where researchers make post-hoc edits to a methodology in order to ensure the resulting numbers fit the desired policy. This is akin to a scientist deciding that an inconvenient piece of data 'is an outlier,' or running a battery of statistical tests on the data to find the one that gives the most flattering results.
Part of my job is in transport economics, and let me tell you: everything I have described (and more) is absolutely routine. The freedom we have to monkey around with numbers to 'make it work' is extensive: future-year projections for traffic growth, model simulations, assumptions galore. And our clients don't care, because they are the people who just want to bring good news to their bosses. Nobody actually sees anything wrong with this arrangement; it's just how things work.
So I'm wondering: does this ever happen? Who wants to be the person who tells the government that its new plan is not going to result in new jobs, or won't work?
Kolko's examples seem harmless, but one background concern I have is whether academics are harming academia by trying too hard to influence policy debates. There's a lot of folks out there who are doing stuff like "Historian here, here's why voting for Trump is exactly the same as supporting Hitler in 1932" on Twitter that makes academia look completely partisan and makes the public mistrust it. This is related to stuff like the public health guys' open letter saying science showed the lives saved by BLM protests would outweigh COVID deaths, which helped polarize COVID and did great harm to public health.
I don't think it is so great for academia to have lots of people there involved in the project of 'how can I help Democratic Party politicians.' It will impair needed credibility with the public. It's probably best if politicians use more think tank material and take what they can from academia, without academics thinking in these terms.