84 Comments

So, I think it’s worth stressing that what makes a school good depends on how you define “good.” How likely a school is to get a low-income kid to attend or graduate from college is certainly one important measure for national policymakers to consider, but it is obviously not what upper-middle-class parents, for whom their kids’ college graduation is almost a given, are looking for when they seek a good school.

It would be a mistake, however, to conclude that school choice ought not to matter for those parents. Teaching at an elite college, I’m struck every day by how radically school preparation affects the outcomes of my students. Presumably it also affects their quality of life while in college, and both the college experience and GPA have some medium- and long-term consequences.

Finally, we ought to remember that good schools should be about more than just academics. The social skills you develop, and hopefully the friends you make, can literally change your life too. These aspects should not be neglected in public policy either (socialization is a key function of education), although I’m not sure exactly how we ought to measure them.


I feel fairly strongly that we should de-emphasize college attendance and focus more on whether people find their way into a career that pays enough to be independent and have a family if they want one, and more broadly whether they feel good about their lives.

I have fancy-pants degrees myself and work as an engineer, but I work closely with a ton of electricians and technicians who didn't go to college and just aren't the sort of people who want to sit in classrooms all day, yet who are all the same much smarter than many people who got degrees at the same schools I did. These folks do important, socially valuable work -- and they're pretty well compensated for it, too. To the extent their high school programs (and in some cases "vocational technology" type programs) helped foster their curiosity and practicality, those were good schools!

Also, I know there's a decent chunk of kind of union-hostile folks around, but I happen to know that both the Plumbers and Pipefitters local and the Electrical Workers local in my area have _fantastic_ training programs. If I knew some young person who was struggling with more abstract academics but had a sharp mind and seemed to enjoy working with their hands in the real world, I wouldn't hesitate to encourage them to consider applying to become an apprentice.


Millwrights often make more than engineers in my industry.

Comment removed (Mar 25, 2023)

And they're in part well paid because there is just objectively a shortage! We're seeing delays in building the buildings and other infrastructure projects we need because we can't find enough people in the skilled trades!

There's even a surprising amount of skill involved for folks in the Laborers' Union -- digging a ditch may sound like "grunt labor", but have _you_ ever tried to dig a ditch that's like eight feet deep and six feet wide, and doesn't have the sides cave in, on a schedule? And if you're interested in, say, laying new underground utility lines, you're gonna be digging a LOT of ditches like that!


Yes, but you may do well to get a community college certificate at the very least.

Comment removed (Mar 25, 2023)

What do they go by?

Comment deleted (Mar 25, 2023, edited)

Well, admissions don’t seem to factor in preparation much at all. They may well factor in high school grades, but schools vary so much that grades are almost meaningless. I’ve had many an undergrad come in unable to write a paragraph, let alone an essay, and with no conception of a research paper. Others have already done serious work as high schoolers. Moreover, the less prepared undergrads were typically also far less likely to *realize* their problem (at least at first). They always got easy As because they went to schools way below their actual skill levels. They never actually had to work hard to achieve academically, and the change to a school where they’re at risk of failing if they don’t work hard is jarring. Worse still, they have no conception of what working hard looks like. Basically, they developed almost no study skills.


That was me when I went to college in the mid-’80s. I coasted through high school, got into the top state university in my state, and then had a brutal wake-up call about the level of work and effort I would need to succeed there. It was very difficult, but everything turned out OK in the end.

Comment deleted (Mar 25, 2023)

I don’t know how admissions work, nor how test scores do (though I’d note parenthetically that other SB commenters have pointed out that the SAT scoring system has changed over the years, so it’s not clear elite students today are actually doing any better on those tests than those of 20 years ago).

Most of my students (at an Ivy League school) are smart. But there are HUGE gaps in preparation. Many of those starting “behind” absolutely do start working hard after the initial shock and make enormous progress in a short amount of time. What you’d probably expect me to say is that they are so clever they quickly catch up completely. However, that’s not my impression. In most cases they make enormous progress relative to their starting point and *narrow* the gap with their better-prepared peers, but my impression is that the gaps often linger, to a lesser extent, all the way to graduation. This is my repeated anecdotal experience, very familiar to colleagues with whom I discuss it. It’s one of those things that are “known.” For better or worse, though, that’s not a formal study. I can’t give you figures for the extent of the problem or its effects. But I’d be surprised to learn they are entirely negligible, even in the longer term.

My advice to parents, even well-educated and wealthy ones, is to send their kids to a school where they are academically challenged commensurately with their abilities and inclinations. They’re likely to be happier in the process (not bored, etc.) and to come away with skills and knowledge that will serve as an “infrastructure” for their future.


I read all of this and I still can’t figure out what makes a “good” school “good” other than it has “good” outcomes, which seems… circular?

Like, what are these schools doing to create these outcomes that other schools aren’t? Is any of it observable? Is any of it replicable at schools that aren’t doing those things?


The study is pretty vague. Figure 1 on p. 44 says they used:

- Tested academic skills and knowledge

- Untested academic skills and knowledge

- Social-emotional skills

- College knowledge and aspirations

But it doesn't give any details about what those things are.


Aldeman says school quality is some combination of students' test scores, students' attendance, and the percentage of those students going on to earn a bachelor's degree. As far as your other questions, who knows?


OK, but then how does he separate that from inputs (i.e., “better” students going in are going to have better test scores coming out)?


No clue. Blank slates or something.

Comment deleted (Mar 25, 2023)

Those states are also all quite rich and fairly white - are they the best when controlling for those factors?

Comment deleted (Mar 25, 2023, edited)

That is interesting (although it's a bit hard for me to see from that chart... but I will take your word for it!).

But another explanation for the performance of Black students is immigration. We tend to think of immigrants as Asian and Hispanic and overlook "white" and "black" immigrants, but there are many of the latter. Fully 20% of Black mothers who gave birth last year were immigrants.

Worse, our census lumps white and Black immigrants in with natives of the same "race" when their cultures can be very different (race: a social construct). But Black descendants of American slavery differ markedly from Black immigrants in school performance, and the states you're highlighting have HEAVILY immigrant Black populations. In New England, over half of new Black mothers were immigrants. In Colorado and NJ it's about a third, but in Virginia immigrants are only a sixth of Black mothers.

The above doesn't prove anything, but when you're comparing Black student performance in NE to the deep South you are comparing an immigrant culture to a non-immigrant culture as well as schools, and I suspect the former plays a larger role than the latter.


Those things are not moving the needle. Those are all wealthy states. Having ten DEI admins isn’t the reason.


Does DEI explain why schools in some European countries are better than schools in other European countries? Or why Harvard is better than HBCUs? Or why NJ students score better in math than students in New Mexico? And do you think that if you looked within a state, you'd find the best schools in, say, Virginia outperforming the worst schools in Virginia because of DEI?

Comment deleted (Mar 27, 2023)

There's no evidence to support that causation. You are talking about correlation.


To continue what Matt said, when A is correlated with B it may be that A causes B or that B causes A. Or it may be that C causes both A and B. Or it could be coincidence if the sample size is small.
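
To make the "C causes both" case concrete, here's a minimal simulation (my own sketch, nothing from the study): A and B have no causal link at all, yet they correlate strongly because both are driven by C.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# C: the hypothetical confounder (say, parental education)
c = rng.normal(size=n)
# A and B each depend on C plus independent noise -- no A->B or B->A link
a = 0.7 * c + rng.normal(size=n)
b = 0.7 * c + rng.normal(size=n)

print(np.corrcoef(a, b)[0, 1])  # ~0.33: a solid "raw" correlation

# Subtract C's contribution (here we know it is exactly 0.7*c; in real
# data you would estimate it by regression) and the correlation vanishes:
print(np.corrcoef(a - 0.7 * c, b - 0.7 * c)[0, 1])  # ~0.0
```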

I would guess that "C" is the educational attainment of parents. Here's a map of that variable from 2009, which might proxy for the parents of today's students. It lines up pretty well with the state rankings of schools and could very plausibly cause both student success and the embrace of DEI:

https://commons.wikimedia.org/wiki/File:Map_of_states_percentage_of_population_with_Advanced_Degree_in_2009.svg

And for reference, here's a map of school quality (other ones I can find are pretty consistent): https://scholaroo.com/report/state-education-rankings/

The best states seem to all be in New England and the mid-Atlantic, plus Virginia and Colorado. California is pretty bad... aren't they DEI-heavy? They are fairly low on 2009 educational attainment, so they are a good example of a state that is better explained by parental education than by DEI.


I could not agree with this article more. I currently teach at a somewhat high-performing charter. The school serves a population that is overwhelmingly poor and from underrepresented minority communities. Demographically similar schools in our area usually score around a 7% pass rate on the statewide assessment, whereas 15% of our students passed.

The differences are not massive, but they are real. While it's likely some of the effect comes from direct and indirect selection, there are a thousand things the school does to help our students succeed. Many of the biggest differences I have seen between my current school and the lower-performing schools I've previously taught at aren't measured by tests at all.

I suspect my school is reducing the gap between our students and their more privileged peers by about 10%. The school is far from a silver bullet, but that doesn't mean high quality education can't make real gains.


There’s no way school matters; I have 23 years of schooling and I’m dumb as shit


I'd count up mine on my fingers and toes but I don't think I have enough :C


Yup. Feels like I spent my smart years in school and cognitive decline years actually applying what I learned.


The reform-pessimistic position is that *education is not an effective instrument at reducing social inequality*, not "lol, nothing matters." Education is tremendously important for its ability to deliver absolute gains at a population level and help students achieve their potential, while providing high-quality daycare and socialization opportunities.

But the pessimists observe that every "once-promising idea" has turned out to be a statistical mirage or an epiphenomenon, just as you observe in this article, though you stow the information away in a footnote! After touting how a recent paper shows that "achievement scores are rising, and gaps are closing across income and racial line", you acknowledge:

> The data for these three papers ends in 2015, 2015, and 2017, respectively. Achievement scores have fallen since then, particularly for lower-performing students and especially in the wake of COVID-19. But I’d attribute those declines, in part, to the “education reform nihilism” movement that was already taking root by then.

To put it politely, I am skeptical that it makes sense to attribute declines in scores over such a short time frame to the malign influence of a dissident minority within the field. It seems more likely that the gains were actually measurement errors, p-hacking apparitions, and/or confounded.

(Edit, added missing conjunction)


Every couple of days for a while now, I’ve read an anecdote from a teacher suggesting that conditions in public schools across the country have basically collapsed. The kids can’t stop looking at their phones, half of them don’t know how to read, they can’t be held back, teachers get assaulted and the kid is back in the classroom the next day, etc. This seems like a huge story?

Comment deleted (Mar 25, 2023)

Good luck!


Another long one from me. Here's the gist: the poor quality of tests used to measure student growth is important and, I think, overlooked. I mentioned in a comment on one of Matt's recent education posts that these tests suffer from both low reliability and low validity. The tests are, in short, poor-quality instruments for informing teacher accountability policies or determining school quality. We need better tests if we want these policies to be meaningful in any way.

The tests are not reliable because they are not reproduced under the same conditions. States change test constructs from year to year, often in major ways that make them completely different tests, e.g., adding and removing entire sections, changing the number of multiple-choice options per question, or lengthening or shortening writing portions. Yet these tests are used to determine student performance, both to compare across years of takers in a grade and as growth measures following a kid's test scores up through the grades. Some states even flat out say that their tests measure "new learning" and are not meant to be compared across school years, but then also have policies where those tests are used for school-quality purposes (New York, for example). Furthermore, student populations fluctuate demographically, schools are rezoned to include different student populations, and tested student bodies vary in meaningful ways due to attendance issues. Texas and Florida are busing asylum seekers, often women and children, to New York and DC. Those kids are being enrolled in local schools. They take the tests, they fail, and they make the schools look worse. Is it any wonder that the schools are trying to block these students' enrollment?

In an example from my personal experience: when our state accountability testing became computerized, the school lacked enough computers to test all the kids at once. The last kids to take the test were the students with special needs. Because we had so few computers, they were tested days after the official end of the testing period laid out by the state. This meant that the scores for the SPED students at my school were not initially included in the headline data and ended up in an amended report released later that year. All the attention (and accountability) went to the first, incomplete score report, making the school look significantly better than it was. These tests are not reliable. Also, the tests were only offered in English, and any kid who didn't speak English was still expected to take the test. In English. But that's probably more of a validity issue.

Likewise, these tests have low validity. Or, to put it more accurately, these tests are being used in ways that go beyond what they actually measure. The most famous example here is teacher accountability through measuring "value added." The idea is that if we test a class at, say, the end of fourth grade and again at the end of fifth grade, we will know how much more that class of kids has learned (or can do) as a result of that 5th grade teacher. But there's a catch. What ends up happening a lot of the time is that there isn't really a value-added measure taking place. Instead, we end up with the less useful "student growth measure," which does not adequately control for student demographics or changes in the student population of that teacher's class (see above). States think that by simply reporting the average growth of various subgroups (African Americans, free-and-reduced-lunch students, ESOL students, etc.) they are somehow controlling for those differences. But VAM requires proper statistical controls that are often not put in place by the districts or states collating the data. The validity problem emerges because these tests are probably measuring out-of-school factors just as much as they are measuring in-school factors.
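
To illustrate the distinction (a sketch only; the file and column names are hypothetical, not any state's actual data or model), a bare "growth" regression versus one with the kinds of controls VAM requires might look like:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical administrative data: one row per student, with this
# year's and last year's scale scores plus demographic flags.
df = pd.read_csv("students.csv")

# Raw "student growth": conditions only on the prior-year score.
growth = smf.ols("score_g5 ~ score_g4", data=df).fit()

# A value-added model adds statistical controls for demographics and a
# teacher fixed effect, so teacher estimates are net of who happens to
# be sitting in the room.
vam = smf.ols(
    "score_g5 ~ score_g4 + frl + esol + sped + C(teacher_id)", data=df
).fit()
```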

Additionally, the way these tests are used does not differentiate between the impact of multiple teachers. If a student receives strong instruction from her social studies teacher or her science teacher, this can positively impact her scores in ELA and math. I always struggled in math, but after taking a physics class, my algebra skills improved dramatically, and my math scores in the latter half of high school were way better. The ongoing Early Childhood Longitudinal Study, for example, finds that students who receive more social studies instruction end up with higher reading scores (interestingly, it results in higher math scores for boys but not for girls). A test that cannot tell us with any degree of precision which teachers contributed to a student's growth is not a valid test when used for teacher accountability purposes.

Likewise, Aldeman points to attendance and shows us a graph saying that, on average, *35%* of students were chronically absent in the 2016-17 school year (holy shit, that's high!). This means they missed upwards of 10% of school days that year, roughly 18-20 days, and that's the bottom cutoff; many kids missed more. We also see in that graph that chronic absenteeism is higher among African American and Hispanic/Latino populations, and we know that those students tend to be in schools that are predominantly students of color. Schools that enroll high percentages of students of color have higher rates of chronic absenteeism. When those kids are there on test day, they take the test, they fail, and we make a note that their teacher did not add value (but, really, uncorrected student growth). But is that what the test's results are showing? Is the test showing us a shitty teacher or a kid who didn't show up for a tenth of the year? Or, maybe, it's the teacher's responsibility to ensure student attendance so that she can proceed to add value? There is not enough work being done in state departments of education to make sure that tests properly measure what they are meant to measure.

Compare all of this to, say, NAEP, the SAT, or PISA, where tests are deployed after sometimes years of construct evaluation, with high correlation coefficients (good luck finding any states that even bother to measure that!) and careful work to ensure representative populations take the tests year after year. The accountability tests used by states to make determinations about student growth, teacher performance, and school quality are, frankly, awful. If we're going to write ed policy, we need to ensure the mechanisms used to inform and enact those policies are effective.


Which states are demonstrating better practices in this regard: "[T]ests are deployed with sometimes years of construct evaluation, high correlation coefficients..., and careful work to ensure representative populations take the tests year after year"? Why aren't state education departments and/or the pedagogical experts at the state flagship universities engaging with this data better? This just seems like an area where there is so much data out there that it's frustrating so few are seemingly trying to improve its collection and engage with it better. (Also, did I read the chart in the post correctly, that over 40% of African Americans in California schools were chronically absent in the last school year?)


It's really hard to find out how states are managing their tests, because the more they make public, the more schools can teach to the test or pull other shenanigans. Tennessee is a great example. They have been using VAM for a while and have tried hard to build some kind of conformity across schools with regard to test environments, following kids between school districts, piloting changes to tests before including those changes in measures, etc. But all you can do is access the raw test score data; you can't see how they use that data to inform their accountability system. That said, you could probably take the data and check some of the underlying issues I mention, such as looking at the correlation coefficient among the same cohort of kids taking the same tests and so on. If I had to guess, I'd bet it's ~0.4.
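
For what it's worth, checking that guess against the raw score files would be a short computation (the file and column names below are made up for illustration):

```python
import pandas as pd

# Hypothetical long-format file: one row per student per year.
df = pd.read_csv("tn_raw_scores.csv")  # columns: student_id, year, score

# One row per student, one column per year.
wide = df.pivot(index="student_id", columns="year", values="score")

# Correlation of the same kids' scores on consecutive administrations:
# a crude year-over-year reliability proxy for the cohort.
print(wide[[2021, 2022]].dropna().corr())
```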


Great comment. Spot on!


“Weapons of Math Destruction” had some insightful counter-claims about measuring teachers against expectations. I think the summary was that the standard deviation for a single teacher in a single year is so high that the statistical evaluation mechanism is almost useless, and that the people in DC who came up with the idea caused a lot of damage and ill will. I would be curious to see the point grappled with / countered as part of this series.
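
That claim is easy to sanity-check with a toy simulation (my own sketch, with assumed magnitudes: true teacher effects of ~0.1 student-level standard deviations, which is roughly the ballpark the VAM literature reports, and classes of 25):

```python
import numpy as np

rng = np.random.default_rng(1)
n_teachers, class_size = 500, 25

# Assumed: true teacher effects are small (0.1 SD) relative to
# student-level noise (1.0 SD).
true_effect = rng.normal(0.0, 0.1, n_teachers)

def one_year_rating():
    # A single-year rating is just the class-mean residual score.
    noise = rng.normal(0.0, 1.0, (n_teachers, class_size)).mean(axis=1)
    return true_effect + noise

year1, year2 = one_year_rating(), one_year_rating()
# Under these assumptions, the same teacher's ratings in consecutive
# years correlate at only ~0.2 -- the single-year estimate is mostly noise.
print(np.corrcoef(year1, year2)[0, 1])
```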


On the linked working paper: I'm not sure what standard practice is, but claims like "having only 70% of student data discoverable is OK" seem sketchy to me... The paper looks at so many inputs/outputs that it seems like it might be finding noise instead of insight. What I am really curious about is whether, if they applied the same model across many years, a school's ranking one year would correspond strongly with its ranking the next. Or are the results within a single year likely all noise?


Right. Submit that to a research journal and see how it goes.


>>We lack earnings data for approximately 30% of the analytic sample. The state UI data do not include earnings from self-employment, employment in the federal government or military, under-the-table pay, or earnings from employment in another state. As such, we cannot detect whether individuals with no reported earnings are unemployed, out of the labor force, or have nonreported earnings. As expected, students who leave Massachusetts for college and students who do not earn a high school degree are more likely to be missing earnings data, but probably for different reasons. We are reassured that our estimates reflect impacts on true earnings for several reasons.

I get that these are standard problems education researchers face, but their answer here isn't to muck about with their methods. They just say, "hey, Raj Chetty showed that it's probably the same whether students stayed or not" (p. 10 if you're looking for it). But let's think about that in terms of their overall purpose and questions:

>>First, what is the variation in high schools’ effects on four-year college graduation and adult earnings?

>>And second, are schools with larger than expected effects on these longer-run outcomes those that also improve test scores, improve students’ attendance, influence their academic trajectories, and/or promote college going?

Here's the line summarizing the Chetty finding: "Furthermore, students who leave the commuting zone have reasonably similar earnings (about 5% higher) than those who stay in the zone." So, wait: you want to investigate the impact of the schools these kids attended, and then you cite research showing that kids *who leave their schools* perform 5% better than those who stay?


They're discussing people who lived in one locale when they were children and in another locale as adults.


Right. And that kind of problem is totally normal for this kind of research to encounter. We want to follow all these kids through the years and see how they turn out, but some kind of attrition prevents us from following up on about 30% of them as adults. What's weird to me is that they kind of brush it off as okay because of Chetty's work, because 1) it's one study and 2) that study says the opposite of what they're trying to identify in this one. If kids who move away (and therefore change schools) do just as well or better as adults than kids who stay in place, then it's really odd to use that fact to justify the missing 30%. It gives me less confidence in their results.

Other people who do studies like this do not approach sample attrition problems by appealing to the already-established literature. What they do is try to control statistically for how that attrition might impact the results. They could say "any kid whom we cannot locate as an adult is excluded from this study." That's a choice that presents its own problems, survivor bias being the one that tops my mind. The larger point, and the answer to the OP's questions about the study, is that this kind of research is hard to do well, so taking the findings of a single position paper so seriously is not warranted.
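
One standard move here, instead of appealing to Chetty, is a simple sensitivity analysis: recompute the headline estimate under explicit best- and worst-case assumptions about the missing 30%. A toy version, with entirely made-up numbers:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy sample: adult earnings for 10,000 students, ~30% unobserved.
earnings = rng.lognormal(mean=10.5, sigma=0.6, size=10_000)
observed_mean = earnings[rng.random(10_000) >= 0.30].mean()

# Instead of assuming the missing kids look like the observed ones,
# bound the estimate: suppose they earn 20% less, or 20% more.
low = 0.70 * observed_mean + 0.30 * (0.80 * observed_mean)
high = 0.70 * observed_mean + 0.30 * (1.20 * observed_mean)
print(f"observed-only: {observed_mean:,.0f}; bounds: [{low:,.0f}, {high:,.0f}]")
```

If the substantive conclusion survives both bounds, the attrition is less worrying; if it flips, that missing 30% matters.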


We have a big challenge measuring school quality because so much of the measured outcome depends on going to college. If schools incrementally improve the college attendance rate of their students, they look good. But what about the kids who were already going to college, or the ones who probably weren't going to go anyway? All of these studies should look separately at college outcomes, non-college outcomes, and the shift between these two groups, and we generally need a better mental model of good education. Encouraging and enabling more kids to go to college is great, but we risk focusing on that too much.

To get on my soapbox for a minute, our model of college needs to change too. A lot of kids are impatient to get into the real world and do things that have a real impact. College just continues the "sandbox" aspect of formal education, which I think is often pretty demotivating. I bet that results in a lot of kids skipping college as not for them, or showing up and then floating along for a few years with a sense of alienation.

It would be interesting to see more new models, like say work-based college. Let's say you want to be a graphic designer. You do a 6 month bootcamp to get basic skills, then you start a half-time entry level job. The other half of the time, you have classes to fill in your professional skillset and perhaps to teach general education requirements. The education connects to your real work as well as rewards like raises and promotions. If it turns out that you don't like graphic design, you switch jobs during college rather than waiting until your twenties to test the job market. By the time you graduate, you already have X years of work experience and are more likely doing something you enjoy.


Northeastern University has run its co-op program for decades now. It's a five-year degree, and students have to complete, I believe, three separate 3-6 month co-ops with local employers in that time. I have no objections to the system, but if it's such a great idea, wouldn't other colleges have copied it decades ago?


Good question. I think a number of Canadian universities are more work experience oriented as well. I don't know much about why it hasn't spread more. It could be that it's not that great, but I also wonder whether there's a herd aspect to it. I imagine the 5 year degree is a disincentive too.


It's crucial to evaluate the impact of spending on schools vs. other spending priorities, particularly when it comes to using education as a poverty fighting tool.

I read FDB's post and my conclusion was that the impact of incremental education spending needs to be compared to spending on programs like Child Tax Credits.

I suspect money directly in the hands of parents is more impactful. We did have a one year national experiment in this regard in 2021 when there was a significant federal CTC and childhood poverty went down by about a third.

My second-ever post, below, was about the demise of this program: "The Cruel, Untimely, and Much Too Quiet Death of the Expanded Federal Child Tax Credit" (the post is under 1,000 words).

https://robertsdavidn.substack.com/p/the-cruel-untimely-and-much-too-quiet


The example I like to cite is when Zuckerberg gave Newark Schools hundreds of millions of dollars and there was not a lot of improvement. If parents aren't invested in their kids' education, odds are the kids won't be that invested either.


To what extent is the 0.05 standard deviation improvement per decade real? I want to see big improvements, not something only a statistician can peer at and see a difference in. I concluded covid vaccines work and masks didn’t because I can look at a graph.
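
For scale (my own back-of-envelope, assuming normally distributed scores): a 0.05 SD shift moves the median student only to about the 52nd percentile, which really is invisible on most graphs.

```python
from scipy.stats import norm

# Percentile of a student at the old median after a +0.05 SD shift:
print(norm.cdf(0.05))  # ~0.520, i.e., the 52nd percentile
```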


The Harvard study mentioned is plausible, but I did not understand how they identified school “quality” independent of the observed outcomes.

“But it’s still hard to convince the public that school quality should be defined based on outcomes and not all of the surface-level inputs.”

Isn’t the real difficulty that “quality” is popularly measured by outcomes, not by improvement in outcomes?

Comment deleted (Mar 25, 2023)

But that job would NOT be an improvement for the middle-class kid, so I'm not sure what your point is? [This is not to say that you do not have a point or that I disagree, just that I don't understand.]

Comment deleted (Mar 25, 2023)

OK. You are saying that a school that might be good at bringing up under-performing students might not work for average and above-average students. Right. The topic was the former, but the latter is also relevant for "education reform."


Was gonna reach for that "like" button, the first I've ever granted to a non-Milan guest post (I think), but... then you did the same thing Matt does: mischaracterizing FdB's stance based on what seems to be an overly literal, surface-level read. This runs both ways, with Freddie continually dunking on Matt's YIMBYism... based solely on his tweets, seemingly. Not sure where this historical beef comes from, but as someone who subscribes to both blogs, it's very weird.

The article in question: https://freddiedeboer.substack.com/p/education-commentary-is-dominated

The larger meat behind that bone: https://freddiedeboer.substack.com/p/education-doesnt-work-20

...building on FdB's first book, "The Cult of Smart", available at your friendly local bookstore.

Obviously no one's gonna wade through all that just on my say-so, so I'll try to summarize one last time: Freddie doesn't claim schools "don't work" writ large; actually, he goes to great pains to show that of course there are absolute gains from school, and yes, a lot of that came from successful reform. I notice that you've grokked his distinction between absolute and relative gains, which is one step in the right direction. But even here, the argument is not that disparate relative gains ("gaps") never change; as you point out, the research is quite clear on this, and anyway it follows naturally from absolute values changing. It'd be a very strange situation if absolute shifts just happened to coincidentally line up along the current relative-gap lines.

Here's the rub, though: even though we can move the needle here and there, sometimes even in meaningfully big ways, the bulk of those gaps will always come from outside school. Like all other measures, school reflects our society's inequities back at us, so at some point one must change society. These are exogenous factors that school can't do much to change. (Wraparound services attempt to ameliorate them somewhat and are even sometimes kinda successful, e.g., school lunch, but there are plenty of reasons to be wary of "mission creep" here: the Iron Law of Institutions, the law of comparative advantage, credentialism, etc.) Given this plateau, we ought to be really clear about what school can and cannot do... that is, think a bit like an EA and realize there are probably better ways to improve those long-term outcomes than indirectly, via school's diminishing returns. Get everyone through high school, sure; get everyone through college? Hold up.

That was the point of the optimism-bias post: "school" has become a way to launder hopes and dreams, a proxy battle that sucks energy out of larger debates. (School lunches work, but they wouldn't be necessary if we actually solved hunger in America, which is totally within our power, and yet we don't. Incremental gains are great, but don't forget to think bigger!) Ultimately, focusing on improving this correlate of long-term outcomes is a type of Goodharting.* Which isn't to say school doesn't have its own unique and intrinsic benefits - I would not want every kid to be "unschooled," and it's a load-bearing part of the current compromise on welfare (e.g., school-as-childcare, freeing up parents for more work hours). The question is: what is schooling for, and what role do we want it to play in society? Because we're really muddled on that right now, to everyone's detriment.

I think I'll sit out future education-related posts if they do the same thing. Too much talking past each other, which is unfortunate because I don't see the positions as contradictory. Not SB's strongest topic.

*And of course this also leads to Goodharting the Goodharting: https://educationrealist.wordpress.com/2021/09/18/false-positives/


> School lunches work, but wouldn't be necessary if we actually solved hunger in America

Well… they would, because you can't "solve hunger" in a way that stops people from being hungry again at lunchtime, which is when they're in school.

My memory of FdB's article is that he definitely sounded like he was arguing no school intervention could ever change the relative achievements of students. (But I can think of one - take the highest performing students and hit them on the head until they perform worse.)

I propose putting nootropics in the school lunches and seeing if that causes any more absolute improvements.


How is it determined what a good school is? That seems like exactly the sort of extremely difficult thing to measure that pops up all the time in social science, and instead of acknowledging that it's difficult, "experts" just take bad metrics and pretend that they're good. Is that what's happening here?


A good school is one with good students.

Good students + bad teachers > bad students + good teachers.


Aldeman says school quality is some combination of students' test scores, students' attendance, and the percentage of those students going on to earn a bachelor's degree.

(I'm guessing they aren't measuring this in elementary schools. Good luck picking the right one based on the college criteria.)


I would be fine with school reform if it relied on clear, clean data from about K-6. After that, how can you score and pay a teacher based on value-add if their kids are taking an 8th-grade test and a kid in their room has been to six schools in his life and reads at a 3rd-grade level? If that teacher spends a ton of limited time building that kid's reading fluency, and even succeeds, it still won't show on an 8th-grade test, because that test is still far beyond the student's reading level. These students are statistical noise, because there's nothing you can do for them that shows up on a test. Happen to have fewer of them one year? Then your scores look great.

Multiply this by 10 in high schools. I worked under value-add, and one year I was a genius AP teacher and the next I was awful. The truth is somewhere in between, but it's not measurable as long as kids who are extremely behind are included in the data.

I’ve come to the conclusion that those kids need one more burst of intensive catch up support in middle school, and then if they’re still behind, they go immediately to an alternative school and graduate in three years with a vocational degree.


When I rolled out my basic literacy program for 9th graders, they made huge gains in reading. This led their scores to go way up. I added value. Problem: they're not "my students" because my class was technically a study skills type elective. According to the tests, their English and Math teachers added that value.


A typical English class has to hit so many standards that it couldn’t use an intensive reading program. States require all sorts of weird junk: give a digital presentation where a student uses eye contact, collaborate on a wiki, write a personal narrative, write a research paper, etc etc.


When I first developed the program, I was told that I was not allowed to teach 9th graders to read because there were no basic literacy standards for 9th graders. I was, apparently, harming the students by denying them access to grade-level instruction.


This is maybe a nitpick, but...

"It turns out that schools had a big, long-term effect on students. Low-income students who attended a high school at the 80th percentile of quality were 6 percentage points more likely to earn a bachelor’s degree and earned 13% more money (or about $3,600) per year at age 30."

The linked article makes it clear that this increase is in comparison to students who attended a high school at the 20th percentile of quality. The effect is pretty small for going from the 20th to the 80th percentile.

And given that one of the main factors in school quality (if not the main one) is the composition of the student body (and their parents), there is going to be a hard limit on how many low-income students you can add to a given high-quality school before it is... no longer high-quality.

Additionally, the paper was limited to schools in Massachusetts. That isn't inherently a problem, but my understanding is that MA is generally a significant outlier with respect to education. So I'm not sure how useful it is.


I think they tried to regress out the SES factors in making that statement.
