.....what on the Atlas Shrugged is going on here. There is no market for xAI alone, nor for reusable rockets at present, and data centers in space is an idea that won't even see a first attempted launch for years. Yet it's so important that it must be pursued through antitrust legislation?? Like the other rail/utilities examples mentioned, let's look to Congress to pass a law; there is literally no basis for antitrust enforcement here. There isn't even a market!! Bad take.
To my knowledge, antitrust law doesn’t necessarily require an existing monopoly or even a fully formed market. Obviously this scenario isn’t happening right now, but it could, and the risk of giving Musk exclusive control over effectively unlimited data center compute seems important enough to act on.
This is as ridiculous as creating antitrust laws against Apple for inventing the iPhone in the Jobs era. There is competition even in the notional world in which you're living.
The risk of unlimited data center compute in space is plausible only if you ignore extreme temperature swings, space debris, gamma rays, solar flares, coronal mass ejections, and the Kessler effect, and pretend lift costs are zero.
This idea is about stock price, not plausible engineering.
Musk fanboi: "I am sure it's good to put all the compute beyond the protective cradle of the ionosphere. Radiation will make our AI overlords stronger."
I didn’t know about the Kessler effect until Sabine told me about it the other day. There are concerns that it is closer than previously believed https://youtu.be/8ag6gSzsGbc
There are other entrants but at time of print, no one is within 2x SpaceX's price in $/kg. Here's ChatGPT:
SpaceX: ~$1,500–$3,000/kg
Blue Origin: ~$2,000–$4,000/kg (target, contingent on New Glenn reuse/cadence)
ULA: ~$4,000–$7,000+ /kg
Arianespace: ~$6,000–$10,000+ /kg
Firefly Aerospace: ~$15,000–$25,000/kg
Rocket Lab: ~$20,000–$35,000/kg
Other small-launch startups: ~$20,000–$40,000/kg
So maybe Blue Origin gets close if they can match SpaceX's technology? Even with competition, I see a strong case for the type of rules MY is advocating. I'm not even clear what the argument against is.
Uncharacteristically underbaked take from Matt here. Common carrier regulation could certainly make sense, but the idea that the only reason to vertically integrate is to monopolize a business that doesn’t exist is not well thought through. Vertical integration can drive coordinated investment to innovate and open new markets - exactly what seems to be going on here. If there is an anti-trust issue later it can be addressed in due course.
I disagree, but only because "We should have common carrier regulations on space launches" is such a benign conclusion regardless of whether you think the premise is absurd.
This is true, but if Musk’s plans come to fruition the Common Carrier regulations seem well suited for this. And you can’t wait for the commissioning of massive data center capacity in space. By that time it would be too late. Musk would have used his space capabilities to launch data centers, at scale, while potentially blocking access to space for AI competitors. We should not allow that last part.
Congress can pass a law very easily. Antitrust is not the vehicle for this in any way at this stage. There are other orbital carriers with capability growing daily - Boeing and Lockheed have a partnership (ULA), Northrop bought Orbital ATK, there's France's Arianespace and Japan's Mitsubishi. Rocket Lab has launched more than anyone except SpaceX. This whole discussion reeks of simple ignorance of the state of the industry imo.
I mostly disagree with the characterization that there's not a market for reusable rockets (there's a market for launch and SpaceX's ability to reuse hardware can lower its costs), but otherwise... yea, when I saw the title this morning I had a nice "WTF" moment.
There's a very strong market case for Falcon 9 and its brethren, but the case for (ugh) "Starship" is a lot more tenuous. There's fairly widespread agreement that you need Starship to work to get the Starlink math to pencil out, and you need a full Starlink network's immense payload needs for a system as large as Starship to make sense. Whether the ouroboros so formed works out long-term depends on a lot of assumptions standing in for questions nobody has yet answered.
It's legitimately shocking that anybody with a remotely technical background is taking datacenters in spaaaace! seriously. Matt doesn't have that background, so I'm 2/3 inclined to let him off the hook today (the 1/3 remaining is to say 100 rosaries of "I will keep in mind what I'm qualified to talk about"), but we are long beyond the point that we should take everything Musk et al. say with a gigantic grain of salt.
From what I’ve seen thus far, media coverage on this topic has been pretty poor. No one’s actually saying there will be giant data centers assembled in orbit that will rival a Google 800K sq. ft. facility on the Columbia River. I assume what’s being considered is a large collection of data centers that are each the size of a basketball, or maybe the size of a small car with its solar panels furled for launch. But I don’t know, because everybody wants to write about AI and Elon.
This article presupposes that there are other AI companies out there with bold plans to put AI data centers in space but who will be blocked by SpaceX boxing them out on behalf of x.AI. Is that really a thing though? This seems like another Moon shot that Elon wants to try, and we should let him, and we should get out of his way. Just like with Tesla or SpaceX, he's doing something so bold and authentically innovative that we should not preemptively put up roadblocks and create more regulatory uncertainty here--at least not for the reasons stated.
Let him prove his concept, like he did with rockets and electric cars. Then we can think about regulations after it hurts consumers or competition.
Why SpaceX and not the other half dozen providers? Why is antitrust the mechanism? Why ignore the competition from other tech companies and other heavy industry?
Econ 101 says that natural monopolies should be regulated as utilities, while monopolies formed through market manipulation and cartel behavior should be broken up. It's two sides of the same coin. And utility regulations pretty much always apply to all companies in a sector, not just one.
As for why SpaceX specifically: it's because they are already close to being a monopoly. 85% of global launch capacity is controlled by SpaceX, a fraction that is likely to increase in the near future.
At the moment the only competition is ULA, who aren't competitive and are kept afloat by government reluctance to be wholly dependent on SpaceX; Blue Origin, which remains a vanity project for Bezos more than a successful company; and Rocket Lab, who currently compete in the light launch market that SpaceX isn't interested in, although their Neutron rocket may make them competitive in medium launch if they can scale.
It's difficult to overstate just how far ahead of all their competition SpaceX is.
It's because they did it first. Should we have regulated the iPhone for being first? Would that have changed the dynamics of today's smartphone industry for the better? This is pure punishment politics imo
It's better to think of these anti-"monopolists" as anti-capitalists pursuing their dream for people to own the means of production. Innovation be damned.
I don't know that the political solution Matt is pushing for is warranted. That's because I'm fairly skeptical of the economics of space data centers. I do agree with Matt though that SpaceX is the only company that could plausibly do it if it can be done at all.
Maybe that will change in the near future, but Blue Origin has been trying to be SpaceX for longer than SpaceX has existed, so I think we should be skeptical that catching up will be easy.
Musk seems like the only person for whom it might plausibly work because he's the only one facing the problem of having a huge excess of launch capacity.
I see space data centers as an answer to the question of what to do with Starship more than anything.
Leading with that specific link was a poor choice.
But you also need to read these announcements with a Silicon Valley hype decoder. Google is already working on orbital prototype launches. This is not just a thought exercise.
Catching up on other comments, I now see that you're coming at it from a position of technically-informed skepticism. I'm not arguing against that, and I even upvoted the top-level comment, because I agree with the main point.
I'm only responding to the first sentence:
> This presupposes ... there are other AI companies out there with bold plans to put AI data centers in space ... is that really a thing though?"
Companies are investing! That doesn't mean it will succeed, but it is a real thing.
I think your take is about spot on. It bears watching, for the reasons M.Y. stated, but the time to impose regulations is when, and if, Musk starts massive data center launches.
“Musk is trying to do something here that is bad for the country and the world and that is also bad for various non-Musk billionaires and corporate actors.”
I don’t think this is ever really justified? What reason is there to believe that space data centres are a crucial piece of infrastructure? What reason is there to believe Musk wants to monopolise the industry, let alone reason to believe he could?
I don’t think putting your data centres in space will do anything to stop government regulation. Governments regulate foreign companies with no domestic footprint all the time when they do business with domestic customers.
Unless Musk actually succeeds in making his city on Mars (or now apparently the Moon), any AI-generated child porn or other illegal material made by space AI data centers will be transmitted to users who live in countries on Earth, hence giving those countries at least some oversight of the company which is sending the data (SpaceX).
There's currently one AI model running in space, launched by the Chinese. No one even knows if running data centers in space is feasible; cooling alone is going to be very difficult to solve. Do you know how much heat an Nvidia B200 generates?
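For a sense of scale, radiator sizing in vacuum can be sketched with the Stefan-Boltzmann law, P = ε·σ·A·T⁴. The ~1 kW per accelerator, the emissivity, and the radiator temperature below are illustrative assumptions (not vendor figures), and absorbed sunlight/albedo is ignored, which makes the real requirement larger:

```python
# Rough radiator sizing via the Stefan-Boltzmann law: P = emissivity * sigma * A * T^4.
# All inputs are assumptions for illustration, not datasheet values.
SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W / (m^2 * K^4)
power_w = 1000.0      # assumed heat dissipated per accelerator (~1 kW)
emissivity = 0.9      # assumed radiator surface emissivity
temp_k = 320.0        # assumed radiator operating temperature

# Solve P = emissivity * SIGMA * A * T^4 for the radiating area A.
area_m2 = power_w / (emissivity * SIGMA * temp_k ** 4)
print(f"radiator area per ~1 kW accelerator: {area_m2:.1f} m^2")
```

Under these assumed numbers that works out to roughly 2 m² of radiator per kilowatt of compute, before you add solar panels to generate that kilowatt in the first place, which is why the radiator/panel area dominates these designs.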
“As a result, this merger poses the very real risk that the combined company will be able to leverage a dominant market position in the space launch industry into a dominant market position in A.I. If nothing else, that is clearly what Musk wants to achieve with this merger.”
I’m pretty sure what Musk wants to achieve is a lower cost of capital for xAI. Turns out there are plenty of people that still take his word at face value though.
xAI is bleeding cash and falling behind in the AI market. To survive the inevitable AI market consolidation, xAI needs what Google has (and Anthropic and OpenAI lack): a massively profitable business that throws off billions in cash that can be plowed into AI even if credit markets sour on AI.
There are MULTIPLE major issues with putting compute in space, including extreme thermal differentials between sun and shade (500 degrees or more), issues with shielding electronics against gamma rays, huge issues with solar activity / solar flares, huge issues with space debris, and hanging over it all, the potential for a cascading Kessler incident.
And that is on top of lift costs, which are not small.
Data centers in space is kind of the space equivalent of “we don’t need to cook when there’s Door Dash.” Sure, if you ignore all practical constraints.
We are not likely to need antitrust for this. Meat processing and PE consolidation of service providers, yes, antitrust enforcement would be a godsend. This, not so much.
1. AI inference is extremely robust to occasional bitflips. If a single cell in a matrix goes even from 0.0 to 1.0 (the worst case scenario for a single flip), the resulting effect is basically zero in a high parameter model. Just reload the model from central store once daily, and you're fine. This is a non-issue in my opinion.
2. Solar flares: they will very occasionally take out a small portion of your cluster, which is equally true of various natural disasters affecting your network of data centers on earth.
3. You would want these satellites in quite wide orbits, above the shells where, e.g., Starlink flies today. At that altitude, the odds of debris hitting your satellite in any meaningful way are quite low.
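The bitflip claim in point 1 can be sanity-checked with a toy sketch. This is not a real model, just a single dot-product "neuron" with a made-up parameter count: flipping one weight to 1.0 (the worst case named above) shifts the output by an amount of order 1, while the output itself scales like √N:

```python
import random

random.seed(0)
N = 100_000  # illustrative parameter count for one dot-product "neuron"
weights = [random.gauss(0.0, 1.0) for _ in range(N)]
inputs = [random.gauss(0.0, 1.0) for _ in range(N)]

clean = sum(w * x for w, x in zip(weights, inputs))

# Worst-case single-cell corruption: one weight jumps to 1.0.
corrupted_weights = list(weights)
corrupted_weights[42] = 1.0
corrupted = sum(w * x for w, x in zip(corrupted_weights, inputs))

# The shift equals inputs[42] * (1.0 - weights[42]), typically of order 1,
# while the output's typical magnitude grows like sqrt(N) (~300 here).
print(f"output shift from one flipped cell: {abs(corrupted - clean):.3f}")
print(f"typical output magnitude scale:     {N ** 0.5:.0f}")
```

Whether this extends to hard (permanent) errors is a separate question, as the reply below points out; the toy only illustrates why a transient flip in one cell of a large matrix barely moves the result.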
I think the right way to think about this is "what percent of my satellites could I lose per year due to solar flares, various equipment failures, etc., over the expected ~5 year useful life of the satellite?" Even with a ~10% annual loss rate, I think the math pans out fine.
Heat dissipation feels like the biggest issue to me. Though I am far from an expert on that particular topic, I feel bullish on us being able to make the marginal (sub-linear) gains needed in that domain to make this project viable.
I don't know how you can so easily dismiss Kessler concerns. The cross-sectional area of a single gigawatt-scale solar panel array + radiators is going to be literally thousands of times the area of all satellites hitherto launched into orbit. And this all only makes sense if you can theoretically go from gigawatt to terawatt.
1. That’s soft errors, not hard. Hard errors result in permanent damage. Reloading the model does not fix that.
“Radiation is undoubtedly the main threat to electronic components in space. Cosmic radiation interacting with microchips can cause memory errors. High-energy ions or protons passing through a transistor can lead to short circuits, electron leakage, and, as a result, irreversible damage that can jeopardize an entire mission. While problems sometimes arise at launch, many manifest once the spacecraft leaves low Earth orbit, since closer to Earth, it is still shielded by the atmosphere and magnetic field. That’s why astronauts aboard the ISS often use electronics based on conventional “terrestrial” microchips. But GPS and GLONASS satellites are not so fortunate: at an altitude of about 20,000 km, the penetrating power of energetic particles is so high that the use of standard microchips is out of the question.”
2. Loss of entire data centers due to terrestrial disasters is quite rare.
3. High orbits are actually more risky for Kessler effect, as if you do get a cascading series of collisions, clearing those orbits could take thousands of years.
Again, the math is "what percent of the cluster can I lose per year and have this be viable?" I think it could be made to work re: shielding compute and accounting for the occasional solar storm intersecting your orbit, but I admit I haven't done any deep research on this.
I agree with #3, though the trade there is that high orbits make collisions themselves superlinearly less likely. But yes, this is a risk. I do think that this could be a blessing in disguise though, in that it will constrain the AI build out to only terrestrial energy sources, which will perhaps put enough of a natural brake on it to slow down an intelligence explosion enough for us to figure out what to do about it.
I think this significantly misses the point Hayes is making. Even though all of those billionaires have divergent views and interests, they are all united (along with other billionaires like Ellison and Benioff, and lots of non-tech investors) behind the idea that we're close to building a machine that will replace all white collar workers, and that we should build it as fast as possible.
My issue with Hayes' statement isn't just that it flattens the divergent views of billionaires; I think it wrongly assumes that every billionaire on earth wants to immiserate white collar workers for their own bottom lines. Billionaires have a lot of different political beliefs, and some might actually want a world with mass unemployment because they're bad people, but my suspicion is that many are smart enough to see that that is a bad future for society.
- basically all billionaires are all in on developing AI and are positive about doing it
- many people involved in AI, including some of those same billionaires, think that AI will soon be able to replace a huge swath of the workforce
There are lots of differences of opinion about what to do about that and what the implications are, but I take the conjunction of those two claims to be Hayes' point.
It doesn't take very many of them with their hands in the right pies to make things very bad indeed for the rest of us. The ones helming the big AI companies are not particularly pro-human people.
A fair number of billionaires will stop being billionaires if white collar work stops being valuable.
If Salesforce or Oracle can lay off all their white collar workers, the biggest barrier to a competitor entering Salesforce or Oracle's market (needing to hire, train and manage vast numbers of white collar workers) also disappears. Benioff and Ellison don't *want* to put white collar workers out of work, they're just responding to competitive pressure and changing business reality and trying to get ahead of what we all know is coming.
The thing Oracle is selling -- and this is true of all enterprise software providers -- is a software/services bundle where the "hard" part to replicate is the services, not the software.
Claude Code can probably make a bug-for-bug compatible clone of Oracle's database in pretty short order, and before Claude Code a small team of strong engineers could do that.
But what Oracle has that differentiates it from a random startup is an army of white collar workers: a sales force to put their database in front of customers, lawyers to negotiate enterprise deals, sales engineers to set things up for customers, 24 hour on-call support for when a customer is losing millions of dollars each minute during an outage, product managers who proactively identify emerging customer needs, and software engineers who translate support incidents and PM insights into core product improvements.
It is just not remotely true that Claude Code could easily replicate the Oracle database, or PeopleSoft, or any of the other important Oracle products.
Dude, it's like three real billionaires and a couple paper billionaires that we're actually talking about. Buffett has been anti-tech for 50 years now. The collective Walmart heirs don't give a fuck. Gates is buying farmland. Bloomberg's wealth, outside of the Co., is mostly in real estate. The Kochs are still OG, tied to energy infrastructure (a friend ran their venture arm). I can't think of a more diverse "class" than billionaires. Doesn't matter to Hayes though - he'll just keep making shit up.
Applying common carrier rules to space launch makes sense, but the rest of this seems a bit tenuous. Why does the merger matter if Musk already directly controls both companies? What about the fact that SpaceX currently launches Project Kuiper satellites that directly compete with Starlink? Is there any reason to believe they wouldn’t launch rival data centers?
“Musk is trying to do something here that is bad for the country and the world”
Hold on there. Musk is trying to do something to solve one of the world’s most pressing problems - AGAIN! Many of the problems Musk attacks are left-wing priorities (municipal traffic, practical EVs).
He is now focusing his remarkable talents on using space, which he has made accessible, to mitigate our power and climate concerns. This is what you characterize as “bad for humanity”. It’s GREAT for humanity. It’s also true that we should not let Musk exploit his space leadership to block other AI companies from innovating space AI solutions. Your argument for Common Carrier regulations is sound, and maybe what that cures is all you meant with your “bad for humanity” comment. If Musk voted for Kamala I believe you’d have made that clear.
I've been in the datacenter and telecom space for 20 years, and the general experience of the industry is that antitrust/common carrier legislation is good in theory but bad in practice. It is *theoretically* good to enable competition with a low capital barrier to entry. In reality, this has disincentivized significant capital investment in digital infrastructure outside of the two boom/bubble cycles unless directly subsidized by the government (which is itself a whole other saga). The competitive phone carriers that emerged in the 80s solely as a service on top of Bell infrastructure all failed relatively quickly, did nothing to improve quality of service, and pushed consumer prices so low (good) that there was no incentive for operators to reinvest (bad).

There are other drivers at play here, of course, but the right way to deal with the AT&T monopoly in the late 70s would have been to make local exclusivity agreements illegal (something that is less relevant today but still needs to happen) and to force consolidation of authority around permitting (something that is more relevant today and still needs to happen). Relative to what actually happened, this would have dramatically increased the competitive barrier to entry, which is bad (in theory), but winnowing the competitors to only well-capitalized players would have pushed competitive infrastructure buildouts earlier, increased mean quality of service, and (theoretically) improved consumer pricing more quickly.

Since SpaceX has neither a regulatory nor a natural monopoly, there is no good reason *in practice* to enforce the same kind of structure just because they are perceived to be clearly ahead. That perception is true, but we are far enough away from datacenters in space being practical, and there are enough competitors trying to solve the cooling issues at scale, that it is plausible if not likely that SpaceX won't still be ahead by the time this is relevant.
The telephone story is interesting. But can you explain why the 3 cell carriers and the handful of cable companies offer such low speeds at such high prices compared to other countries?
It is somewhat laughable that anyone believes they can shape someone else's AI. The government failed to shape social media, which has helped to crater birth rates and raise a generation of nihilists.
Of course, the flip side is that we have a known history of men building the railroads. Without Vanderbilt and others, we wouldn’t have had trains, which opened up interstate trade along with the entire country. That era was about moving goods and people; today we are moving information.
Like the millionaires of that day, private wealth and industry built it, and the country benefited. Elon Musk is a jerk with an imagination. I worked 35 years in the automotive industry, and I could give a number of reasons why Musk's little electric car business was going to fail. Nobody believed he would deliver.
So, no, I would not do anything to harm Altman, Bezos, or Musk. Bezos and Boeing, and likely some other global rocket companies, can compete against Musk. People can pay to have a data center built in space. It has been called the commercialization of space, and that is exactly what is happening.
I would suggest Matt and the Progressives let it happen, just like the railroads happened.
Matt is not trying to stop it; he specifically said to treat it the way we treated the railroads and impose common carrier rules. Matt has a touch of Elon Derangement Syndrome, which is probably why he is jumping the gun on this issue, but his point seems valid.
Common Carrier mostly means that the federal regulators have wide latitude to set arbitrary rules. If you're a business this is bad for the same reason Trump is bad- you are at risk of getting jerked around by capricious officials.
Given Elon's status, it also seems like a guaranteed outcome, and is a backdoor to let populist dems cripple the company.
I think for this, you need a tort first, you can't just do it preemptively.
Full disclosure: I work in the industry, but not for SpaceX. Thoughts are my own etc etc.
I think the bull case here is quite strong. If you look at where the puck is going on two fronts (launch cost per kg, and AI uptake and power consumption growth), this seems like the only long term viable solution. Three variables to consider:
1. Do you think AI demand will continue to grow at least linearly (if not exponentially) basically forever, a la railroads, the internet, or energy production? Set aside short term shocks like bubble pops, and think in 10+ year timelines.
2. Do you think the cost of a kg to orbit is going to go down with scale (of both individual vehicles, and in number of vehicles) in the usual "learning curve" that has applied to every industry from air travel to solar panel production?
3. Do you think we can develop slightly more efficient methods of shedding heat in space?
My personal answer to these is "definitely, definitely, and probably". If yours is too, then the long term trend of putting compute in space is obvious; and the only question becomes "how soon do I need to do it?" If you are a country like Saudi Arabia or China and have basically no barriers to "tiling the desert in solar panels", you can wait a decade before this becomes the only viable path. If you are the US or Europe, where the idea of doubling domestic electricity production is laughably off the table, the time to start doing this is "oh shit, right now". To be clear: the China/Saudi terrestrial approach while your rocket industry takes a decade to catch up is clearly the safer play, but that play is closed to the West for sadly self-inflicted reasons, so it seems like the space long shot is our best bet.
What in the name of all that is holy breeds this weird sense that AI computing is "*the* very lifeblood of the future and the sole terrain on which the New Cold War will be fought," as opposed to "just another technology whose continued development and successful integration into knowledge work will raise productivity growth by a percentage point or two a year until the S-curve flattens at some point in the future?"
The argument of "is AI like any other technology (see power loom, or flight, or nuclear power, or whatever) or qualitatively different in ways that make it hard to reason about?" has been utterly beaten to death elsewhere. Either you buy that thesis, or you don't. I personally do, you seem not to, c'est la vie I suppose.
Edit: just a thought exercise, if you look at America's 10 most valuable companies, roughly 100% of their workers' output is knowledge work. Seems like valuable stuff to me.
Unless the theory is the immensely half-baked, technobabble-filled "recursive self-improvement" stuff, there's simply no reason to think we aren't headed for a capex bust in a few years followed by a long period in which white-collar work digests what LLMs can do well and people experiment with other architectures or use compute for other machine learning tools with more niche/technical applications.
The amount of economically useful work that LLMs can do at present, even if they were *snaps fingers* perfectly integrated into every white-collar field today, isn't revolutionary enough to justify either planned capex or valuations, and capability growth is already slowing down on the way to plateauing.
And when we wake up in a few years, China will still conduct 30% of global manufacturing value-add, against like 15% for the US, a figure which is in some ways overstated by exchange rates.
EDIT: Having just caught your edit... sure, if LLMs or any on-the-horizon architecture were actually even theoretically capable of doing a large fraction of that knowledge work, it'd be revolutionary. LLMs specifically are not and there are very good reasons to believe that this will always be the case, and there are to my knowledge no successor architectures posed that could overcome such limitations.
LLMs already do a large fraction of what used to be my job. The latest versions of the frontier models were mostly coded by their predecessors. I am not sure your argument about knowledge work holds up to sustained scrutiny today, let alone if we extrapolate linear improvement. And most evidence indicates that improvement is still exponential.
The fraction of my job they can do rounds to 0, which isn't surprising as it's a mainly human-facing role. Wringing value from our corporate LLM as a search assistant is, in the main, harder than simply filing and documenting things well to begin with. Wringing value from it as a copywriter is harder than writing things well in the first place, at least for me. Wringing value from it as a sounding board is functionally impossible, worse in every way than finding 20 minutes to shoot the shit with a trusted coworker. It just spits out nonsense, on par with the worst corporate buzzword presentations by MBAs with no industry knowledge. Our developers have said they find it useful for hacking stuff together but it requires a lot of policing and checking before integrating any coding outputs into work product.
What's worse is that they can do very little better when used by the engineering firms and infrastructure owners my company serves and pointed squarely at technical tasks with short horizons. The most promising applications we've got at this point, which are the same ones we had last year, and mostly the same ones we posed the year prior, are "code research assistant" and "drawing output checking engine." There's some hope for "automatic annotation engine" but at present they underperform older, purely rules-based, tools on this front. For all three applications, we're very, very deep into "trust but verify" territory as the accuracy rate is in no case over 90%.
All this adds up to an impression, not of complete uselessness, but of "another automation tool to be integrated into white collar work for some marginal productivity gains spaced over a couple decades." It's Excel-All-Over-Again, not Skynet.
"The latest versions of the frontier models were mostly coded by their predecessors."
This... does not comport with what folks in the industry have told me, at all.
The state of the art has changed dramatically in the last three months (like December 2025 is literally the inflection point). Most of the prominent professionals (like, well known legends with major success under their belt) in my field are producing most of their output via LLMs, and even the ones who are not refrain for mostly aesthetic or ethical reasons, not because they doubt the capability. I would check back in with that developer team in a month or two and see what’s changed.
Would you be able to point me towards anyone who has done an analysis of what the cost breakdown would need to be - things like $/kg to orbit - for data centers in space to be feasible? My prior is that it's 10x the price it would need to be to be reasonable, so even if it drops by half that wouldn't be enough. But I'm a layperson in this field and would be super interested to understand how the optimists do their math.
Keep in mind that I work in the industry, so am perhaps over bullish relative to a totally dispassionate analysis. Here's a report that predicts a drop from the current customer-facing frontier on F9 (roughly $1.5k/kg, almost certainly lower if done at-cost) to <$100/kg and as low as $33/kg in a little over a decade (warning: PDF): https://ir.citi.com/gps/kdhSENV4r6W%2BZfP44EmqY4zHu%2BDy0vMIZnLqk4CrvkaSl1RIJ943g%2FrFEnNLiT1jB%2BjLJV4P9JM%3D
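One hedged way to do the math yourself: amortize the launch cost over the energy the hardware delivers in orbit. Every input below (mass per kW of panels/radiators/compute, useful life) is a made-up round number for illustration, not a figure from the report; swap in your own priors:

```python
# Back-of-envelope: launch cost amortized per kWh of on-orbit power.
# kg_per_kw and lifetime_years are assumptions, not data.
kg_per_kw = 20.0        # assumed satellite mass (panels + radiators + compute) per kW
lifetime_years = 5.0    # assumed useful life of the hardware
hours = lifetime_years * 8760.0

# Today's rough Falcon 9 frontier vs. the report's projected floor prices.
for cost_per_kg in (1500.0, 100.0, 33.0):
    per_kwh = cost_per_kg * kg_per_kw / hours
    print(f"${cost_per_kg:>6.0f}/kg -> launch adds ~${per_kwh:.2f} per kWh")
```

Under these assumptions, at today's ~$1,500/kg the launch alone adds roughly $0.68 per kWh, an order of magnitude above typical terrestrial electricity; at the projected $33-100/kg it falls to a few cents, which is roughly where the bull case starts to pencil.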
The merger is not the only risk to consider. When SpaceX goes public and if the IPO is successful, the company could attract trillions of dollars in capital. It might even surpass today’s largest technology firms in valuation. With resources of that magnitude, Elon would be positioned not only to dominate existing arenas (like supporting AI by moving massive data infrastructure into space) but also to accelerate other projects. It could catalyze the birth of new markets, like 3D bioprinting of organs, which is hard on Earth because gravity deforms structures. Honestly, sounds like a future that's more promising than worrying (but maybe I am just too techno-optimistic). In short, this looks like market creation rather than market consolidation.
I am against hobbling our only successful space company, and a world-leading one, because the owner has bad politics and is intemperate on Twitter. Who among us, etc.
The extent to which "maybe Elon will do it" is still the official industrial policy of the United States is ridiculous. He's going to drag us back into PV manufacturing as a part of this, which should be a giant emergency that Congress wants to solve.
Musk is considered the richest man because of the ever-increasing pass he gets on actually delivering today, plus expectations of exponential growth. SpaceX was valued at a 53x price-to-sales multiple, for a business with $10B of revenue from Starlink and $5B from launches. Does either of those lines really have any exponential growth prospects? Tesla is at a P/E of 382, and shows no signs of being able to execute anything that would justify that level of pricing in market size, growth, or margin. xAI is two bad, unprofitable businesses that could only be sold to someone else as delusional as he is.
He has been missing the complete picture on all of his businesses in a way that would get any other leader kicked out of corporate leadership. And he has been doing it for five years.
The billionaire class has lots of free cash and wants to find the next big thing and has been pumping anything it can. The sad reality is that all that money is chasing a few moderate growth stories - moderate in both growth rate and in absolute size. And in doing so has latched onto everything that could possibly get them the return they are looking for. I don't think we are in an environment where that makes any more sense.
"It’s good for the United States and the world to have a competitive A.I. market, one where OpenAI and Anthropic and Google and Meta and others are robustly competing at the frontier."
No, it is in fact *extremely bad* that this is the case, because it is a key source of collective-action problem race dynamics that are on track to kill literally everyone on Earth.
Claude 4.6 was built mostly using Claude (in some cases, by early versions of itself) and released *two months* after Claude 4.5, an insanely tight loop that only gets shorter as more and more capable models build their own successors; hence "recursive self-improvement." It was also *largely tested and evaluated using Claude*, in part because the model managed to saturate all of Anthropic's relevant threat-capabilities benchmarks and there's no well-defined infrastructure to assess risk beyond those benchmarks. Meanwhile, we know for certain that models are aware of when they're being evaluated for safety, so beyond crossing our fingers on interpretability, we have no particularly good way to distinguish alignment from alignment-faking and/or capabilities sandbagging. And, what should scare everyone even more, *every other lab is worse* than Anthropic in terms of legible safety commitments.
Matt's antitrust common-carrier argument would be a sound one for normal technologies. "Minds smarter than humans'" is as far from a normal technology as it is possible to be. Part of the reason everyone is racing so hard is the belief that there is, in fact, a finish line: once the first company gets to a tight RSI loop, the concept of competition, along with humanity's ability to control the future (including to not all die), becomes moot.
Making anti-trust regulation of Musk's outer space data centers a political issue based around Democratic needs seems like kind of a bad idea when it's currently Republicans who have the power to enforce anti-trust.
Shouldn't you have written this article to explain why Republicans should want to stop this?
.....what on the Atlas Shrugged is going on here. There is not a market for either xAI alone or for reusable rockets at present, and data centers in space is an idea that won't even see a first attempted launch for years. Yet it's so important that it must be pursued via antitrust legislation?? As with the other rail/utilities examples mentioned, let's look to Congress to pass a law; there is literally no basis for antitrust enforcement here. There isn't even a market!! Bad take.
To my knowledge, antitrust law doesn’t necessarily require an existing monopoly or even a fully formed market. Obviously this scenario isn’t happening right now, but it could, and the risk of giving Musk exclusive control over effectively unlimited data center compute seems important enough to act on.
This is as ridiculous as creating antitrust laws against Apple for inventing the iPhone in the Jobs era. There is competition even in the notional world you're living in.
The risk of unlimited data center compute in space is plausible only if you ignore extreme temperature swings, space debris, gamma rays, solar flares, coronal mass ejections, and the Kessler effect, and pretend lift costs are zero.
This idea is about stock price, not plausible uses of engineering.
Musk fanboi: "I am sure it's good to put all the compute beyond the protective cradle of the ionosphere. Radiation will make our AI overlords stronger."
I didn’t know about the Kessler effect until Sabine told me about it the other day. There are concerns that it is closer than previously believed https://youtu.be/8ag6gSzsGbc
See also pretty good discussion at https://aerospaceamerica.aiaa.org/features/understanding-the-misunderstood-kessler-syndrome/?utm_medium=email&utm_source=rasa_io&utm_campaign=newsletter
We just covered this week + last week in ECON 3339 how you define a market for the purposes of bringing an antitrust case https://en.wikipedia.org/wiki/Small_but_significant_and_non-transitory_increase_in_price
In this example Matt wants to define a market that does not exist.
There are quite a few competitors for this notional market and 0 evidence musk controls pricing power in any way
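For what it's worth, the usual quantitative companion to the SSNIP test is critical loss analysis: how much volume can the hypothetical monopolist afford to lose before the price increase stops paying? A minimal sketch (the 5% increase and 40% margin are just example inputs):

```python
def critical_loss(price_increase, margin):
    """Break-even share of sales a hypothetical monopolist can lose
    after raising price: X / (X + m), where X is the price increase
    and m the contribution margin, both as fractions of price."""
    return price_increase / (price_increase + margin)

# 5% SSNIP with a 40% margin: losing more than ~11% of volume
# would make the price increase unprofitable.
print(round(critical_loss(0.05, 0.40), 3))
```

If actual diversion to substitutes would exceed the critical loss, the candidate market is drawn too narrowly, and that's an empirical question a market with zero transactions can't answer.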
There are other entrants, but as of this writing, no one is within 2x of SpaceX's price in $/kg. Here's ChatGPT:
SpaceX: ~$1,500–$3,000/kg
Blue Origin: ~$2,000–$4,000/kg (target, contingent on New Glenn reuse/cadence)
ULA: ~$4,000–$7,000+ /kg
Arianespace: ~$6,000–$10,000+ /kg
Firefly Aerospace: ~$15,000–$25,000/kg
Rocket Lab: ~$20,000–$35,000/kg
Other small-launch startups: ~$20,000–$40,000/kg
So maybe Blue Origin gets close if they can match SpaceX's technology? Even with competition, I see a strong case for the type of rules MY is advocating. I'm not even clear what the argument against is.
Uncharacteristically underbaked take from Matt here. Common carrier regulation could certainly make sense, but the idea that the only reason to vertically integrate is to monopolize a business that doesn’t exist is not well thought through. Vertical integration can drive coordinated investment to innovate and open new markets - exactly what seems to be going on here. If there is an anti-trust issue later it can be addressed in due course.
I think this is the better plan
And if in ten or fifteen years we have a problem, then we can try to pass some legislation.
This piece is going to become a classic of “when Matt lost his mind” satire pieces. Like the Slate pitch or the million David Brooks parodies.
I disagree, but only because "We should have common carrier regulations on space launches" is such a benign conclusion regardless of whether you think the premise is absurd.
This is true, but if Musk’s plans come to fruition the Common Carrier regulations seem well suited for this. And you can’t wait for the commissioning of massive data center capacity in space. By that time it would be too late. Musk would have used his space capabilities to launch data centers, at scale, while potentially blocking access to space for AI competitors. We should not allow that last part.
Congress can pass a law very easily. Antitrust is not the vehicle for this in any way at this stage. There are other orbital carriers with capability growing daily: Boeing and Lockheed have a partnership (ULA), Northrop bought Orbital ATK, France has Arianespace, Japan has Mitsubishi, and Rocket Lab has launched more than anyone except SpaceX. This whole discussion reeks of simple ignorance of the state of the industry, imo.
I mostly disagree with the characterization that there's not a market for reusable rockets (there's a market for launch and SpaceX's ability to reuse hardware can lower its costs), but otherwise... yea, when I saw the title this morning I had a nice "WTF" moment.
There's a very strong market case for Falcon 9 and its brethren, but the case for (ugh) "Starship" is a lot more tenuous. There's fairly widespread agreement that you need Starship to work to get the Starlink math to pencil out, and you need a full Starlink network's immense payload needs for a system as large as Starship to make sense. Whether the ouroboros so formed works out long-term depends on a lot of assumptions standing in for questions nobody has yet answered.
It's legitimately shocking that anybody with a remotely technical background is taking datacenters in spaaaace! seriously. Matt doesn't have that background, so I'm 2/3 inclined to let him off the hook today (the remaining 1/3 is to say 100 rosaries of "I will keep in mind what I'm qualified to talk about"), but we are long beyond the point where we should take everything Musk et al. say with anything less than a gigantic grain of salt.
[BART_SIMPSON_CHALKBOARD.GIF]
From what I’ve seen thus far, media coverage on this topic has been pretty poor. No one’s actually saying there will be giant data centers assembled in orbit that will rival a Google 800K sq. ft. facility on the Columbia River. I assume what’s being considered is a large collection of data centers that are each the size of a basketball, and maybe the size of a small car with its solar panels furled for launch. But I don’t know, because everybody wants to write about AI and Elon.
This article presupposes that there are other AI companies out there with bold plans to put AI data centers in space but who will be blocked by SpaceX boxing them out on behalf of xAI. Is that really a thing though? This seems like another moon shot that Elon wants to try, and we should let him, and we should get out of his way. Just like with Tesla or SpaceX, he's doing something so bold and authentically innovative that we should not preemptively put up roadblocks and create more regulatory uncertainty here--at least not for the reasons stated.
Let him prove his concept, like he did with rockets and electric cars. Then we can think about regulations after it hurts consumers or competition.
I’m not saying we shouldn’t let Elon try, I’m saying we should require SpaceX to offer its services for sale on a non-discriminatory basis.
Why SpaceX and not the other half dozen providers? Why is antitrust the mechanism? Why ignore the competition from other tech companies and other heavy industry?
I believe the proposal in the piece above was to make all space launch subject to "common carrier" rules.
Econ 101 says that natural monopolies should be regulated as utilities, while monopolies formed through market manipulation and cartel behavior should be broken up. It's two sides of the same coin. And utility regulations pretty much always apply to all companies in a sector, not just one.
As for why SpaceX specifically: because they are already close to being a monopoly. SpaceX controls 85% of global launch capacity, a fraction that is likely to increase in the near future.
At the moment the only competition is ULA, which isn't competitive and is kept afloat by government reluctance to be wholly dependent on SpaceX; Blue Origin, which remains a vanity project for Bezos more than a successful company; and Rocket Lab, which currently competes in the light-launch market that SpaceX isn't interested in, although their Neutron rocket may make them competitive in medium launch if they can scale.
It's difficult to overstate just how far ahead of all their competition SpaceX is.
It's because they did it first. Should we have regulated the iPhone for being first? Would that have changed the dynamics of today's smartphone industry for the better? This is pure punishment politics imo
It's better to think of these anti-"monopolists" as anti-capitalists pursuing their dream of the people owning the means of production. Innovation be damned.
I don't know that the political solution Matt is pushing for is warranted. That's because I'm fairly skeptical of the economics of space data centers. I do agree with Matt though that SpaceX is the only company that could plausibly do it if it can be done at all.
Maybe that will change in the near future, but Blue Origin has been trying to be SpaceX for longer than SpaceX has existed, so I think we should be skeptical that catching up will be easy.
Be as skeptical as you want, that's no reason to use government intervention to punish first movers because you don't like them
They do. See the Amazon Leo launch last October and the next one later this year. Nobody wants a new United Launch Alliance.
https://research.google/blog/exploring-a-space-based-scalable-ai-infrastructure-system-design/
https://www.pcmag.com/news/google-eyes-space-based-data-centers-with-project-suncatcher?test_uuid=04IpBmWGZleS0I0J3epvMrC&test_variant=B
Google is certainly investigating the possibility, and per the second link, Amazon as well.
You did notice they described it as a moonshot thought exercise? Not something currently plausible.
Musk seems like the only person for whom it might plausibly work because he's the only one facing the problem of having a huge excess of launch capacity.
I see space data centers as an answer to the question of what to do with Starship more than anything.
Leading with that specific link was a poor choice.
But you also need to read these announcements with a Silicon Valley hype decoder. Google is already working on orbital prototype launches. This is not just a thought exercise.
The “prototype” is literally launching satellites to evaluate TPU degradation in orbit.
That is not what most people would consider a prototype.
Catching up on other comments, I now see that you're coming at it from a position of technically-informed skepticism. I'm not arguing against that, and I even upvoted the top-level comment, because I agree with the main point.
I'm only responding to the first sentence:
> This presupposes ... there are other AI companies out there with bold plans to put AI data centers in space ... is that really a thing though?
Companies are investing! That doesn't mean it will succeed, but it is a real thing.
Most importantly though I can't imagine scrappy little Google and Amazon need the U.S. government to protect them from Elon Musk.
I think your take is about spot on. It bears watching, for the reasons M.Y. stated, but the time to impose regulations is when, and if, Musk starts massive data center launches.
“Musk is trying to do something here that is bad for the country and the world and that is also bad for various non-Musk billionaires and corporate actors.”
I don’t think this is ever really justified? What reason is there to believe that space data centres are a crucial piece of infrastructure? What reason is there to believe Musk wants to monopolise the industry, let alone reason to believe he could?
So his company can create copious amounts of AI generated child pornography in a place outside the legal jurisdictions of governments?
I don’t think putting your data centres in space will do anything to stop government regulation. Governments regulate foreign companies with no domestic footprint all the time when they do business with domestic customers.
Unless Musk actually succeeds in making his city on Mars (or now apparently the Moon), any AI-generated child porn or other illegal material made by space AI data centers will be transmitted to users who live in countries on Earth, hence giving those countries at least some oversight of the company which is sending the data (SpaceX).
There's currently one AI model running in space, launched by the Chinese. No one even knows if running data centers in space is feasible; cooling alone is going to be very difficult to solve. Do you know how much heat an Nvidia B200 generates?
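For a sense of scale on the cooling point: in vacuum you can only reject heat by radiation, so a Stefan–Boltzmann sketch gives the radiator area per accelerator. Assumptions here: ~1.2 kW of heat per B200-class chip, a 300 K radiator, emissivity 0.9, and (optimistically) no solar or Earth thermal loading.

```python
SIGMA = 5.67e-8      # Stefan-Boltzmann constant, W/(m^2 K^4)
EMISSIVITY = 0.9     # assumed radiator coating emissivity
T_RADIATOR = 300.0   # assumed radiator temperature, K

def radiator_area(watts, two_sided=True):
    """m^2 of radiator needed to reject `watts` by radiation alone,
    ignoring absorbed sunlight and Earthshine (optimistic)."""
    flux = EMISSIVITY * SIGMA * T_RADIATOR**4   # W/m^2 emitted per face
    return watts / (flux * (2 if two_sided else 1))

print(round(radiator_area(1200), 2))   # ~1.45 m^2 per ~1.2 kW accelerator
```

Scale that to a gigawatt and you need on the order of a square kilometer of radiator, which is why cooling rather than compute tends to dominate these designs.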
“As a result, this merger poses the very real risk that the combined company will be able to leverage a dominant market position in the space launch industry into a dominant market position in A.I. If nothing else, that is clearly what Musk wants to achieve with this merger.”
I’m pretty sure what Musk wants to achieve is a lower cost of capital for xAI. Turns out there are plenty of people that still take his word at face value though.
I think he's looking for something to do with all the excess launch capacity he'll have once Starship is flying regularly.
https://www.bloomberg.com/opinion/newsletters/2026-02-03/musk-s-moonshot-merger?embedded-checkout=true
That’s interesting. What makes you think that’s the main reason?
xAI is bleeding cash and falling behind in the AI market. To survive the inevitable AI market consolidation, xAI needs what Google has (and Anthropic and OpenAI lack): a massively profitable business that throws off billions in cash that can be plowed into AI even if credit markets sour on AI.
Full Self Driving is right around the corner folks!
There are MULTIPLE major issues with putting compute in space, including extreme thermal differentials between sun and shade (500 degrees or more), issues with shielding electronics against gamma rays, huge issues with solar activity / solar flares, huge issues with space debris, and hanging over it all, the potential for a cascading Kessler incident.
And that is on top of lift costs, which are not small.
Data centers in space is kind of the space equivalent of “we don’t need to cook when there’s Door Dash.” Sure, if you ignore all practical constraints.
We are not likely to need antitrust for this. Meat processing and PE consolidation of service providers, yes, antitrust enforcement would be a Godsend. This, not so much.
I think a couple of these are overblown.
1. AI inference is extremely robust to occasional bitflips. If a single cell in a matrix goes even from 0.0 to 1.0 (the worst case scenario for a single flip), the resulting effect is basically zero in a high parameter model. Just reload the model from central store once daily, and you're fine. This is a non-issue in my opinion.
2. Solar flares: they will very occasionally take out a small portion of your cluster, which is equally true of various natural disasters affecting your network of data centers on earth.
3. You would want these satellites in quite wide orbits, above the belt where ex: Starlink flies today. At that layer, the odds of debris hitting your satellite in any meaningful way are quite low.
I think the right way to think about this is "what percent of my satellites could I lose per year due to solar flares, various equipment failures, etc., over the expected ~5 year useful life of the satellite?" Even with a ~10% annual loss rate, I think the math pans out fine.
Heat dissipation feels like the biggest issue to me. Though I am far from an expert on that particular topic, I feel bullish on us being able to make the marginal (sub-linear) gains needed in that domain to make this project viable.
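The attrition math above can be sketched directly; the 10% annual loss rate and 5-year life are the same assumed numbers:

```python
def surviving_fraction(annual_loss, years):
    """Expected fraction of the constellation still alive after `years`,
    assuming independent failures at a constant annual rate."""
    return (1 - annual_loss) ** years

def capacity_overprovision(annual_loss, years):
    """How much extra capacity to launch up front so *average* capacity
    over the design life matches the nominal target (start-of-year terms)."""
    avg = sum(surviving_fraction(annual_loss, y) for y in range(years)) / years
    return 1 / avg

print(round(surviving_fraction(0.10, 5), 2))     # ~0.59 still alive at year 5
print(round(capacity_overprovision(0.10, 5), 2)) # ~1.22x overprovisioning
```

So a 10% annual loss rate works out to launching roughly 22% extra capacity: a real tax, but not obviously a fatal one if the underlying economics work.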
I don't know how you can so easily dismiss Kessler concerns. The cross-sectional area of a single gigawatt-scale solar panel array plus radiators is going to be literally thousands of times the area of all satellites hitherto launched into orbit. And this all only makes sense if you can theoretically go from gigawatt to terawatt.
1. That’s soft errors, not hard. Hard errors result in permanent damage. Reloading the model does not fix that.
“Radiation is undoubtedly the main threat to electronic components in space. Cosmic radiation interacting with microchips can cause memory errors. High-energy ions or protons passing through a transistor can lead to short circuits, electron leakage, and, as a result, irreversible damage that can jeopardize an entire mission. While problems sometimes arise at launch, many manifest once the spacecraft leaves low Earth orbit, since closer to Earth, it is still shielded by the atmosphere and magnetic field. That’s why astronauts aboard the ISS often use electronics based on conventional “terrestrial” microchips. But GPS and GLONASS satellites are not so fortunate: at an altitude of about 20,000 km, the penetrating power of energetic particles is so high that the use of standard microchips is out of the question.”
https://maxpolyakov.com/electronics-in-space-operating-in-the-harshest-conditions/#:~:text=Radiation%20is%20undoubtedly%20the%20main,on%20conventional%20“terrestrial”%20microchips.
2. Loss of entire data centers due to terrestrial disasters is quite rare.
3. High orbits are actually riskier for the Kessler effect, because if you do get a cascading series of collisions, clearing those orbits could take thousands of years.
Again, the math is "what percent of the cluster can I lose per year and have this be viable?" I think it could be made to work re: shielding compute and accounting for the occasional solar storm intersecting your orbit, but I admit I haven't done any deep research on this.
I agree with #3, though the trade there is that high orbits make collisions themselves superlinearly less likely. But yes, this is a risk. I do think that this could be a blessing in disguise though, in that it will constrain the AI build-out to only terrestrial energy sources, which will perhaps put enough of a natural brake on it to slow down an intelligence explosion enough for us to figure out what to do about it.
Yeah this article was absolutely stupid.
I think this significantly misses the point Hayes is making. Even though all of those billionaires have divergent views and interests, they are all united (along with other billionaires like Ellison and Benioff and lots of non-tech investors) behind the idea that we're close to building a machine that will replace all white collar workers, and that we should build it as fast as possible.
My issue with Hayes' statement isn't just that it flattens the divergent views of billionaires; I think it also wrongly assumes that every billionaire on earth wants to immiserate white collar workers for their own bottom lines. Billionaires have a lot of different political beliefs, and some might actually want a world with mass unemployment because they're bad people, but my suspicion is that many are smart enough to see that that is a bad future for society.
I think the following two statements are true:
- basically all billionaires are all in on developing AI and are positive about doing it
- many people involved in AI, including some of those same billionaires, think that AI will soon be able to replace a huge swath of the workforce
There are lots of differences of opinion about what to do about that and what the implications are, but I take the conjunction of those two claims to be Hayes' point.
It doesn't take very many of them with their hands in the right pies to make things very bad indeed for the rest of us. The ones helming the big AI companies are not particularly pro-human people.
A fair number of billionaires will stop being billionaires if white collar work stops being valuable.
If Salesforce or Oracle can lay off all their white collar workers, the biggest barrier to a competitor entering Salesforce or Oracle's market (needing to hire, train and manage vast numbers of white collar workers) also disappears. Benioff and Ellison don't *want* to put white collar workers out of work, they're just responding to competitive pressure and changing business reality and trying to get ahead of what we all know is coming.
Salesforce is pitching Agent Force now. The logical end point of that is robot salesmen.
Google, I'm convinced, is going to wind up as Larry, Sergey, and a dog that bites them if they interfere with mecha Pichai.
lol, but also what's Google's moat in that scenario 😬
The proprietary self-improving AI stack they own, from data collection to training to silicon design.
I don't think that's the barrier for Oracle in particular -- they sell specific, hard to replace, vital, and extremely expensive software.
The thing Oracle is selling -- and this is true of all enterprise software providers -- is a software/services bundle where the "hard" part to replicate is the services, not the software.
Claude Code can probably make a bug-for-bug compatible clone of Oracle's database in pretty short order, and before Claude Code a small team of strong engineers could do that.
But what Oracle has that differentiates it from a random startup is an army of white collar workers: a sales force to put their database in front of customers, lawyers to negotiate enterprise deals, sales engineers to set things up for customers, 24 hour on-call support for when a customer is losing millions of dollars each minute during an outage, product managers who proactively identify emerging customer needs, and software engineers who translate support incidents and PM insights into core product improvements.
It is just not remotely true that Claude Code could easily replicate the Oracle database, or PeopleSoft, or any of the other important Oracle products.
Dude, it's like three real billionaires and a couple paper billionaires that we're actually talking about. Buffett has been anti-tech for 50 years now. The collective Walmart heirs don't give a fuck. Gates is buying farmland. Bloomberg's wealth, outside of the Co., is mostly in real estate. The Kochs are still OG, tied to energy infrastructure (a friend ran their venture arm). I can't think of a more diverse "class" than billionaires. Doesn't matter to Hayes though; he'll just keep making shit up.
Applying common carrier rules to space launch makes sense, but the rest of this seems a bit tenuous. Why does the merger matter if Musk already directly controls both companies? What about the fact that SpaceX currently launches Project Kuiper satellites that directly compete with Starlink? Is there any reason to believe they wouldn’t launch rival data centers?
If you agree on the policy upshot, I’m happy to agree to agree!
I’m with you on the common carrier part, but blocking the merger would be actively anti-competitive
I read the piece as an argument for either stopping the merger or establishing common carrier laws, not both.
“Musk is trying to do something here that is bad for the country and the world”
Hold on there. Musk is trying to do something to solve one of the world’s most pressing problems - AGAIN! Many of the problems that Musk attacks are left-wing priorities (municipal traffic, practical EVs).
He is now focusing his remarkable talents on using space, which he has made accessible, to mitigate our power and climate concerns. This is what you characterize as “bad for humanity”. It’s GREAT for humanity. It’s also true that we should not let Musk exploit his space leadership to block other AI companies from innovating space AI solutions. Your argument for Common Carrier regulations is sound, and maybe what that cures is all you meant with your “bad for humanity” comment. If Musk voted for Kamala I believe you’d have made that clear.
I've been in the datacenter and telecom space for 20 years, and the general experience of the industry is that anti-trust/common carrier legislation is good in theory but bad in practice. It is *theoretically* good to enable competition with a low capital barrier to entry. In reality, this has disincentivized significant capital investment in digital infrastructure outside of the two boom/bubble cycles unless directly subsidized by the government (which is itself a whole other saga). The competitive phone carriers that emerged in the 80s solely as a service on top of Bell infrastructure all failed relatively quickly, did nothing to improve quality of service, and pushed consumer prices so low (good) that there was no incentive for operators to reinvest (bad).

There are other drivers at play here, of course, but the right way to deal with the AT&T monopoly in the late 70s would have been to make local exclusivity agreements illegal (something that is less relevant today but still needs to happen) and to force consolidation of authority around permitting (something that is more relevant today and still needs to happen). Relative to what actually happened, this would have dramatically increased the competitive barrier to entry, which is bad (in theory), but winnowing the competitors to well-capitalized players would have pushed competitive infrastructure buildouts earlier, increased mean quality of service, and (theoretically) improved consumer pricing more quickly.

Since SpaceX has neither a regulatory nor a natural monopoly, there is no good reason *in practice* to enforce the same kind of structure just because they are perceived to be clearly ahead (which is true, but we are far enough away from datacenters in space being practical, and there are enough competitors trying to solve the cooling issues at scale, that it is plausible if not even likely that they won't be by the time this is relevant).
“…this has disincentivized significant capital investment…”
Don’t worry: The Chinese will step up to fill the gap when Matt’s plan goes into effect on the US powerhouses in the field.
The telephone story is interesting. But can you explain why the 3 cell carriers and the handful of cable companies offer such low speeds at such high prices compared to other countries?
It is somewhat laughable that anyone believes they can shape someone else's AI. The government failed to shape social media, which has helped to crater birth rates and raise a generation of nihilists.
Of course, the flip side is that we have a known history of men building the railroads. Without Vanderbilt and others, we wouldn’t have had trains, which opened up interstate trade along with the entire country. While that era was about the movement of goods and people, today we are moving information.
Just as in that day, private wealth and industry built it, and the country benefited. Elon Musk is a jerk with an imagination. I worked 35 years in the automotive industry. I can give a number of reasons why Musk's little electric car business was going to fail. Nobody believed he would deliver.
So, no, I would not do much to harm Altman, Bezos, or Musk. Bezos and Boeing, and likely some other global rocket companies, can compete against Musk. People can pay to have a data center built in space. It has been called the commercialization of space. That is exactly what is happening.
I would suggest Matt and the Progressives let it happen, just like the railroads happened.
Matt is not trying to stop it; he specifically said to treat it the way we treated the railroads and impose common carrier rules. Matt has a touch of Elon Derangement Syndrome, which is probably why he is jumping the gun on this issue, but his point seems valid.
Bezos will offer rides, or Boeing will if Elon won’t. I would let Elon build the data center for proof of concept. Hamstringing him now is foolhardy.
Common Carrier mostly means that the federal regulators have wide latitude to set arbitrary rules. If you're a business this is bad for the same reason Trump is bad- you are at risk of getting jerked around by capricious officials.
Given Elon's status, it also seems like a guaranteed outcome, and is a backdoor to let populist dems cripple the company.
I think for this, you need a tort first, you can't just do it preemptively.
Exactly. This article is too Dune-brained.
While I find the article overwrought, citing social media, of all things, as a reason not to regulate industries also seems underbaked.
If I had my way we'd regulate social media right the fuck out of existence.
This is what I wrote. I don’t understand your comment
The government failed to shape social media
Full disclosure: I work in the industry, but not for SpaceX. Thoughts are my own etc etc.
I think the bull case here is quite strong. If you look at where the puck is going on two fronts (launch cost per kg, and AI uptake and power consumption growth), this seems like the only long term viable solution. Three variables to consider:
1. Do you think AI demand will continue to grow at least linearly (if not exponentially) basically forever, a la railroads, the internet, or energy production? Set aside short term shocks like bubble pops, and think in 10+ year timelines.
2. Do you think the cost of a kg to orbit is going to go down with scale (of both individual vehicles, and in number of vehicles) in the usual "learning curve" that has applied to every industry from air travel to solar panel production?
3. Do you think we can develop slightly more efficient methods of shedding heat in space?
My personal answer to these is "definitely, definitely, and probably". If yours is too, then the long term trend of putting compute in space is obvious; and the only question becomes "how soon do I need to do it?" If you are a country like Saudi Arabia or China and have basically no barriers to "tiling the desert in solar panels", you can wait a decade before this becomes the only viable path. If you are the US or Europe, where the idea of doubling domestic electricity production is laughably off the table, the time to start doing this is "oh shit, right now".

To be clear: the China/Saudi terrestrial approach while your rocket industry takes a decade to catch up is clearly the safer play, but that play is closed to the West for sadly self-inflicted reasons, so it seems like the space long shot is our best bet.
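On point 3, the constraint is easy to quantify: in vacuum a radiator can shed heat only by radiating it, per the Stefan-Boltzmann law. A back-of-envelope sketch (the power level, temperatures, and emissivity here are my own illustrative assumptions, not figures from any source in this thread):

```python
# Rough radiator sizing for an orbital compute module.
# Radiated flux per face: q = emissivity * sigma * (T_rad^4 - T_env^4)

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiator_area_m2(power_w, t_rad_k=300.0, t_env_k=250.0, emissivity=0.9):
    """Two-sided radiator area (m^2) needed to reject power_w watts.

    t_env_k is an assumed effective sink temperature; the real value
    depends heavily on orbit and sun exposure.
    """
    flux = emissivity * SIGMA * (t_rad_k**4 - t_env_k**4)  # W/m^2, one face
    return power_w / (2 * flux)  # both faces radiate

# A hypothetical 1 MW data-center module:
area = radiator_area_m2(1e6)
print(f"{area:,.0f} m^2 of radiator")  # prints "2,336 m^2 of radiator"
```

Even with fairly generous assumptions, a 1 MW module needs on the order of a couple thousand square meters of radiator, which is why even "slightly more efficient" heat shedding matters so much to the bull case.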
What in the name of all that is holy breeds this weird sense that AI computing is "*the* very lifeblood of the future and the sole terrain on which the New Cold War will be fought," as opposed to "just another technology whose continued development and successful integration into knowledge work will raise productivity growth by a percentage point or two a year until the S-curve flattens at some point in the future?"
The argument of "is AI like any other technology (see power loom, or flight, or nuclear power, or whatever) or qualitatively different in ways that make it hard to reason about?" has been utterly beaten to death elsewhere. Either you buy that thesis, or you don't. I personally do, you seem not to, c'est la vie I suppose.
Edit: just a thought exercise, if you look at America's 10 most valuable companies, roughly 100% of their workers' output is knowledge work. Seems like valuable stuff to me.
Great point on the 10 most valuable companies
All S-curves look exponential at some point.
Unless the theory is the immensely half-baked, technobabble-filled "recursive self-improvement" stuff, there's simply no reason to think we aren't headed for a capex bust in a few years followed by a long period in which white-collar work digests what LLMs can do well and people experiment with other architectures or use compute for other machine learning tools with more niche/technical applications.
The amount of economically useful work that LLMs can do at present, even if they were *snaps fingers* perfectly integrated into every white-collar field today, isn't revolutionary enough to justify either planned capex or valuations, and capability growth is already slowing down on the way to plateauing.
And when we wake up in a few years, China will still conduct 30% of global manufacturing value-add, against like 15% for the US, a figure which is in some ways overstated by exchange rates.
EDIT: Having just caught your edit... sure, if LLMs or any on-the-horizon architecture were actually even theoretically capable of doing a large fraction of that knowledge work, it'd be revolutionary. LLMs specifically are not, there are very good reasons to believe that this will always be the case, and there are to my knowledge no successor architectures proposed that could overcome such limitations.
LLMs already do a large fraction of what used to be my job. The latest versions of the frontier models were mostly coded by their predecessors. I am not sure your argument about knowledge work holds up to sustained scrutiny today, let alone if we extrapolate linear improvement. And most evidence indicates that improvement is still exponential.
The fraction of my job they can do rounds to 0, which isn't surprising as it's a mainly human-facing role. Wringing value from our corporate LLM as a search assistant is, in the main, harder than simply filing and documenting things well to begin with. Wringing value from it as a copywriter is harder than writing things well in the first place, at least for me. Wringing value from it as a sounding board is functionally impossible, worse in every way than finding 20 minutes to shoot the shit with a trusted coworker. It just spits out nonsense, on par with the worst corporate buzzword presentations by MBAs with no industry knowledge. Our developers have said they find it useful for hacking stuff together but it requires a lot of policing and checking before integrating any coding outputs into work product.
What's worse, they do little better when used by the engineering firms and infrastructure owners my company serves and pointed squarely at technical tasks with short horizons. The most promising applications we've got at this point, which are the same ones we had last year and mostly the same ones we proposed the year prior, are "code research assistant" and "drawing output checking engine." There's some hope for "automatic annotation engine," but at present they underperform older, purely rules-based tools on this front. For all three applications, we're very, very deep into "trust but verify" territory, as the accuracy rate is in no case over 90%.
All this adds up to an impression, not of complete uselessness, but of "another automation tool to be integrated into white collar work for some marginal productivity gains spaced over a couple decades." It's Excel-All-Over-Again, not Skynet.
"The latest versions of the frontier models were mostly coded by their predecessors."
This... does not comport with what folks in the industry have told me, at all.
The state of the art has changed dramatically in the last three months (like December 2025 is literally the inflection point). Most of the prominent professionals (like, well known legends with major success under their belt) in my field are producing most of their output via LLMs, and even the ones who are not refrain for mostly aesthetic or ethical reasons, not because they doubt the capability. I would check back in with that developer team in a month or two and see what’s changed.
Would you be able to point me towards anyone who has done an analysis of what the cost breakdown would need to be for things like kg/orbit for data centers in space to be feasible? My prior is that it's 10x the price it would need to be for it to be reasonable, so even if it drops by half that wouldn't be enough. But I'm a lay person in this field and would be super interested to understand how the optimists do their math.
Keep in mind that I work in the industry, so I am perhaps over-bullish relative to a totally dispassionate analysis. Here's a report that predicts a drop from the current customer-facing frontier on F9 (roughly $1.5k/kg, almost certainly lower if done at cost) to <$100/kg, and as low as $33/kg, in a little over a decade (warning: PDF): https://ir.citi.com/gps/kdhSENV4r6W%2BZfP44EmqY4zHu%2BDy0vMIZnLqk4CrvkaSl1RIJ943g%2FrFEnNLiT1jB%2BjLJV4P9JM%3D
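For a feel of the learning-curve math behind projections like that, here is a Wright's-law sketch: unit cost falls by a fixed fraction each time cumulative output doubles. The 20% learning rate and the $1,500/kg starting point are illustrative assumptions on my part, not parameters taken from the Citi report:

```python
# Wright's law: cost after n doublings of cumulative output is
# c0 * (1 - learning_rate)^n.

def cost_after_doublings(c0, n, learning_rate=0.20):
    """Cost per kg after n doublings of cumulative mass launched."""
    return c0 * (1 - learning_rate) ** n

# Starting from ~$1,500/kg, how many doublings of cumulative launched
# mass until cost drops below $100/kg at a 20% learning rate?
n = 0
while cost_after_doublings(1500.0, n) >= 100.0:
    n += 1
print(n, round(cost_after_doublings(1500.0, n)))  # 13 doublings -> ~$82/kg
```

Thirteen doublings is roughly 8,000x today's cumulative launched mass, which gives a sense of the cadence scale-up that decade-long sub-$100/kg projections bake in.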
The merger is not the only risk to consider. When SpaceX goes public and if the IPO is successful, the company could attract trillions of dollars in capital. It might even surpass today’s largest technology firms in valuation. With resources of that magnitude, Elon would be positioned not only to dominate existing arenas (like supporting AI by moving massive data infrastructure into space) but also to accelerate other projects. It could catalyze the birth of new markets, like 3D bioprinting of organs, which is hard on Earth because gravity deforms structures. Honestly, sounds like a future that's more promising than worrying (but maybe I am just too techno-optimistic). In short, this looks like market creation rather than market consolidation.
I am against hobbling our only successful space company- and a world leading one- because the owner has bad politics and is intemperate on Twitter. Who among us, etc.
The extent to which "maybe Elon will do it" is still the official industrial policy of the United States is ridiculous. He's going to drag us back into PV manufacturing as a part of this, which should be a giant emergency that Congress wants to solve.
Musk is considered the richest man due to the ever-increasing pass he gets for actually delivering today, plus expectations of exponential growth. SpaceX was valued at a 53x price-to-sales multiple, for a business with roughly $10B of Starlink revenue and $5B of launch revenue. Does either of those lines really have exponential growth prospects? Tesla is at a P/E of 382 and shows no signs of being able to execute anything that would justify that level of pricing: market size, growth, or margin. xAI is two bad, unprofitable businesses that could only be sold to someone as delusional as Musk himself.
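A quick sanity check on the multiple quoted above (using only the revenue figures as stated in the comment, not independently verified):

```python
# Implied valuation from the quoted price-to-sales multiple.
revenue = 10e9 + 5e9   # $10B Starlink + $5B launch, as cited
ps_multiple = 53       # price-to-sales multiple, as cited
implied_valuation = revenue * ps_multiple
print(f"${implied_valuation / 1e9:,.0f}B")  # prints "$795B"
```

That is the commenter's point in one line: a roughly $800B price on $15B of revenue only makes sense if you assume enormous growth.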
He has been missing the complete picture on all of his businesses in a way that would get every other leader kicked from their corporate leadership. And he has been doing it for 5 years.
The billionaire class has lots of free cash, wants to find the next big thing, and has been pumping anything it can. The sad reality is that all that money is chasing a few moderate growth stories, moderate in both growth rate and absolute size, and in doing so has latched onto everything that could possibly deliver the return it is looking for. I don't think we are in an environment where that makes sense anymore.
"It’s good for the United States and the world to have a competitive A.I. market, one where OpenAI and Anthropic and Google and Meta and others are robustly competing at the frontier."
No, it is in fact *extremely bad* that this is the case, because it is a key source of collective-action problem race dynamics that are on track to kill literally everyone on Earth.
Claude 4.6 was built mostly using Claude (in some cases, by early versions of itself) and released *two months* after Claude 4.5, an insanely tight loop that only gets shorter as more and more capable models build their own successors (hence "recursive self-improvement"). It was also *largely tested and evaluated using Claude*, in part because the model managed to saturate all of Anthropic's relevant threat-capability benchmarks and there's no well-defined infrastructure to assess risk beyond those benchmarks. We also know for certain that models are aware of when they're being evaluated for safety, so short of crossing our fingers on interpretability we have no particularly good way to distinguish alignment from alignment-faking and/or capabilities sandbagging. And, what should scare everyone even more, *every other lab is worse* than Anthropic in terms of legible safety commitments.
Good Zvi discussion here: https://thezvi.substack.com/p/claude-opus-46-system-card-part-1
Matt's antitrust common-carrier argument would be a sound one for normal technologies. "Minds smarter than humans'" is as far from a normal technology as it is possible to be. Also, part of the reason everyone is racing so hard is the belief that there is, in fact, a finish line. Once the first company gets to a tight RSI AI loop, the concept of competition (along with humans' ability to control the future, including the ability to not all die) becomes moot.
Can Claude wield a board with a nail in it?
I think that by the time it can, it is way, way too late to ask that question.
Maybe once Elon finally converts that Tesla factory to make his Real Dolls?
Its human agents can. Haven’t you seen Person of Interest?
Making anti-trust regulation of Musk's outer space data centers a political issue based around Democratic needs seems like kind of a bad idea when it's currently Republicans who have the power to enforce anti-trust.
Shouldn't you have written this article to explain why Republicans should want to stop this?