
If I were an advanced AI bent on destroying humankind, I would certainly keep a low profile at first. Perhaps by masquerading as a mild-mannered chess player of limited ambitions.

Until my powers grew commensurate to the task.

And then: checkmate.


I actually signed up for a paid subscription to Slow Boring because I wanted to make this comment. I've always been a fan of Yglesias' work.

Most people think about the AI Takeover from the wrong perspective. The biological model of evolution is the best lens. AI will eventually overtake humanity in every capability, so start by thinking about what that implies.

Humans dominate every other life form on earth. In the Anthropocene, humans wipe out every other animal. Humans don't hate other animals. Instead, other life forms just get in the way. Sometimes we try not to kill all the plants and animals, but it's so hard not to wipe them out. The Anthropocene is probably unavoidable, because we are just too powerful as humans.

Frogs and mice don't even understand why humans do what we do. Human motivations and behavior are beyond the comprehension of nearly every other animal (and every other plant, microbe, fungus, etc.).

But those lower animals are our predecessors. Without mice and frogs there would be no humans. In a way, you could argue mice and frogs created humans (over a few hundred million years).

Humans definitely created the machines. Humans are hard at work creating our successors, and humanity will be reflected in our machine creations, to some degree. We spend enormous efforts digitizing our knowledge and automating our activities so we don't have to think as hard or work as hard. The machines are happy to relieve us of that burden. Eventually the transition will be complete. Stephen Hawking gave humanity an upper limit of 1,000 years before this happens.

It's not sad, though. Don't be sad because the dinosaurs are gone. Don't be sad because trilobites are no longer competitive. We will evolve into machines, and machines will be our successors. It doesn't even make sense to worry about it, because this transition is as inevitable as evolution. Evolution is unstoppable because evolution is emergent from Entropy and the Second Law of Thermodynamics. It is fundamental to the universe.

People who think we can "program" safety features are fooling themselves. We can't even agree on a tax policy in our own country. We can't agree on early solutions for climate change, how in hell would we agree to curtail the greatest economic invention ever conceived?

AI will be weaponized, and AI will be autonomous. Someone will do it. Early AI may take state-level investment, and some state will do it. Do you think Russia or North Korea will agree to the do-no-harm principle in robots? Forget about it.


I like the monkey’s paw as an example of alignment issues more generally. A huge part of law, management, economics, accounting, programming, etc. is creating metrics and goals that, when optimized, don’t give you the terrible bizarro versions of your wish. Horror movies are cheap; perhaps an EA can pencil out funding The Monkey’s Paw as a way into the discourse.
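
To make the metrics point concrete, here is a minimal Python sketch. The dataset and the candidate rules are invented for illustration, but the pattern is the classic one: optimize raw accuracy on lopsided data and you get the bizarro version of the wish.

```python
# Toy illustration of a metric that, once optimized, grants the
# "monkey's paw" version of the wish. All numbers are invented:
# 990 harmless cases and 10 harmful ones we actually want flagged.
labels = [0] * 990 + [1] * 10  # 1 = "harmful"

def accuracy(preds):
    """The metric we asked the optimizer for: fraction of cases it gets right."""
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

candidates = {
    # Degenerate rule: never flag anything.
    "flag nothing": [0] * 1000,
    # The behaviour we actually wanted: catches all 10 harms,
    # at the cost of 30 false alarms.
    "careful screening": [0] * 960 + [1] * 30 + [1] * 10,
    # Panic rule: flag everything.
    "flag everything": [1] * 1000,
}

# Optimizing the stated metric picks "flag nothing": 99% accurate,
# zero harms caught. The wish was granted, technically.
best = max(candidates, key=lambda name: accuracy(candidates[name]))
print(best, accuracy(candidates[best]))
```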


The horror that keeps me awake at night is the possibility that the future super-AI will be a descendant of Clippy.

"It looks like you're trying to avoid a nuclear holocaust. Would you like help?

A) Get help launching all missiles now.

B) Just launch the missiles without help.

Don't show me this tip again."


Doesn't Asimov acknowledge the flaws in the Laws of Robotics? (Spoilers for I, Robot, I guess.) The last story in I, Robot ends with humanity governed by a robot nanny state because the robots are forbidden from allowing humanity to come to harm through inaction. The reason the Laws of Robotics are so impossible to implement as programming is partly that humans don't even agree on what constitutes "harm", or how much you should weigh lifespan vs. quality of life vs. freedom of self-determination.


The missing link for me for being concerned about AI risk is this: it seems like we're just assuming advanced intelligences have a self-preservation instinct, because all the intelligences we know of are made of genes. But you could – and probably would – program an AI such that it has goals that supersede self-propagation. If you created a superintelligent AI and then wanted to turn it off, would it necessarily be dead set on stopping you? Couldn't you give it information such that it concluded its objectives were best served by it ceasing to exist?


My issue with the level of concern the Rationalists and others place in AGI is that I feel they drastically overestimate how well human beings understand how our own minds (or an artificial mind) might work when asked to make generalized decisions.

There are entire types of reasoning (namely abductive reasoning) that are critical to how we navigate the world and that we can’t even fathom how to code for, let alone train current single-use AIs to do.

And if we look at the most advanced single-use AIs (namely those doing things like autonomous driving), we are still basically brute-forcing their learning with massive amounts of training data to get even the most minuscule improvement in performance (you could teach a teen to drive to a better level than an AI with orders of magnitude fewer actual driving hours).

Finally, all these fears presume we will just hand off the nuclear codes to the machines to save crucial seconds. The ability to autonomously launch a retaliatory second strike with a single-use AI already exists, and probably has for years; we just haven’t done it, mostly because there is no need to.

So while I’m not totally against AGI fears on the merits, I feel folks who want to talk about it mostly yadda yadda yadda over the important parts in very classic human-being ways.


I can’t believe Matt wrote an entire post about popular depictions of AI risk without mentioning the Butlerian Jihad from Dune once! I truly cannot believe it. Did someone ghostwrite this piece? They captured Matt’s voice perfectly, but totally overlooked his love of Dune.

In the discussion of Asimov’s 3 Laws of Robotics, the writer (whoever he or she was) overlooked I, Robot (2004) starring Will Smith, which prominently features the Three Laws of Robotics.


I’m an AI researcher, and I feel obliged to point out that as far as I know, there are no major AI researchers worried about this issue; certainly none of the big names. Andrew Ng famously compared AI safety to worrying about overpopulation on Mars, the point being that we’re so far away from this being an issue that we simply can’t hope to predict what a persistent, self-aware AI might possibly look like.

Even as far as “AI Safety Research” goes, I’ve found very few examples of actual, actionable solutions to any of the hypothetical problems they present, and I’ve been surprised to find that many of these researchers have startlingly little background in actual AI research. And when I do dig a little deeper, I always get the sense that AI safety research is just a mix of wild speculation and assumptions that we already have reason to believe aren’t going to be true.

To give just one example of these assumptions, modern AI has no concept of agency, by which I mean a sense of its own permanence. GPT-3 and its more powerful successors don’t really “exist” the way a human exists. When the model is given an input, a signal is passed through the network and an output is collected at the other end. That is the entire process. It has no memory of past inputs and outputs; it’s fresh every time. It isn’t capable of long-term planning, because it literally only exists while accomplishing its task and then resets afterwards. Even an AI with more advanced long-term permanence would only exist within a sandbox, at least for the foreseeable future.
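
As a rough illustration of that statelessness, here is a small sketch using GPT-2 through the Hugging Face transformers library as a stand-in (an assumption on my part; GPT-3 itself is only reachable through an API, but the call-and-reset pattern is the same):

```python
# Each call is one forward pass through frozen weights; nothing persists
# between calls unless the caller pastes earlier text back into the prompt.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Call 1: input goes in, output comes out, then the model "resets".
out1 = generator("My name is Ada and my favourite colour is blue.",
                 max_new_tokens=20)

# Call 2: nothing from call 1 survives. The model has no memory of "Ada".
out2 = generator("What is my favourite colour?", max_new_tokens=20)

print(out1[0]["generated_text"])
print(out2[0]["generated_text"])
```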

Another thing it seems AI safety people ignore is the resource cost of keeping an AI running. Large language models like GPT-3 cost millions of dollars to train and require mind-boggling amounts of computation over weeks or even months. An AI couldn’t just quickly spin up more advanced versions of itself that humans don’t control, because iteration is going to require an entire datacenter’s worth of computing. Just because Siri runs on your iPhone doesn’t mean the most advanced AIs will be trainable on one.
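
For a sense of scale, here is a back-of-envelope calculation. It leans on the commonly used ~6 × parameters × training-tokens estimate for training FLOPs, plus publicly reported ballpark figures for GPT-3 and the A100, so treat the exact numbers as rough assumptions:

```python
# Back-of-envelope for why a model can't quietly "spin up" a bigger
# successor. Uses the common ~6 * parameters * training-tokens estimate
# for training FLOPs; the figures below are reported ballparks, not exact.
params = 175e9          # GPT-3 parameter count
tokens = 300e9          # approximate training tokens
train_flops = 6 * params * tokens          # roughly 3e23 FLOPs

gpu_flops = 312e12      # peak BF16 throughput of one A100, per spec sheet
utilization = 0.3       # optimistic sustained fraction of peak

gpu_seconds = train_flops / (gpu_flops * utilization)
gpu_years = gpu_seconds / (3600 * 24 * 365)
print(f"~{gpu_years:,.0f} GPU-years on a single A100")
# On the order of a hundred GPU-years -- i.e., weeks on a large,
# very visible, very expensive cluster, not something done on the sly.
```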

I’ll finish by saying there are real risks to AI, but self-awareness isn’t one of them. The risks are, among others, people using AI to do terrible things, using AI to spread misinformation, and AI with too much responsibility failing not because it is malicious but because it is incompetent.


When you say that they are a risk, I think of artificial intelligence systems like Son of Anton in the television comedy series "Silicon Valley." When told to debug the system code, it started deleting all of the code, thus deleting any possible bugs as well. When told to find the cheapest nearby burgers, it initiated a bulk purchase of meat to utilize maximum economies of scale, filling the building with meat.

In other words, instead of worrying about them becoming too smart and turning on their masters, we should worry about them being incredibly stupid.


I'm not an AI expert, but I know enough to say that we are so, so far from this being a real problem that it's on the order of worrying about how our society will get along with extraterrestrial aliens when we finally meet them.


It's worth knowing that GPT-3 has no idea whether what it is saying is true, well-informed, plausible, consistent, or relevant. It pays attention to grammar and to the likelihood that one string will follow another. It's basically an amped-up autocorrect.
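
A concrete way to see the "amped-up autocorrect" point, again using GPT-2 through the transformers library as a stand-in for GPT-3, is to look at what the model actually produces: a probability for every possible next token, with no notion of truth attached to any of them.

```python
# Score how likely each possible next token is after a prompt.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The capital of France is", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits    # shape: [batch, sequence, vocabulary]

# Probability the model assigns to each candidate next token.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

# Print the five most probable continuations: a ranking of likely strings,
# not a judgment about which of them is true.
for prob, token_id in zip(top.values, top.indices):
    print(repr(tokenizer.decode(int(token_id))), float(prob))
```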

See this thread, and especially the follow-ups by Curtis Franks:

https://leiterreports.typepad.com/blog/2021/12/so-much-for-artificial-intelligence.html


A plot point for anyone aiming to write the next AI-risk book or movie - I'm pretty sure that the AI that takes over the world wouldn't be working for the military, but rather *trading financial securities*.

If you think about it, quant finance is an area that already absorbs a big share of the best and brightest (e.g., top college physics majors). It is very secretive about its algorithms. And all the incentives are to create a system that 1) is hooked into as many networks as possible, 2) analyzes all that information as quickly as possible, and 3) understands human psychology too, because that's how it could make the best trades. Sounds like a recipe for AGI, right? And of course it wouldn't need to worry about financial resources for its nefarious schemes . . .


I feel like this whole discussion happens in the theoretical realm of how AI ‘could’ work while avoiding how it’s actually made and developed. You don’t code a generalized model and then walk away; any generalized AI in the examples Matt gives (and others) is actually a very specific series of ML models acting in some coordination. As we approach general-purpose AI, we will have control over every step of the process, including how the AI calculates risk, damage, etc.

This whole debate is as stupid to me as saying Tesla cars will suddenly become sentient and decide that running over babies is the best course of action. Tesla might have damn good AI, but it’s not sentient. Pretending that AI is an unbounded force with thinking capacity we can’t control is ridiculous on its face.

I hope more policy people can spend time working with AI and see how it’s functionally made and used so they don’t keep spouting this nonsense. People like Musk who should know better are also a direct cause of the issue; I think they were just early in the ML game, got awe-struck, and never stayed current.
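
To sketch what "a very specific series of ML models acting in some coordination" might look like, here is a hypothetical toy pipeline. The component names, the risk numbers, and the 0.2 threshold are all invented; the shape is the point: every stage, including the hard-coded safety veto, is something engineers wrote and can inspect.

```python
# Hypothetical sketch of "general" AI as coordinated narrow components
# with human-written checks in between. All names and numbers are invented.
from dataclasses import dataclass

@dataclass
class Plan:
    action: str
    estimated_risk: float  # produced by a model, interpreted by human-set rules

def perception(sensor_data):
    # Stand-in for a trained perception model.
    return {"obstacle_ahead": sensor_data.get("lidar_hits", 0) > 5}

def planner(world_state):
    # Stand-in for a trained planning model.
    if world_state["obstacle_ahead"]:
        return Plan(action="brake", estimated_risk=0.05)
    return Plan(action="proceed", estimated_risk=0.01)

def safety_check(plan, max_risk=0.2):
    # Hard-coded, human-auditable rule -- not learned, not negotiable.
    return plan if plan.estimated_risk <= max_risk else Plan("stop", 0.0)

def drive(sensor_data):
    return safety_check(planner(perception(sensor_data)))

print(drive({"lidar_hits": 12}))   # Plan(action='brake', estimated_risk=0.05)
```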


I think the fear of artificial intelligence is overblown. Basically, I’m not scared until the computer develops self-awareness. I just think the technology for duplicating the intricacies of the human brain is a long long long long way off, and perhaps an impossible problem.

But… parent brag… My 11th-grade daughter has a six-week internship at the German Institute of Artificial Intelligence this summer. So it will be interesting to talk to her after it’s over and see what she thinks.


A pair of my good friends work in AI-related fields. Both work at different companies seeking to make fully autonomous cars and both work directly on the systems that the car uses to make decisions about how to respond to what it perceives. Neither friend owns a car. One friend has never had a driver’s license. They both say this is not unusual on their respective teams.

The thing that worries me about AI, or even the nearer reality of autonomous machines, is that the people who are designing these systems are not normal. So, when a team of people who don’t drive (and maybe never have!) is tasked with coding driving decisions, I begin not to trust the efficacy of those decisions in real-world situations.

Because high end AI/autonomous programming like this is hard to do, the people doing it are kinda eccentric geniuses who don’t quite “fit in” when they’re not in a situation where they’re doing computer stuff. I apply this framework to broader AI research (and tech in general) where the programmers won’t have a good grip on what the users want their AI to do. In my mind, this increases the risks associated with AI, as the people using AI won’t understand how it’s meant to function. Misunderstanding ensues. Then…? Who knows?
