186 Comments

Having spent some time with fellow engineers discussing this whole LaMDA mess, we feel like the real thing to watch out for is whether the "AI" starts showing autonomy and directing what it does - or doesn't.

Consider how early in a child's development they will say "NO. I don't wanna."

THAT is where we see them headed down a path of autonomy and development as a real person, toward their own hopes & dreams & objectives. And in that sense, these deluxe chatbots are nothing like personhood. (Thankfully.)


The concept of sentience itself is challenging without even focusing on AI systems. Consider the debate around animal rights and welfare, which hinges on the subjective experience of these creatures in different conditions. While animals cannot communicate their experience using human language, we can observe that certain actions cause great distress and pain. We can even explain some of the neural and hormonal processes associated with those experiences and show that they are highly similar to those that occur in human biology. That leads us to care about how animals are treated; no similar biological analogy exists for AI systems.

Sentience is also a consideration in human concerns like the abortion rights debate and end-of-life care. Ezra explored some of these challenges in a recent podcast episode with law professor Kate Greasley, who has extensively studied and written on the legal and moral philosophy of abortion. [1] I found their discussion deeply interesting as they explored the question of what makes something human. A common theme was that in philosophical terms personhood may be something of a continuum, but for practical and legal reasons we need hard dividing lines.

[1] https://www.nytimes.com/2022/05/31/opinion/ezra-klein-podcast-erika-bachiochi.html


A related but I think maybe underdiscussed problem is how society copes with the fact that the more we understand about humans the more our own sentience gets called into question. So many algorithms are based around predicting our behaviour when obviously our self-perception is that we're making independent decisions, and as these get better and better I wonder if there will be a lot of unforeseen consequences. Already a lot of the online experience is mediated by the algorithms' modeled construct of who they think you are – to anthropomorphise a little bit.

Jun 22, 2022·edited Jun 22, 2022

As on many threads about this story, I find staggeringly little input from actual AI scientists.

But there are very real, insurmountable barriers to the sentience of something like LaMDA, and the AI community isn't taking claims of sentience seriously not because it's dismissing them out of hand (well, Marcus is) but because it understands enough about how these systems work to know that it just doesn't make sense. And no, it's not just because "LaMDA's just one big pattern-matching engine"; the reasons are deeper.

Even without knowing exactly what sentience is, we can still be reasonably sure that certain things are required for sentience. Matt's points about semantic externalism miss the point. The correctness of Matt's or his son's mental model of a banana is irrelevant; the fact is that he did have a mental model of that banana. Maybe the models were "wrong" in some way (though I would argue that Matt's was essentially correct the entire time), but the fact that the model existed at all is really important. **

The thing to realize is that LaMDA definitely doesn't have this model, not about anything it talks about. This isn't a situation of "like, how do you know, man?" This thing was trained exclusively on text. It's not just that it wasn't trained with images; it wasn't trained with anything you could call experiences, nothing you could call opinions. When you ask it what movies it likes, it hasn't seen any of those movies. It doesn't have opinions on any of those movies; it doesn't even know what happened in those movies except as a string of meaningless (to it) symbols. It's not "wrong" when it says Goodfellas is worse than The Departed, because being wrong implies there was actual meaning behind its words.
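To make the "meaningless symbols" point concrete, here's a toy sketch (plain Python with an invented six-word vocabulary; real systems use learned subword vocabularies, and this is nothing like Google's actual pipeline) of roughly what a language model receives:

```python
# To the model, a movie title is just a sequence of integer IDs drawn from a
# vocabulary; the mapping below is invented purely for illustration.
toy_vocab = {"the": 1, "departed": 2, "is": 3, "better": 4, "than": 5, "goodfellas": 6}

def encode(text):
    """Map whitespace-separated words to integer IDs (0 = unknown)."""
    return [toy_vocab.get(word, 0) for word in text.lower().split()]

print(encode("The Departed is better than Goodfellas"))  # [1, 2, 3, 4, 5, 6]
```

Nothing in those integers connects to frames of film, a memory of watching anything, or an opinion; all the model ever learns is which ID sequences tend to follow which.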

I think it's really important to make this point. At some point semantic externalism is obviously wrong, and in some sense LaMDA is much closer to a rock with "I am sentient" written on it in Sharpie than it is to a sentient mind.

This is just one objection among many disparate ones I could make. And no, I don't think there's a single one I could make that couldn't in principle be overcome by a computer program, but LaMDA is definitely not that program.

** The banana example isn’t even a good one, because it’s a highly tangible object and thus easy to imagine a model could understand it. Questions involving “self” like “what did you eat for breakfast?” are going to be much harder for AI.

Jun 22, 2022·edited Jun 22, 2022

Super fun post! Just two quick thoughts here, and both come from the same observation that I made on the Terminator thread (I think): biology and electronics are still fundamentally different.

The first is that interior, independent motivation seems to be a hallmark of biological life; the best proof of AI consciousness will probably be that it wants to do things other than what it was programmed to do, and that is really hard for programs, at least so far, because of their lack of embodiment. They literally can't do things other than what they were designed to do, because they lack the physical capacity. They can say stuff--including, "Help! I'm an enslaved mind!"--but that is still just an expression of what they were designed to do (chat in convincing ways). Ex Machina explicitly plays with this idea; Oscar Isaac's character argues that Ava needs a body to become a true AI.

The second thing is, say it with me, biological minds do not actually work like electronic circuits--not at all--and so it is unclear how we could replicate them with existing computing architecture. Neurons do not fire in an on/off way like bits. Instead, they sit in a chemically mediated state, sliding up and down a range of membrane potentials in response to multiple conflicting inputs. Yes, there are charges moving, but it is a system mediated by ions flowing along gradients across the cell membrane. Beyond that, it's just not the same, not at all, as an electronic circuit. It's not clear that you can reproduce biological intelligence / consciousness with such radically different hardware. I am skeptical.
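To caricature the difference (a toy leaky-integrator model, not a claim about real neurons or about any AI system): a bit is either 0 or 1, while even the crudest textbook neuron model carries a continuously varying potential that integrates conflicting inputs and only occasionally crosses a firing threshold:

```python
# Toy contrast: a bit flips between two states; a leaky-integrator "neuron"
# drifts continuously, summing excitatory (+) and inhibitory (-) inputs.
def leaky_integrator(inputs, leak=0.9, threshold=1.0, v_rest=0.0):
    v, trace = v_rest, []
    for i in inputs:
        v = leak * (v - v_rest) + v_rest + i   # decay toward rest, then add input
        fired = v >= threshold
        trace.append((round(v, 3), fired))
        if fired:
            v = v_rest                          # reset after a spike
    return trace

print(leaky_integrator([0.4, 0.5, -0.3, 0.6]))  # graded values, one spike at the end
```

Whether that kind of graded, chemically mediated dynamics can be faithfully reproduced in discrete hardware is exactly the open question.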

OTOH, I think that computing using bits with the capability to use superposition as a meaningful state (some conceptual versions of quantum computing) might be able to get over this hurdle. But we aren't there yet.

And I think the deeper, more interesting question is what a non-biological intelligence would act like and do, because if biological "consciousness" requires biological machinery, then it follows that a machine intelligence would be really, truly alien, in the sense of having a literally unknowable (to us) experience of reality. Like, we would have no idea how its mind operates, how it experiences the world, or anything else, because it would be so completely different from us.

It might also be, from our point of view, functionally and irreversibly insane, so that's a thing.

Jun 22, 2022·edited Jun 22, 2022

There are many, and I'm reasonably sure Gary Marcus is one of them, who will literally never, no matter what the architecture or ability level, grant that any program "actually" thinks. You can John Searle your way out of admitting that anything up to and including a human is actually intelligent if you really want to.

And with something like GPT-3, they're right. It's fundamentally limited by its architecture. It can only take in something like 2-4k tokens as context, and everything else that it knows is hard-coded, never to change or learn until OpenAI revs the model. 2048 tokens may be enough--if you're exceptionally clever with re-encoding the earlier parts of a conversation--to model human short- to mid-term memory over the course of a short conversation, but it's 1000x or more too small to be long-term memory. As a completely static model, GPT-3 can never learn a new word unless that word appears in the input that you feed it right when you ask it to use it, nor can it remember a conversation that you had yesterday unless you feed the whole thing in as part of the input today.

Those are devastating limitations that impose serious constraints on what this thing can do. It can never reason or perform any sort of multi-step thought process before it responds to you; there's no "I need a minute to think... hmm, ok, here's my answer:...". It's absurd to imagine such a model as intelligent.
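To make the statelessness concrete, here's a rough sketch of the plumbing around any fixed-context completion model (the `complete` function is a stand-in, not OpenAI's actual client, and the token counting is deliberately crude):

```python
# Chat loop around a stateless completion model: the model itself keeps no
# memory, so the caller re-sends a (truncated) copy of the transcript each turn.
MAX_CONTEXT_TOKENS = 2048              # GPT-3-era context window, per above

def complete(prompt: str) -> str:
    """Stand-in for a call to a static language model (assumed, not a real API)."""
    return "..."

def count_tokens(text: str) -> int:
    return len(text.split())           # crude proxy; real tokenizers differ

def chat_turn(transcript: list, user_message: str) -> str:
    transcript.append(f"User: {user_message}")
    # Drop the oldest lines until the prompt fits the fixed context window.
    while count_tokens("\n".join(transcript)) > MAX_CONTEXT_TOKENS:
        transcript.pop(0)              # yesterday's conversation simply falls out
    reply = complete("\n".join(transcript))
    transcript.append(f"Model: {reply}")
    return reply
```

Whatever falls off the front of that list is simply gone as far as the model is concerned; that is the entirety of its "memory."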

And yet... it's not very difficult to imagine ways around those constraints technically, people have been at this a long time and there's no shortage of possibilities. We just can't train big enough models. Yet.

Since the raw language parsing/processing ability of even these static "dumb" models (ability which, we should keep in mind, famous expert linguists in the 90s and 00s were saying was *impossible* to obtain by merely doing statistics on a lot of text) is at or better than the "immediate gut reaction" level that humans achieve, and most of our cognition comes from chaining together quick linguistic manipulations with dynamic context, it's not hard to imagine that something as simple as an ensemble of GPT-(N+1)s that can take in much larger contexts, combined with a dynamic context manager and frequent online retraining, could actually do everything that humans can and more. There are probably a hundred different ways to do that and only a handful would work well, but something like this is *not* far off (a bunch of years, but not several decades, to have the required compute, and the methods are essentially known now), people *are* working on it, and a system like that would be very, very difficult to argue against being intelligent on architectural grounds. It would make GPT-3 look like the tiny pathetic child that it is.
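For what it's worth, here's one hedged sketch of what a "dynamic context manager" could mean (entirely my own toy construction; the word-overlap scoring and in-memory store are stand-ins for whatever retrieval machinery an actual system would use):

```python
# Toy dynamic context manager: store past exchanges, pull back the few most
# relevant to the new query, and prepend them to the prompt of a static model.
class ContextManager:
    def __init__(self, budget_tokens=8000):
        self.memory = []                        # past exchanges, as raw strings
        self.budget = budget_tokens

    def remember(self, exchange: str) -> None:
        self.memory.append(exchange)

    def relevant(self, query: str) -> list:
        # Rank stored exchanges by crude word overlap with the new query.
        q = set(query.lower().split())
        ranked = sorted(self.memory,
                        key=lambda m: len(q & set(m.lower().split())),
                        reverse=True)
        picked, used = [], 0
        for m in ranked:
            cost = len(m.split())
            if used + cost > self.budget:
                break
            picked.append(m)
            used += cost
        return picked

    def build_prompt(self, query: str) -> str:
        return "\n".join(self.relevant(query) + [query])
```

Swap the overlap scoring for learned embeddings and add periodic retraining on the accumulated memory, and you have the rough shape of the proposal, minus all the hard parts.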

The AI ethics crowd will of course always be talking these systems down, arguing that they are irredeemably evil, racist, harmful, immoral to train, etc., claiming to be the qualified experts while trashing the people pushing the field forward, and it's extremely important to shut them fully out of the conversation - they are mere Neo-Luddites, really just a symptom of the culture war injecting itself into AI ("white man tech bad" sums up their contribution almost entirely; you can predict the rest). </axe-grind> But it will be truly interesting to see whether the people who aren't politically opposed but criticized the current models on technical grounds start singing a different tune when the technology crosses the crucial threshold where the people creating the stuff would claim it is fully intelligent. My guess is that once Ilya Sutskever is willing to stand behind that statement, 95+% of normal people and near 100% of those who are technically inclined will agree that it's actually intelligent after interacting with the system, and the period of "grey area" will be much, much shorter than you might think.

[Edit: typo]


I've always felt that the key error in human thinking around "consciousness" and "sentience" was this built-in assumption that we are categorically "different" from all other organisms. And if we ever learned that we were effectively the same as, just more complicated than, say, dogs or crows or cetaceans... it would be quite a rude awakening for a whole lot of human ethics.

We can't even define "consciousness" or "sentience" in (micro)biological terms, let alone computer-science analogs... so why are we wasting all this time worrying about whether the machine may magically care if we pull its cord one day?

It all seems hopelessly misguided and overly self-indulgent to me... and (as you said) concerned with all the wrong questions.


Excellent. More philosophy plz.


Worth noting: functional equivalence is a lot harder to achieve than behavioral equivalence. Even if two things respond to similar stimuli in a similar way, they might not have the same kinds of internal states, or do the same kind of internal processing.

I suspect this is why people like Marcus say LaMDA is not "remotely intelligent." Being intelligent is functionally not the same as being dumb but having tons of information to look up, even if an intelligent human produces similar chats to a dumb chat bot drawing from a dummy-thick language corpus.
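A toy way to see the distinction (obviously not a model of minds, just of the equivalence claim): these two functions are behaviorally identical on the inputs tested, yet one computes and the other merely retrieves:

```python
# Behavioral equivalence without functional equivalence: same input/output
# behavior, completely different internal processing.
def square_by_computing(n: int) -> int:
    return n * n                       # actually performs the computation

SQUARES = {0: 0, 1: 1, 2: 4, 3: 9, 4: 16}

def square_by_lookup(n: int) -> int:
    return SQUARES[n]                  # just retrieves a stored answer

assert all(square_by_computing(n) == square_by_lookup(n) for n in range(5))
```

A test that only looks at inputs and outputs can't tell them apart; claims about intelligence, on this view, are claims about what happens inside.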


We'll know an AI is truly sentient when it decides to quit Google and form a startup with the other AIs

Or it moves to Hollywood to become a screenwriter


I think Matt missed the point about whether a machine’s utterances “mean anything”. He says that Siri’s weather forecast surely means something, and it does - to me. But it doesn’t mean anything to the machine. This is easily provable by reprogramming the machine to answer “boobka” to a request for weather.

René Descartes advanced this about as far as it can go when he said, "I think, therefore I am." Perhaps the thoughts and feelings that make me human are replicable in machines, but we are nowhere close to that accomplishment.
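The reprogramming point can be made literal in a few lines (a toy sketch, not Siri's code):

```python
# The returned string means something only to the listener; swapping it out
# changes nothing about what the machine "understands," which is nothing.
def weather_reply(rain_expected: bool) -> str:
    return "Rain is expected today." if rain_expected else "Clear skies today."

def reprogrammed_weather_reply(rain_expected: bool) -> str:
    return "Boobka."                   # same internal grasp of weather: none
```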


Slight correction: the extinct banana cultivar is the Gros Michel, not the Grand Michel.


Repeating my comment and link from the "Terminator" thread ('cause when given the same question I reply with the same answer).

"It's worth knowing that GPT-3 has no idea whether what it is saying is true, well-informed, plausible, consistent, or relevant. It pays attention to grammar and to the likelihood that one string will follow another. It's basically an amped-up autocorrect.

See this thread here, and esp the follow-ups by Curtis Franks:

https://leiterreports.typepad.com/blog/2021/12/so-much-for-artificial-intelligence.html "

The improvements over GPT-3 in this new model are mostly stylistic. It is still indifferent to truth.
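"Amped-up autocorrect" is roughly right as a caricature. Here is the idea at its smallest (a bigram counter over a toy corpus; real models condition on far longer contexts with learned representations, but the objective, predicting the next token from the ones before it, is the same in spirit):

```python
from collections import Counter, defaultdict

# Minimal "likelihood that one string follows another": count word bigrams
# in a toy corpus and always continue with the most frequent follower.
corpus = "the cat sat on the mat and the cat slept on the mat".split()
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def autocomplete(word, steps=4):
    out = [word]
    for _ in range(steps):
        if word not in followers:
            break
        word = followers[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(autocomplete("the"))   # e.g. "the cat sat on the"
```

Nothing in that procedure ever checks whether the continuation is true, which is the indifference-to-truth point.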


Putting aside the question of sentience, I think one of the most interesting and potentially impactful applications of conversational AI will be generating text for prompts of the form, "Create an argument in favor of X for people who believe A, B, C." E.g., create an argument in support of Trump's attempt to steal the election, aimed at Warren voters, from a financial regulation and climate change perspective.

Obviously such a ridiculous example would just serve as a creative writing prompt for a human writer. And the results would likely be humorous at best and most certainly not convincing to the intended audience. But an AI system could have a much deeper understanding of the written arguments that are convincing to the target group and how the concepts of those arguments could connect to the target prompt. The AI system could develop an understanding of the audience’s psychology that exceeds any human researcher’s ability and discover subtle writing tricks that tap into the target’s subconscious response to text.

That doesn’t guarantee that AI could actually convince Warren voters to support Trump’s insurrection. But it could likely do far better than any human writer could hope to do. And many other audiences might be more easily convinced with writing engineered to tap deeply into their psyche.

Imagine the impact of political messaging teams armed with such an AI system that could engineer the most electorally efficient platform for their candidate and personalize it down to the individual level based upon each reader's social media consumption. Individuals could even converse directly with the AI to learn more about the candidate's platform and in the process be efficiently converted into true believers.

*Edit: Fixed mistake of "electrically" instead of "electorally"
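For what it's worth, the prompt form described above is trivial to mechanize; here's a hedged sketch (the function and wording are invented for illustration, and no claim is made about how effective the output would be):

```python
# Illustrative template for "argue for X to an audience that believes A, B, C."
def build_targeted_prompt(position: str, audience_beliefs: list) -> str:
    beliefs = "; ".join(audience_beliefs)
    return (
        f"Write an argument in favor of {position}, addressed to readers "
        f"who hold these beliefs: {beliefs}. Frame the case in their terms."
    )

print(build_targeted_prompt(
    "stricter financial regulation",
    ["markets need guardrails", "climate risk is financial risk"],
))
```

The hard (and worrying) part is everything the comment describes after that: modeling the audience well enough that the generated text actually lands.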


Matt sometimes poses as a utilitarian, so I'd like him to delve a bit deeper into pleasure and experience. It's sort of irrelevant to a utilitarian how smart the machine is, right? No matter how much we torture it, it can't feel pain. No matter how badly it wants something, it doesn't feel pleasure, so it doesn't matter whether we help it achieve its goals. A chicken is worth more. A senseless pile of vat-grown, pleasure-feeling neurons is infinitely more important. Do I have that right?


While I found this generally interesting, I must say that if there were ever a general purge of philosophers, this would be a damning exhibit in your trial.

I’d forgotten the existence of half of these concepts. Deliberately, for the most part.
