Discussion about this post

Miles:

Having spent some time with fellow engineers discussing this whole LaMDA mess, we feel the real thing to watch for is whether the "AI" starts showing autonomy and directing what it does, or doesn't do.

Consider how early in a child's development they will say "NO. I don't wanna."

THAT is where we see them headed down a path of autonomy and development as a real person, toward their own hopes & dreams & objectives. And in that sense, these deluxe chatbots show nothing like personhood. (Thankfully.)

Matt Hagy:

The concept of sentience is challenging even without focusing on AI systems. Consider the debate around animal rights and welfare, which hinges on the subjective experience of these creatures in different conditions. While animals cannot communicate their experience using human language, we can observe that certain actions cause great distress and pain. We can even explain some of the neural and hormonal processes associated with those experiences and show that they are highly similar to what occurs in human biology. That leads us to care about animal treatment, and no similar biological analogy exists for AI systems.

Sentience is also a consideration in human concerns like the abortion rights debate and end-of-life care. Ezra explored some of these challenges in a recent podcast episode with law professor Kate Greasley, who has extensively studied and written on the legal and moral philosophy of abortion. [1] I found their discussion deeply interesting as they explored the question of what makes something human. A common theme was that in philosophical terms personhood may be something of a continuum, but for practical and legal reasons we need hard dividing lines.

[1] https://www.nytimes.com/2022/05/31/opinion/ezra-klein-podcast-erika-bachiochi.html

