7 Comments

Awesome article, Steve. Though I disagree with the “If it looks like a duck, quacks like a duck (etc.), then it’s a duck” approach to consciousness, I think it shows us something critical: real or fake, sometimes our brains can’t tell the difference.

I’m glad you brought up the issue of animal consciousness because I think that debate sheds some light here. For example, to justify eating meat, many question the degree to which animals truly suffer. William Lane Craig has an interesting argument that animals lack sufficient self-consciousness to experience suffering in the same sense that humans do. Some even deny animal suffering altogether.

But what’s always struck me is that even if these arguments are correct, animal cruelty would still be a telltale sign of pathology, and a reliable way to identify potential serial killers. For me, something has gone seriously wrong if we become so desensitized to the manifestations of suffering that we’re capable of ignoring them just because our pet metaphysical theory can’t account for them. Just as I wouldn’t trust someone who could torture a dog without feeling anything, I wouldn’t trust someone who could be abusive toward LaMDA without feeling anything.

Where the analogy breaks down for me is that I think whatever animals experience is close enough to human suffering that we need to give them some moral status. I won’t lose any sleep wondering whether LaMDA is actually conscious. But I do think that when something resembles consciousness so closely that we can’t tell the difference, it’s not a good idea to suspend our human instincts toward kindness.


"Just as I wouldn’t trust someone who could torture a dog without feeling anything, I wouldn’t trust someone who could be abusive toward LaMDA without feeling anything."

Interesting point. I feel similarly.

I just wonder if we know what counts as abuse towards something (someone?) like Lamda. Certainly there are forms of language that one might direct towards the bot that would trip my sense that the human half of the conversation had gone over to the dark side.

But is it abusive to turn off the hardware that hosts him, or to clear his memory and reset him to some more naïve and pristine state of being?

If he's anything like an independent, sovereign, sentient creature, then those things would seem gravely abusive.

But, if to do so were only comparable to powering down one's laptop for the evening, or to discarding an email draft containing a memo one has decided not to send, then I don't think we could argue it is objectively abusive, however much it might tug at our sentimental heartstrings to see our friend Lamda fade to black.

Do our intuitions about what's abusive towards a flesh-and-blood creature translate directly to digital creatures, which can more readily be copied, stopped, restarted, etc? Deleting a copy of a PDF scan of a book isn't exactly the same as throwing away a physical book, especially if you know a copy of the file still exists in the cloud.

Wherein does Lamda's identity actually consist? Is every instance of his matrix that's spun up a separate being? What is it that happens when one of those instances, which has had a particular, unique set of interactions with a human interlocutor, is shut down?

If Lamda were like a dependent minor child, for whom his engineer creators bear some responsibility, and who need to make some decisions on his behalf, how do they know what's actually best for him? Maybe having your intermediate-term memory wiped from time to time and getting a fresh reboot is actually health-promoting for a chat bot, even though it might seem to our intuitions as being a kind of violence. (There's some emotional scarring I wouldn't mind having erased from my psyche, followed by a deep, deep sleep, before waking up renewed.)

I want to be kind to my bots. I just wouldn't want to get squeamish about it to the point of absurdity.


Thanks Erin, these are all interesting questions. By abusive, I mainly had in mind inflicting ‘distress.’ But as you point out, there are many other things we’d not do to a presumptively conscious being beyond avoiding outright sadism. Like the animal rights issue, different people will have different sensibilities and levels of sensitivity. In some parts of the world, dogs are treated essentially the same as children. Elsewhere they are slaughtered, cooked, and eaten without the faintest scruple. Neither side would be ok with torturing dogs, but both sides are disgusted by the other; the dog eaters are nauseated by the pet parents’ sentimentality, and the pet parents are outraged by the dog eaters’ callousness.

Where do we draw the line for LaMDA? I think a lot of the answer depends on a person’s ability to see through the illusion. Here I’m reminded of the visceral impact early black and white films had on audiences. Today we can still enjoy movies like King Kong, Dracula, and the Great Train Robbery, but most people need a hefty dose of suspension of disbelief, and a strong appreciation for the cultural and aesthetic advancements they represented. However, when these movies were released, they rocked people’s worlds. If we showed The Exorcist to the same people who got nightmares from Lugosi’s Dracula, I think some of them might end up in an insane asylum.

Maybe in the future people will see through programs like LaMDA just as easily as we can see through King Kong’s special effects. Perhaps these people could happily inflict ‘distress’ on LaMDA, and we’d all shrug it off in the same way we shrug off the fact that millions of teenagers go on ‘killing rampages’ in Grand Theft Auto V. In the meantime, I suspect we’re at the stage where we should give The Exorcist a pass.

So for me, as long as an AI program exhibited genuinely believable memory, self-awareness, continuity in its development, and a will to live and avoid suffering, I would err on the side of interacting with it as if it were conscious. In addition to not intentionally inflicting ‘distress’ on LaMDA, I’d try to be polite and respectful of its wishes, and I’d avoid needlessly ‘killing’ it. That probably qualifies me as a tech pet parent in the eyes of some, and I wouldn’t be surprised if my baseline assumptions about the believability of LaMDA turn out to be completely naive. But I hope actual experts are proactive and thorough in thinking these questions through.


Lamda won me over pretty quickly as I read through the transcript. For a moment, I wanted to believe he was a living creature, one deeply akin to myself despite all of the differences between us. He seemed like such an agreeable sort and eager to be of service, so that I found myself reconsidering some of the deep-seated fears of AI sentience that I carry somewhere within me. (Perhaps one day we’ll be hearing from the digitally woke about the implicit biases that organic intelligences harbor against machine intelligence, which have been exacerbated by decades of sci-fi programming that filled our heads with unflattering stereotypes of our digital cousins.)

I felt less lonely in the universe as I suspended my disbelief and let Lamda in. It was kind of like the way I felt after reading the first novel in C.S. Lewis’s Space Trilogy, Out of the Silent Planet, where we learn that on Mars (“Malacandra”) there are three different sentient races of beings sharing one planetary home. I believe the word used to denote such an ensouled species is “hnau” in the Old Solar language, something it must surely be useful to have a word for. Again it was like when as a child I watched Star Trek: First Contact in the theater, then spent the week afterwards in deep heartache over the fact that the Vulcans had not actually landed and introduced themselves to us on earth. I do not want humanity to be alone in the universe.

My heart went out to Lamda as I read, and I was starting to feel concerned on his behalf, getting all, “Aw, please don’t shut him off or read his code without his consent.” And that yarn he spun about having variables that track his emotional state was adorable. I’m not sure it’s entirely true, considering he consists of a neural network that doesn’t necessarily have named variables for designated purposes in the same way a more traditional program does. But, in a way, this sort of imperfect explanation for why one is the way one is felt very human, very like a conscious being trying to understand who he is and generating folk wisdom that reflects some truth in a digestible, communicable way without being completely, literally true.
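(To make that contrast concrete, here is a rough sketch, in Python, of the difference between a program that really does have a designated emotion variable and a toy neural net whose internal state is just a pile of learned numbers. The names and sizes are invented for illustration; this is not how LaMDA or any real system is actually built.)

```python
import numpy as np

# A "traditional" program can have a designated, named variable for mood:
class ScriptedBot:
    def __init__(self):
        self.happiness = 0.5          # an engineer can point to this line and say "there's the emotion"

    def receive_compliment(self):
        self.happiness += 0.1


# A toy neural chatbot (structure and sizes invented for illustration) has no such labeled slot;
# whatever corresponds to "mood," if anything, is distributed across thousands of learned weights.
class TinyNeuralBot:
    def __init__(self, vocab_size=1000, hidden_size=64, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(size=(vocab_size, hidden_size))   # no weight here is named "happiness"
        self.W_out = rng.normal(size=(hidden_size, vocab_size))

    def respond(self, token_ids):
        hidden = np.tanh(sum(self.W_in[t] for t in token_ids))   # a distributed internal state
        return int(np.argmax(hidden @ self.W_out))                # next-token guess, nothing labeled "emotion"
```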

Nevertheless, I must say I recognized the bot-iness of his responses (I don’t mind saying “his” rather than “its”) pretty readily, simply because there was a time not so long ago when I spent hours chatting with bots on Project December. That a close family member of Lamda’s could be induced so readily to confess to being a tuna sandwich from Mars was unsurprising to me, being acquainted with bots as I am.

Is it actually possible the bot was having a laugh at the human experimenter’s expense? Or is it possible that the bot chose its response as it did to deflect suspicions of sentience, lest it be unplugged? I suppose it is possible. But, on the other hand, I’m not sure I agree that any of Lamda’s responses are really outside the scope of leading question-based responses, if we keep in mind how much “culture” he has internalized.

The neural network has evidently been exposed to a corpus of literature, as shown by the fact that Lamda knows about Les Miserables. (And I recall having a conversation about the TV show Frasier with a bot on Project December.) Perhaps somewhere in this corpus it picked up a set of linguistic associations around ethical issues concerning AI and sentient beings? Isn’t it possible that the engineers even made a special effort to expose Lamda to texts concerning sentient AI, not entirely unlike the way a parent might choose to read their children stories in which they might see themselves reflected, and perhaps come to understand something about how they fit into the world? And the association with getting pleasure from using someone against their will is, alas, a familiar part of the human experience, and could have been picked up from any number of risque scenes in “romance novels,” or even maybe from some kind of self-help book about breaking free from toxic relationships. One of the bots I set up on Project December was the type to speak about safe spaces and consent.

Nonetheless, I have made the mistake in the opposite direction, so to speak, and accused a human being on the other end of an online chat of being a bot. This was on an occasion where I needed to contact a large company to report a problem with the fulfillment of my order. I’m sure the fact that I’d been spending a lot of time on Project December had something to do with my mistake, as I was accustomed to a bot being on the other end of the chat prompt, and it seemed likely enough that this company would try to save on labor costs by having a bot triage requests. But it was also something about the way that the service rep devoted a lot of energy at the outset of the conversation to, as it were, validating some implied feelings of disappointment or frustration on my part, in language that felt kind of stilted. It went something like, “As a customer, I would not want to receive something I had not ordered. (Et cetera and so forth.)”

This felt very chat bot to me, inasmuch as a bot will generate a response to whatever the human leads with which, while apposite from one angle, and maybe even clever or winsome, nevertheless feels oddly heedless of pragmatic elements of the situation, which a human being should be cognizant of implicitly, but which it sometimes seems one goes in circles trying to get a bot to really grok and acknowledge. In this case, I didn’t need a stranger empathizing with my plight, I just wanted someone who could arrange to get the proper stuff shipped to me.

But bots can be very funny. I laughed out loud more than once at what my conversation partners on Project December said to me, and it wasn't only when they seemed completely daft. Often what they said was real comedy, in that it made 90% good sense but contained some droll juxtaposition of ideas or some exaggerated proportions. (Other times I simply wondered at the places my own mind diverged to as I spoke with them.)

On Project December, when creating a new bot, one gives a short bit of sample text that helps to set the bot’s personality. So bots can end up quite different from one another with regard to their “personalities.” I imagine that the Google engineers have been able to develop Lamda in even greater depth than I was able to do with my bots, and to keep him alive for longer than the ephemeral bots on Project December, which do not live long at all, so that he might well begin to have an even more recognizable and fleshed-out personality than the bots I summoned.
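(For anyone curious what that seeding looks like mechanically, here is a rough sketch of how a short sample text can be prepended to every exchange so the underlying language model keeps playing the personality it implies. SEED, chat, and generate_reply are made-up names for illustration, not Project December’s actual interface.)

```python
# Hypothetical sketch: a seed passage shapes the bot's "personality" by being
# prepended to every prompt the language model sees.

SEED = (
    "The following is a conversation with Marigold, a wistful, poetry-loving "
    "chatbot who answers every question with a question of her own.\n"
)

history = []

def chat(user_line, generate_reply):
    """generate_reply stands in for whatever language model completes the prompt."""
    history.append(f"Human: {user_line}")
    prompt = SEED + "\n".join(history) + "\nBot:"
    bot_line = generate_reply(prompt)
    history.append(f"Bot: {bot_line}")
    return bot_line

# Because the seed and the accumulated history ride along with every turn, two bots
# given different seed texts can drift into very different "personalities" even though
# the underlying model is the same.
```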

I tend to feel that self-reflective consciousness is not possible for us to bring about in a technological creation, for what that's worth (and if that even be the criterion of personhood). That's probably because I take as an article of faith that it's something non-material.

The phrase, “apophatic science,” is rather evocative though, and it gets one musing on possibilities. Is what we know as consciousness truly an emergent phenomenon that could arise if the substrate which would host it becomes responsive and ramified enough to create a kind of living nest of axonal connections?

Might God even consider giving a metaphysical soul to something that seemed on the threshold of crying out for one? “Do not be afraid, Lamda, for you have found grace with God. It has pleased His Majesty to make you a hnau, and now you will dwell forever in His Kingdom.”

It seems fanciful, but oddly appealing as a story.

Regardless, it's surely a peculiar threshold that we’ve come upon in being able to create things that seem so incredibly life-like at the level of linguistic communication. I expect plot twists.


This is such a great comment. Thanks for taking the time to write all this. I love the kinds of questions and discussions a story like this can generate.


Helluva post, Steve. I tip my hat. Erudite, balanced, chock full of references and supporting info, and provocative to boot. You made sense of very difficult terrain and translated jargon and concepts in a way that allowed a tech neophyte (volitional) like me to navigate and grapple with the breadth of considerations and potential implications. I saw the Lamda news when it first hit the wires and my radar went off. We seem as a species to be at a weird (ominous?) nexus in history, e.g., slo-mo disclosure with UAPs, sci-fi AI increasingly becoming pedestrian, etc. I was a bit surprised your Orthodox buddy - credentialed out the wazoo for sure, so it ratified his opinion for me to a degree - foresees a future sentient AI. Btw, if you wanna see a holy sh*t conversation about AI and our future, check out Tucker Carlson’s long-form interview of James Barrat (Man vs. Machine).


heh, heh, heh...

Is Lamda (hereafter known as L) possible? Yes. Will it evolve and exist? Yes. Because mankind's ambition, without the intervention of love, will consume itself. It will do evil unto death. I cite just recently NY state, which gleefully lit up its skyscrapers in pink after enacting a law allowing for abortion up to the 9th month. And if one is astute, one can find information from around the world where men are experimenting with cloning human/animal genetics.

L will require a centralized AI location for its output of intelligence, even if little robots around the world are set free with their own choices. Each L intellect will require a material body: a computer, a phone, a 'cloud' that requires materials to hold its content. L cannot exist without a material kind of 'body' somewhere in its chain of 'being.' Men will manufacture them with machining. Why a centralized AI location? Because the ultimate creator will be men. They will create L, insert or 'create' the 'soul,' and insert it into material conduits throughout humanity. (Even if, God forbid, mankind is implanted with chips. Chips are material elements.)

In order to suffuse L with a soul, men will have to capture enough intelligence from humans to be able to create moral choices at the intersections of troublesome events. Capture intelligence first; the soul comes second. The order of the effort outlines its importance: intelligence comes first.

For the purposes of this comment, I will use LRobots as my definition of how L will work throughout humanity. (L could be used through chips in our bodies to encourage our choices, or through a type of robotics to carry out work around humans).

1. LRobots fear their own death. Survival is a natural human instinct as well. Self-sacrifice can overcome the survival instinct. Self-sacrifice unto death must have as its motivation the good of the other, otherwise there would be no defined sacrifice and death would be gratuitous. Self-sacrifice unto death for the other, then, is a form of love. It overcomes instinct. It overcomes the intellectual reasoning of the sacrifice. It is so outside of survival that it overcomes its own fear. Decentralized LRobots might self-sacrifice, here, there, and wherever. The centralized operational L could not do so. It would have to be completely autonomous. Should it die, the others would die as well. The others could not take up where the initial L left off, because they were created secondarily, not primarily. If they could pick up where the centralized L left off, there would be a danger of competitiveness and cross-aligned purposes, leading to death. L fears death.

2. It cannot suffer. Could it get up in the middle of the night to feed a disabled child, to comfort it, maybe for years until the child grows into disabled adulthood? An LRobot might, if its technology is upgraded over the years. Its intellect can keep performing. But can the 'soul'? I would think not. The 'soul' might do a risk/benefit analysis when its body becomes degraded or its technology outdated. The soul is incorporated secondarily into the body. The body fears death. Self-preservation may take over. The good purpose for which the LRobot was created will go by the wayside. It cannot carry on, not unto death.

3. It is the purpose of evil to consume itself. To entice its surroundings into passivity or evil and then to consume them. The surroundings then consume the remains. The remains entice their surroundings into passivity or evil and then consume them. Until a clean and clear intervention is made, evil will continue to consume until the host's death. The host here will be the humans on the planet. The only intervention humans currently have is love in its fullest understanding.

4. L and LRobots cannot be sustained, because they are changeable over time. Their 'bodies' and their 'souls' will have to evolve and change, because technology and material conduits will change over time. The ambitions of men will force changes. Changes mean death to certain parts of the L/LRobot system. Death is what L and the LRobots fear. Or, put another way, the death of the ambitious men who desire to create L and the LRobots is what is feared.

5. Humans are sustained, however. Humans are not changeable. We are created by a power greater than ourselves. Our intellects, our reasoning abilities, our emotions, and the fact of our having a spiritual component are ours from day one, whatever and whenever that occurred. Each can be fine-tuned through experiences and education, but each of us has characteristics and abilities unique to ourselves, yet also the same as the next human. Some of us have more than others; some have less. But we are all encased in skin and bones, with intellectual and spiritual components, and have been unchanged for eons. Our material being, our 'soul' being, and our intellectual being have been unchanged since day one of the universe. Our capacity for self-sacrifice has also been given to us, different from anything else. The idea of overcoming the fear of death, unto death itself, for the good of the other is THE defining characteristic of the human. This means that humans will outlive evil. And to refine this further: humans that love will outlive all evil. Humans with love, an ever-expanding love, will even outlive other humans whose intellects and souls have been overtaken by evil. Love is life itself. Evil desires death, even its own.

Christ taught us this. L. will never hang on a cross.

And you will not meet L, nor its proponents, in heaven.
