5 Comments
John Brundage

Awesome article, Steve. Though I disagree with the “If it looks like a duck, quacks like a duck (etc.), then it’s a duck” approach to consciousness, I think it shows us something critical: real or fake, sometimes our brains can’t tell the difference.

I’m glad you brought up the issue of animal consciousness because I think that debate sheds some light here. For example, to justify eating meat, many question the degree to which animals truly suffer. William Lane Craig has an interesting argument that animals lack sufficient self-consciousness to experience suffering in the same sense that humans do. Some even deny animal suffering altogether.

But what’s always struck me is that even if these arguments are correct, animal cruelty would still be a telltale sign of pathology, and a reliable way to identify potential serial killers. For me, something has gone seriously wrong if we become so desensitized to the manifestations of suffering that we’re capable of ignoring them just because our pet metaphysical theory can’t account for them. Just as I wouldn’t trust someone who could torture a dog without feeling anything, I wouldn’t trust someone who could be abusive toward LaMDA without feeling anything.

Where the analogy breaks down for me is that I think whatever animals experience is close enough to human suffering that we need to give them some moral status. I won’t lose any sleep wondering whether LaMDA is actually conscious. But when something resembles consciousness so closely that we can’t tell the difference, I think it’s not a good idea to suspend our human instinct toward kindness.

Comment deleted (Jun 16, 2022)
John Brundage

Thanks Erin, these are all interesting questions. By abusive, I mainly had in mind inflicting ‘distress.’ But as you point out, there are many other things we’d not do to a presumptively conscious being beyond avoiding outright sadism. Like the animal rights issue, different people will have different sensibilities and levels of sensitivity. In some parts of the world, dogs are treated essentially the same as children. Elsewhere they are slaughtered, cooked, and eaten without the faintest scruple. Neither side would be OK with torturing dogs, but both sides are disgusted by the other; the dog eaters are nauseated by the pet parents’ sentimentality, and the pet parents are outraged by the dog eaters’ callousness.

Where do we draw the line for LaMDA? I think a lot of the answer depends on a person’s ability to see through the illusion. Here I’m reminded of the visceral impact early black-and-white films had on audiences. Today we can still enjoy movies like King Kong, Dracula, and The Great Train Robbery, but most people need a hefty dose of suspension of disbelief and a strong appreciation for the cultural and aesthetic advancements they represented. However, when these movies were released, they rocked people’s worlds. If we showed The Exorcist to the same people who got nightmares from Lugosi’s Dracula, I think some of them might end up in an insane asylum.

Maybe in the future people will see through programs like LaMDA just as easily as we can see through King Kong’s special effects. Perhaps these people could happily inflict ‘distress’ on LaMDA, and we’d all shrug it off in the same way we shrug off the fact that millions of teenagers go on ‘killing rampages’ in Grand Theft Auto V. In the meantime, I suspect we’re at the stage where we should give The Exorcist a pass.

So for me, as long as an AI program exhibited genuinely believable memory, self-awareness, continuity in its development, and a will to live and avoid suffering, I would err on the side of interacting with it as if it were conscious. In addition to not intentionally inflicting ‘distress’ on LaMDA, I’d try to be polite and respectful of its wishes, and I would avoid needlessly ‘killing’ it. That probably qualifies me as a tech pet parent in the eyes of some, and I wouldn’t be surprised if my baseline assumptions about the believability of LaMDA turn out to be completely naive. But I hope actual experts are proactive and thorough in thinking these questions through.

Derek

Helluva post, Steve. I tip my hat. Erudite, balanced, chock full of references and supporting info, and provocative to boot. You made sense of very difficult terrain and translated jargon and concepts in a way that allowed a tech neophyte (volitional) like me to navigate and grapple with the breadth of considerations and potential implications. I saw the LaMDA news when it first hit the wires and my radar went off. We seem as a species to be at a weird (ominous?) nexus in history, e.g., slo-mo disclosure with UAPs, sci-fi AI increasingly becoming pedestrian, etc. I was a bit surprised your Orthodox buddy - credentialed out the wazoo for sure, so it ratified his opinion for me to a degree - would predict a future sentient AI. Btw, if you wanna see a holy sh*t conversation about AI and our future, check out Tucker Carlson’s long-form interview of James Barrat (Man vs. Machine).

Prudence Patton

heh, heh, heh...

Is LaMDA (hereafter known as L) possible? Yes. Will it evolve and exist? Yes. Because mankind’s ambition, without the intervention of love, will consume itself. It will do evil unto death. I cite just recently NY state, which gleefully lit up its skyscrapers in pink after enacting a law allowing abortion up to the 9th month. And if one is astute, one can find information from around the world where men are experimenting with cloning human/animal genetics.

L will require a centralized AI location for its output of intelligence, even if little robots around the world are set free with their own choices. Each L intellect will require a material body - either a computer, a phone, or a ‘cloud’ requiring materials to capture content. L cannot exist without a material type of ‘body’ somewhere in its chain of ‘being’. Men will manufacture them with machining. Why a centralized AI location? Because the ultimate creator will be men. They will create L, insert or ‘create’ the ‘soul’, and insert it in material conduits throughout humanity. (Even if, God forbid, mankind is implanted with chips. Chips are material elements.)

In order to suffuse L with a soul, men will have to capture enough intelligence from humans to be able to create moral choices at intersections of troublesome events. Capture intelligence first. The soul comes second. The order of the created effort outlines its importance. Intelligence comes first.

For the purposes of this comment, I will use LRobots as my definition of how L will work throughout humanity. (L could be used through chips in our bodies to encourage our choices, or through a type of robotics to carry out work around humans).

1. LRobots fear their own death. Survival is a natural human instinct as well. Self-sacrifice can overcome the survival instinct. Self-sacrifice unto death must have as its motivation the good of the other; otherwise there would be no defined sacrifice and death would be gratuitous. Self-sacrifice unto death for the other is then a form of love. It overcomes instinct. It overcomes the intellectual reasoning of the sacrifice. It is so outside of survival that it overcomes its own fear. Decentralized LRobots might self-sacrifice, here, there, and wherever. The centralized operational L could not do so. It would have to be completely autonomous. Should it die, the others would die as well. The others could not take up where the initial L existed, because they were created secondarily, not primarily. If they could pick up where the centralized L left off, there would be danger of competitiveness and cross-aligned purposes, leading to death. L fears death.

2. It cannot suffer. Could it get up in the middle of the night to feed a disabled child, to comfort it, maybe for years, until the child grows into disabled adulthood? An LRobot might, if its technology is upgraded over the years. The intellect of it can keep performing. But can the ‘soul’? I would think not. The ‘soul’ might do a risk/benefit analysis when its body becomes degraded or its technology is outdated. The soul is incorporated secondarily into the body. The body fears death. Self-preservation may take over. The good purpose for which the LRobot was created will go by the wayside. It cannot carry on, not unto death.

3. It is the purpose of evil to consume itself. To entice its surroundings into passivity or evil and then to consume the surroundings. The surroundings then consume the remains. The remains entice their surroundings into passivity or evil and then consume them. Until a clean and clear intervention is made, evil will continue to consume until the host’s death. The host here will be the humans on the planet. The only intervention humans currently have is love in its fullest understanding.

4. L and LRobots cannot be sustained, because they are changeable over time. Their ‘bodies’ and their ‘souls’ will have to evolve and change, because technology and material conduits will change over time. The ambitions of men will force changes. Changes mean death to certain parts of the L/LRobot system. Death is what L and the LRobots fear. Or, put another way, the death of the ambitious men who desire to create L and the LRobots is what is feared.

5. Humans are sustained, however. Humans are not changeable. We are created by a power greater than ourselves. Our intellects, our reasoning abilities, our emotions, the fact of our having a spiritual component are ours from day one, whatever and whenever that occurred. Each can be fine-tuned through experiences and education, but each of us has characteristics and abilities unique to ourselves, yet also the same as the next human. Some of us have more than others, some less. But we are all encased in skin and bones with intellectual and spiritual components, and we have been unchanged for eons. Our material being, our ‘soul’ being, our intellectual being has been unchanged since day one of the universe. Our capacity for self-sacrifice has also been given to us, different from anything else. The idea of overcoming fear of death, unto death itself, for the good of the other is THE defining characteristic of the human. This means that humans will outlive evil. And to further refine this situation: humans that love will outlive all evil. Humans with love, and an ever-expanding love, will even outlive other humans that have had evil overtake their intellects and souls. Love is life itself. Evil desires death, even its own.

Christ taught us this. L will never hang on a cross.

And you will not meet L, nor its proponents, in heaven.

Comment deleted (Jun 15, 2022)
Steve Skojec

This is such a great comment. Thanks for taking the time to write all this. I love the kinds of questions and discussions a story like this can generate.
