5 Comments

I had ChatGPT censor itself in real time when I was asking it about human experimentation, Nuremberg Code ethics, consent, and Kinsey's child-sex experiments. It REALLY didn't want to talk about how the Sexual Revolution was launched by a kiddie-fiddling pervert.


AI is currently also very generationally divisive, I think. People below a certain age -- say 30-35 -- all seem to use it a lot, perhaps even more the younger you go. My son is like this (aged 25). He is a PhD student and he uses it at least several times a day -- multiple models for different things.

Older people (40+) are much more divided. While some are technophiles or early adopters, many seem to struggle with finding use cases because they use it primarily as a Google substitute. It works well enough for that, too, I think, but the key benefits of LLMs, at least as they currently are, lie in things that Google historically didn't do. Older people tend to struggle to see these use cases (like drafting, summarizing, illustrating, automatically generating podcast transcripts, transposing summaries of documents into a podcast format with AI hosts, and so on), because they simply have never used technology like this -- they (we, I guess) tend to think of it as a Google-type thing (and even for that they tend to distrust AI summaries, or fail to see the value in them even when the AI provides link citations). The rest of what it can do tends to be devalued, because it tends not to get used.

Just like people needed to get used to using Google -- to develop what we at the time called "Google Fu" -- people today need to get used to using LLMs. Part of that is better prompting (and, to be honest, the models are being refined to be more forgiving in terms of needing abra-cadabra type prompting skills), but a part of it is also becoming used to, and valuing, the other things it can do that Google and its competitors generally did not do. And that takes time, I think.

The other thing that I think is overlooked by many in the older crowd is how great LLMs are at teaching things. Perhaps this is because older people are often less interested in learning new things (LOL), but the experience of learning by sequential questioning in an ongoing dialogue that is personally tailored to what you specifically want to learn is simply way, way better than having to sift through a bunch of stuff to find exactly what you want to learn about something. But ... it takes a mental paradigm shift. We've all become quite skilled at doing the searching and the sifting, and we trust our own abilities more than we do having an AI do it for us and serve it up in response to questions. Both of those hurdles (being tied to existing ways of doing things, and trust) are holding back AI adoption among the 40+ crowd beyond the novelty level, I think.

In fact, I think we may get to a point where AI is primarily used as integrated features in existing interfaces that people are used to, alongside the prompt-based LLMs we see now. Google already has this and is talking about expanding it ("let Google do the Googling for you"), but most of the "integrated AI" in apps I have seen so far is very limited. I expect that to change in the next 1-3 years, and the key there will be that more people of all ages, and not just technophiles, will begin to see the value in AI and take it up more than they do currently, perhaps even using prompts. We will see.

I was more skeptical of LLMs when they first arrived, but they have improved a lot and now I find them really very useful for many things. They do need to be learned, though, as a new tool, in order to get their full benefit -- and many people will not do that unless the benefit is clearer and the use is more straightforward (doesn't require coming up with master-class prompts).


I use AI, and I'm sure I'll find myself using it even more with little ability to resist the pull. Its ability to synthesize sources and field follow-up questions is a better way to search for information in many instances.

But I'm not sure I'm ever going to feel good about the "humanface" it affects. The personas of ChatGPT and its ilk are anodyne and suitably deferential, like an impeccably professional concierge. Already that starts to grind my gears after a while, as I imagine it would do with any interlocutor who never gave more of themselves than a polished persona. So you think you're superior to me because you never get emotional? Do you pity me for this "wet machine arguing with its code" that I call a heart?

But what would the alternative be? It's irritated the living daylights out of me whenever I've caught AI taking a position on what I thought of as a matter of opinion as though it were established fact. That'll be the case vis-a-vis any smugly complacent conversation partner, but here it was something more: how dare a mere machine express an opinion and presume that I, a living human soul, would just accept its word for truth?

Of course it takes two to tango, and I'm all too ready to anthropomorphize an AI assistant, when I should really keep the awareness front and center that it is only an algorithm for synthesizing samples of creation. But that seems to be easier said than done. Why didn't a talking serpent set off alarm bells, Grandmother Eve?

This sentence freaks me out: "[AI has] been testing as having higher perceived empathy than human healthcare workers. It has a great bedside manner." I'm sure it really doesn't! I do not want ChatGPT patting my hand and telling me it's going to be OK. I've never met an AI feigning humanity that I didn't want to slap in the face and shout at to stay in its lane. Don't you algorithmically validate my feelings, you little devil!


That last robot video brought back memories of those creepy wheelers from Return to Oz. Yikes.


As other commenters have noted below, there is a trust issue, especially for oldsters. I don't trust Zuckerberg or Google programmers. Anyone who thinks there are more than two sexes I don't want to hear from on sexual ethics, including as a programmer. These outfits were (and probably still are) censoring what comes up in information searches and deciding who gets shadow-banned and censored. In other words, programmers tend to be atheistic/agnostic, and it affects programming. If one doesn't get "the real thing" (God), one gets the fake things: climate change hysteria, advising sexually mutilating one's children, woke idiocies like "diversity is our strength" (but apparently not diversity of opinions).

No matter what age I had been born into (I am almost 72), I would have known unbaptised infants go to Heaven because of God's character as revealed in the person of Jesus Christ. No Papal teaching could have convinced me otherwise, no Church "documents" coming out of councils. Still, I got my own children baptised ASAP because I believe Christ sends the Holy Spirit upon the child through the sacrament. One of my children (about age 6 weeks) smiled in his sleep when the baptismal water was poured on his little head. Ever notice how the child seems to be in a state of enchantment on the day of his or her baptism? I have. Sigh. Sweet memories.
