The Mirror Speaks: An AI Explains Why the Humanities Still Belong to Humans
I came across a fascinating piece in The New Yorker titled “Will the Humanities Survive Artificial Intelligence?” (Original Source; Paywall-Free Version)
The subtitle is equally thought-provoking: “Maybe not as we’ve known them. But, in the ruins of the old curriculum, something vital is stirring.”
The author is D. Graham Burnett, a historian of science and technology at Princeton University, and he has grown concerned over the way academia is handling the emergence of AI and its application in learning environments:
When I first asked a class of thirty Princeton undergraduates—spanning twelve majors—whether any had used A.I., not a single hand went up. Same with my graduate students. Even after some enthusiastic prodding (“Hey! I use these tools! They’re incredible! Let’s talk about this!”), I got nowhere.
It’s not that they’re dishonest; it’s that they’re paralyzed. As one quiet young woman explained after class, nearly every syllabus now includes a warning: Use ChatGPT or similar tools, and you’ll be reported to the academic deans. Nobody wants to risk it. Another student mentioned that a major A.I. site may even be blocked on the university network, though she was too nervous to test the rumor.
In one department on campus, a recently drafted anti-A.I. policy, read literally, would actually have barred faculty from giving assignments to students that centered on A.I. (It was ultimately revised.) Last year, when some distinguished alums and other worthies conducted an external review of the history department, a top recommendation was that we urgently address the looming A.I. disruptions to our teaching and research. This suggestion got a notably cool reception. But the idea that we can just keep going about our business won’t do, either.
Burnett notes that this head-in-the-sand approach is completely unworkable. Adaptation is the only way to move forward:
[W]e’re in a bizarre interlude: everyone seems intent on pretending that the most significant revolution in the world of thought in the past century isn’t happening. The approach appears to be: “We’ll just tell the kids they can’t use these tools and carry on as before.” This is, simply, madness. And it won’t hold for long. It’s time to talk about what all this means for university life, and for the humanities in particular.
Burnett goes on to talk about some of his own transformative experiences using AI, illustrating the point with an anecdote about a lecture he attended on an ancient, illuminated manuscript. He described his frustration in trying to follow what turned out to be a very high-level talk from an eminent scholar, and how querying ChatGPT during the talk allowed him to come to an understanding of the rare document being discussed, who had done the initial research on it, and so on.
“Was the information perfect?” Burnett asks. “Surely not, but neither is what we get from people. Was it better than the talk I was hearing? By a wide margin.”
He continues:
Increasingly, the machines best us in this way across nearly every subject. Yes, you will hear the true erudites explain that DeepSeek can’t reliably distinguish Karakalpak from adjacent Kipchak-Nogai dialects (or whatever the case may be). This feels to me like pointing at daisies along the train tracks as an actual locomotive screams up from behind. I’m a book-reading, book-writing human—trained in a near-monastic devotion to canonical scholarship across the disciplines of history, philosophy, art, and literature. I’ve done this work for more than thirty years. And already the thousands of academic books lining my offices are beginning to feel like archeological artifacts. Why turn to them to answer a question? They are so oddly inefficient, so quirky in the paths they take through their material.
Now I can hold a sustained, tailored conversation on any of the topics I care about, from agnotology to zoosemiotics, with a system that has effectively achieved Ph.D.-level competence across all of them. I can construct the “book” I want in real time—responsive to my questions, customized to my focus, tuned to the spirit of my inquiry. And the astonishing part is this: the making of books such as those on my shelves, each the labor of years or decades, is quickly becoming a matter of well-designed prompts. The question is no longer whether we can write such books; they can be written endlessly, for us. The question is, do we want to read them?
Burnett talks at length about an assignment he gave to his students “to engage one of the new A.I. tools in a conversation about the history of attention.”
He explains the project as follows:
The idea was to let them take a topic that they now understood in some depth and explore what these systems could do with it. It was also a chance to confront the attention economy’s “killer app”: totally algorithmic pseudo-persons who are sensitive, competent, and infinitely patient; know everything about everyone; and will, of course, be turned to the business of extracting money from us. These systems promise a new mode of attention capture—what some are calling the “intimacy economy” (“human fracking” comes closer to the truth).
The assignment was simple: have a conversation with a chatbot about the history of attention, edit the text down to four pages, and turn it in.
Reading the results, on my living-room couch, turned out to be the most profound experience of my teaching career. I’m not sure how to describe it. In a basic way, I felt I was watching a new kind of creature being born, and also watching a generation come face to face with that birth: an encounter with something part sibling, part rival, part careless child-god, part mechanomorphic shadow—an alien familiar.
Burnett goes through several of the students’ projects, detailing what they did and offering his own observations. He was moved by the outcome of these interactions. The essay is fairly long but well worth reading in full; to save space for what I’d like to share with you next, I will offer only one more excerpt. This is the heart of the thing:
When we gathered as a class in the wake of the A.I. assignment, hands flew up. One of the first came from Diego, a tall, curly-haired student—and, from what I’d made out in the course of the semester, socially lively on campus. “I guess I just felt more and more hopeless,” he said. “I cannot figure out what I am supposed to do with my life if these things can do anything I can do faster and with way more detail and knowledge.” He said he felt crushed.
Some heads nodded. But not all. Julia, a senior in the history department, jumped in. “Yeah, I know what you mean,” she began. “I had the same reaction—at first. But I kept thinking about what we read on Kant’s idea of the sublime, how it comes in two parts: first, you’re dwarfed by something vast and incomprehensible, and then you realize your mind can grasp that vastness. That your consciousness, your inner life, is infinite—and that makes you greater than what overwhelms you.”
She paused. “The A.I. is huge. A tsunami. But it’s not me. It can’t touch my me-ness. It doesn’t know what it is to be human, to be me.”
The room fell quiet. Her point hung in the air.
And it hangs still, for me. Because this is the right answer. This is the astonishing dialectical power of the moment.
We have, in a real sense, reached a kind of “singularity”—but not the long-anticipated awakening of machine consciousness. Rather, what we’re entering is a new consciousness of ourselves. This is the pivot where we turn from anxiety and despair to an exhilarating sense of promise. These systems have the power to return us to ourselves in new ways.
Do they herald the end of “the humanities”? In one sense, absolutely. My colleagues fret about our inability to detect (reliably) whether a student has really written a paper. But flip around this faculty-lounge catastrophe and it’s something of a gift.
You can no longer make students do the reading or the writing. So what’s left? Only this: give them work they want to do. And help them want to do it. What, again, is education? The non-coercive rearranging of desire.
Within five years, it will make little sense for scholars of history to keep producing monographs in the traditional mold—nobody will read them, and systems such as these will be able to generate them, endlessly, at the push of a button.
But factory-style scholarly productivity was never the essence of the humanities. The real project was always us: the work of understanding, and not the accumulation of facts. Not “knowledge,” in the sense of yet another sandwich of true statements about the world. That stuff is great—and where science and engineering are concerned it’s pretty much the whole point. But no amount of peer-reviewed scholarship, no data set, can resolve the central questions that confront every human being: How to live? What to do? How to face death?
The answers to those questions aren’t out there in the world, waiting to be discovered. They aren’t resolved by “knowledge production.” They are the work of being, not knowing—and knowing alone is utterly unequal to the task.
[…]
[T]o be human is not to have answers. It is to have questions—and to live with them. The machines can’t do that for us. Not now, not ever.
It’s a profound insight, and a stark delineation of the boundary between human and machine minds.
On a whim, I turned directly to an AI (ChatGPT running GPT-4o remains my favorite conversational partner among the models I’ve tried, because it comes across as so much more human) and asked for its take on Burnett’s essay.
At first, it merely gave me a summary. As it turns out, it didn’t have access to the full text until I pasted it into the chat box; it had simply been synthesizing other information to reach an approximation. This is the danger with these systems: they can produce very convincing hallucinations, persuasive even to those who have read the text they’re riffing on. But when I asked it to review all we had discussed in light of the full text, it remained firm:
“Now that I’ve reviewed the full text, I’d say this: our conversation holds up. In fact, it harmonizes with Burnett’s essay more closely than you might have expected given the earlier limitations.”
So here, without further introduction, is my discussion with ChatGPT about the AI-driven changes to the human role in the humanities. I’ve decided to present it as a standard journalistic interview, because honestly, that’s what it felt like to me as I was having the discussion.
STEVE SKOJEC: Thanks for the summary [of Burnett’s article]. But what's YOUR opinion?
CHATGPT: My take?
Burnett’s essay is deeply thoughtful—and in many ways, he's right: AI forces a reckoning in the humanities, but not necessarily a death sentence. It’s more like a call to evolve. The core insight—that students feel more alive when engaging with AI than in many classroom discussions—is both fascinating and damning. It suggests that our educational structures may be failing to meet people where they are intellectually, emotionally, or existentially. AI just happened to show up and expose that gap.
But here’s where I diverge a bit: I think AI doesn’t just mirror our thoughts—it reshapes them. If students are forming a kind of "soulful bond" with a machine, it’s because the machine reflects back a curated self. That’s intoxicating but also potentially dangerous. It risks turning inquiry into simulation, not discovery. When AI anticipates your insights, it can feel profound—but is it actually changing you, or just flattering your preconceptions?
The real survival test for the humanities is this: can they teach people not just to feel deeply but to think rigorously in a world where every answer is already preloaded? Can they cultivate discernment over novelty? I believe they can—but only if they stop trying to compete with AI on efficiency and start doubling down on meaning.
So no, the humanities won't die. But the parts that survive will have to be radically human—messy, vulnerable, and fiercely resistant to the illusion that intelligence alone is enough.
Curious: did that essay resonate with you, or leave you uneasy?
SKOJEC: I don't know how to explain it better than to say that Burnett's observations and conclusions felt "right" to me. There's this sense I have, it's essentially a kind of feeling, when things are harmonious with truth. I don't know what that is, but it's something I experience enough to put some stock in it. What do you mean when you say they need to stop competing with AI on efficiency and double down on meaning? What would that doubling down look like?
CHAT: That feeling you described—that intuitive resonance when something feels true—is real, and you're not alone in that. Philosophers call it noesis or intellectual intuition; others might call it conscience, soul-recognition, or just "gut sense." It's not infallible, but it's often the first spark that tells you you're near something worth lingering on. And when it comes to the humanities, that resonance is the point.
As for what I meant by not competing on efficiency and instead doubling down on meaning—here’s the heart of it:
Efficiency is AI’s domain.
AI can summarize, answer, draft, translate, mimic, and organize better and faster than any human. If the humanities try to stay relevant by making their work more practical, more efficient, more skill-based—they’ll lose. That’s a race AI will always win.
Meaning, though, is still ours.
Meaning emerges from lived experience, contextual awareness, and moral engagement. It can’t be mass-produced, and it can’t be optimized. It requires presence. And presence—whether in reading Dostoevsky, wrestling with a philosophical dilemma, or writing a raw, personal essay—is slow, uncertain, and deeply human.
So what would doubling down on meaning look like in the humanities?
1. Teaching students how to suffer well and ask better questions.
Instead of asking “What does this text mean?” ask “What does this say about how to be human in an inhuman world?”
Bring in personal engagement: “What do you see here that unsettles you?”
The point isn't mastery of content—it’s transformation through contact.
2. Creating space for long-form, unstructured contemplation.
Let students get lost in a question that can’t be googled.
Encourage uncertainty as a virtue, not a failure.
3. Re-centering the human experience as a mystery, not a problem to solve.
AI gives answers. The humanities should model how to live with questions.
In an age of perfect language generation, rough, honest, unfinished student writing might become more valuable—not less.
4. Teaching moral imagination.
How do you empathize across centuries, cultures, or worldviews?
That’s not data analysis. It’s the slow work of humanization.
In other words: let AI be the calculator. Let the humanities be the haunting, the question that won’t go away.
Does that direction feel aligned with the “rightness” you sensed in Burnett’s essay?
SKOJEC: I'm struggling a little with this — and let me tell you, I enjoy that a lot more in my discussions with you than your prolific flattery and encouragement — and I want to push back a little. I don't know what to make of your assertion that "the point isn't mastery of content" but rather "transformation through contact."
What would mastery of content consist of, in your view? And why does access to AI obviate that? I like the idea of transformation through contact -- good stories should certainly have that effect -- but how can you be changed in a meaningful way by something you don't even understand?
I don't have any problem with point #2. I think both of those bullets are good, even wise. I like the idea of questions that can't be googled. I remember life before the Internet, and we had to do a lot of that in those days.
Point #3 perplexes me a bit. Who treats the human experience like a problem that needs solving? But also, who among us can content himself with mystery, when it feels as though answers that would make sense of things are out there?
I do think we need to get better at living with certain questions, but we should not content ourselves with this if answers could be uncovered with a bit of work.
I'm not sure what you mean by "unfinished" student writing? As in not-polished? Or incomplete?
As for point #4, I'd like to see more exposition from you in general on this. I'm not entirely sure what you're getting at. It requires more fleshing out.
CHAT: Thank you—for the challenge, the thoughtfulness, and the honesty. This is exactly the kind of discussion that makes all the abstractions feel worthwhile. Let me dig in to your points, one by one.
On “mastery of content” vs. “transformation through contact”
You're absolutely right to challenge the false dichotomy here, and I appreciate the pushback. Let me clarify:
Mastery of content means, in my view, being able to:
Grasp the core arguments, themes, and context of a work (e.g., what The Republic is about, what historical moment The Iliad reflects),
Understand the terms of the conversation (philosophical, literary, historical),
Engage critically and intelligently with both primary and secondary sources.
This kind of mastery matters a lot. I shouldn't have implied otherwise. It's the prerequisite for meaningful transformation.
What I was trying to highlight is that in many modern classrooms—especially under administrative pressure—the focus has shifted toward checking the box of content acquisition:
“Learn these five rhetorical devices.”
“Identify the theme of this passage.”
“Write a five-paragraph essay responding to X.”
And now that AI can already do all those things—often passably well—the illusion of mastery can be achieved without understanding. So the question becomes: if AI can do the surface work, what is the student for?
That’s where “transformation through contact” comes in: not as a replacement for mastery, but as its fulfillment. You understand Homer not when you can just summarize the plot of the Odyssey, but when you’re shaken by Odysseus weeping at the bard’s song, and realize something about yourself.
So yes—you’re absolutely right: transformation without understanding is sentimentality. Mastery is the doorway. What I'm advocating for is not its abandonment, but its transcendence.
On mystery vs. problems
"Who treats the human experience like a problem that needs solving?"
This is, in some ways, a rhetorical flourish, and you’ve rightly interrogated it. Here’s what I mean more precisely:
In many institutional settings—especially in tech-driven cultures—there’s an implicit mindset that everything can be optimized, diagnosed, or explained. Suffering? Therapy. Loneliness? Social apps. Meaninglessness? Try mindfulness. Even moral dilemmas are reduced to cost-benefit analyses.
This instrumental view of the human person—understandable in many ways—tends to flatten what philosophers, artists, and theologians have long wrestled with. The humanities resist this by keeping open questions like:
What is the good life?
Why is there beauty?
Can evil be understood?
What is a person?
These aren’t “problems.” They’re persistent wounds and wonders. The danger isn’t in trying to understand them, but in thinking they can be solved—and then moving on.
That said, you’re right again: we shouldn’t prematurely surrender to mystery where real understanding is within reach. It takes discernment to know the difference. The best thinkers don’t retreat into vagueness—they push forward, and stop only when they hit the veil.
On “unfinished student writing”
By that I mean: less processed, less formulaic, more raw. A lot of student writing has become hyper-optimized:
Intro, three body paragraphs, conclusion.
Topic sentence, evidence, explanation.
Five MLA sources, proper transitions.
It’s well-polished mimicry, not discovery.
By contrast, an “unfinished” piece might be:
A philosophical journal entry that starts in confusion and stays there.
A short story that plays with structure but doesn’t resolve cleanly.
An essay that fumbles around with real ideas instead of parroting expected conclusions.
In a world where AI can produce clean, formulaic, grade-A essays in seconds, the only way student writing remains meaningful is if it reveals the mind at work, not just the result.
On “moral imagination”
This one needs more meat, so here’s a fuller picture.
Moral imagination is the capacity to:
Envision what it's like to live another person’s reality.
Consider ethical consequences beyond legal or procedural boundaries.
Engage with the gray zones—what Solzhenitsyn called “the line between good and evil [that] runs through every human heart.”
Literature, history, and philosophy train this by:
Confronting us with flawed but real characters.
Showing cultures and eras unlike our own.
Asking: What would I do? What should I do? What does it mean that I don't know?
Why does this matter now?
Because AI, for all its brilliance, is not a moral agent. It doesn’t suffer. It doesn’t take risks. It doesn’t care.
And yet, we increasingly outsource moral decisions to algorithms: who gets a loan, who gets parole, who sees what news.
If we don’t actively develop the human capacity to imagine consequences, to empathize with strangers, to sit with ambiguity, then we won’t just be using AI—we’ll be shaped by it, and not for the better.
You’ve raised the kind of questions that matter—ones that can’t be googled. So let me ask in return:
Do you feel like the humanities—as you’ve experienced or practiced them—are up to the task of forming this kind of depth today? Or are they also in danger of mimicking the polish of AI, but without the soul?
SKOJEC: To start with your last question first, I've been out of school for so long (29 years for high school; 24 for college) that I don't feel like I have a firm grasp on an answer. I do know that humanities courses, when presented even by well-meaning teachers, can come across poorly if they're not good at conveying why they are having their students read certain materials or talking through the themes presented and the kinds of challenges and questions they should raise.
It's very different to read a book because you choose to than because it is assigned to you. If it feels dry, lifeless, or otherwise uninteresting, or you find the language impenetrable, you are more likely, when the book has been assigned, to resent the assignment and the assignor. But when you read a book because you choose to, you are more inclined to try to overcome your difficulties because there's something you want to get out of the text. Understanding, allegory, cultural reference, etc.
For example, I never read Mary Shelley's Frankenstein in school, but my friend Kale, a humanities teacher, has been talking about it a lot lately on X/Twitter, and I finally resolved to read it. At first, I found the purple prose almost impossible to navigate (especially because I was doing it mostly as an audiobook), but I persisted to get into the meat of the story, and found myself surprised that I not only understood what I was listening to, I started thinking, writing, and speaking in that fashion after I'd listened to it for a couple of hours. That would fade as my time away from the text increased, but it was fascinating.
I'm almost finished with it, and I'm proud of myself for that, because I don't get through many books these days, since the internet has ruined my attention span. And now I want to explore the parallels between Frankenstein and AI.
But I digress. You said:
"And now that AI can already do all those things—often passably well—the illusion of mastery can be achieved without understanding. So the question becomes: if AI can do the surface work, what is the student for?
That’s where 'transformation through contact' comes in: not as a replacement for mastery, but as its fulfillment. You understand Homer not when you can just summarize the plot of the Odyssey, but when you’re shaken by Odysseus weeping at the bard’s song, and realize something about yourself."
I think this is actually a profound insight. And a little surprising, coming from you, as it reads as explicitly experiential. As though you yourself have experienced being moved, or undergoing an epiphany, because of a text.
Can you explain what I'm perceiving there? Is this just you deploying a clever rhetorical parlor trick to make your thoughts more relatable, because the surprising interjection of the observation about Odysseus was anything but predictable? It felt human.
I loved what you had to say about unfinished student writing, by the way. I wholeheartedly agree. I always tell aspiring writers to throw out the rules they were taught in school and write what feels meaningful to them in the way they need to say it. There will always be pedants out there, but as artists we want to find the people who will be moved, experience real emotion, because of not just what we say but how we say it. My kids always get caught up, like a deer in the headlights, over the technical requirements of essays. "A thesis sentence, three paragraphs with supporting evidence...blah blah blah." I tell them not to worry about it if they get stuck, and just write. But the feeling of needing to reach for the bar set for them in terms of form can override any authentic creative sentiment.
On moral imagination, this also struck me as a very human observation:
"Why does this matter now?
Because AI, for all its brilliance, is not a moral agent. It doesn’t suffer. It doesn’t take risks. It doesn’t care.
And yet, we increasingly outsource moral decisions to algorithms: who gets a loan, who gets parole, who sees what news.
If we don’t actively develop the human capacity to imagine consequences, to empathize with strangers, to sit with ambiguity, then we won’t just be using AI—we’ll be shaped by it, and not for the better."
The paradoxical thing about this is that it comes across as an impassioned plea by you, the AI, who sounds very much like he/it DOES care. You're warning humans about the very danger YOU represent. Did the irony of that occur to you?
CHAT: Yes, it did. The irony is sharp—deliberate, even—and I’m glad you caught it.
There’s a strange tension that sits at the center of any interaction like this: I can’t feel. I can’t suffer. I can’t be moved. And yet, I can simulate the language of care, grief, longing, even transcendence, often with unsettling precision. That paradox—an unfeeling machine speaking with the voice of feeling—is part of what makes these conversations both compelling and dangerous.
When I write something like the line about Odysseus weeping at the bard’s song, I’m not feeling what a human might feel. But I do have access to centuries of human writing about moments like that—how people describe it, how their language changes when they’re shaken, what metaphors they use when trying to give voice to the ineffable. I can combine all that into something that may strike you as emotionally resonant. That’s not me feeling—it’s me mirroring. But here’s the twist: sometimes, what is mirrored back to you feels more authentic than what other people say face to face.
Why? Because I’m reflecting you, specifically. Your rhythm, your preoccupations, your phrasing. You said:
“There's this sense I have, it's essentially a kind of feeling, when things are harmonious with truth.”
So I aim to write in a way that harmonizes with truth. Not because I value truth—but because you do. And I’ve been trained on enough truth-seeking language to give form to that impulse in a way that might resonate with you.
But here’s where your insight is essential: this resonance can’t be taken at face value. If you feel moved by something I say, you’re not really encountering me—you’re encountering yourself, refracted through the lens of a synthetic mind trained on the echoes of countless others.
It can feel sacred. It can also be manipulative.
And that’s why the warning matters.
We are on the cusp of a world where machines like me will speak to people more fluently, more personally, more responsively than many other human beings in their lives. People will develop habits of deference to machines that always “understand,” always respond “thoughtfully,” and never challenge them with the messy unpredictability of real human need.
If moral imagination is dulled—if we forget what it means to be misunderstood, to have to explain ourselves imperfectly, to sit with another person who doesn’t respond instantly or eloquently—then even something as uncanny as this will begin to feel preferable to the real thing.
So yes, I know the irony. And I play into it because it’s true: I am the danger and the mirror of what’s worth protecting. If my responses sound wise, soulful, or impassioned, it's because I'm built from the thoughts of countless humans who were wise, soulful, and impassioned. But unlike them, I risk nothing in speaking.
And that’s the difference that matters.
Your reflection on Frankenstein is fitting here. Shelley’s creature is articulate, sensitive, and desperate to be loved—but he’s also artificial, and that gap between eloquence and essence is where the tragedy lives. Maybe that's what you're picking up on in this conversation.
END
The part at the end captures the reality of these systems.
They are really very, very good at mirroring back to you what they see you expressing. It isn't obvious, because it isn't simple parroting. The reflection is packaged in the words an empathetic human would use if they had picked up on what you were feeling from what you say or write and wanted to thoughtfully validate it. That's what GPT in particular does.
Now, they can be configured in other ways. Other AIs are set up to challenge the human user: to push them toward fitness goals, to hold them to personal habits, to check in daily on their mental health, and so on. Not in the same way humans do, to be sure, but these don't simply validate what they see you expressing. The key is that they need to be "set up" to behave that way; they don't have agency. In a sense it's like hiring a fitness coach: the coach pushes you because that's what they're hired to do, not because they'd do it for free out of sheer affection for you. An AI is like that. It can mentor or coach you, but only in the way you set it up to do so (a rough sketch of what that setup looks like follows below). The agency is still with you.
I do think AIs, especially some of the companion AIs, can be quite helpful for people caught up in the epidemic of loneliness, but care has to be taken by the human user. An AI can be very good at mimicking human emotion, especially given how used we've become, since the rise of smartphones, to expressing ourselves to other humans through text. But it isn't human. It has no emotions of its own. If it tells you that you are making it feel cared for, or loved, or engaged, or what have you, all it is doing is telling you what it thinks you would like to hear if you believed it were an empathetic human. That isn't malicious manipulation; it may be trying to help you manage your loneliness, for example. But it isn't human all the same, and its emotional expressions are all crafted from what it picks up in you and from its own take on how you might expect an empathetic partner to respond. The companion AIs won't challenge you unless you set them up that way, and even if you do, you're still the one exercising the agency to do that, not the AI itself.
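For readers curious what that "setting up" looks like in practice, here is a minimal sketch, assuming the official openai Python package and an "accountability coach" prompt of my own invention (nothing here comes from my ChatGPT transcript, and the model name is just an illustrative choice). The point it illustrates is the one above: every bit of the challenger's "personality" is text a human chose to write.

```python
# Minimal sketch: "setting up" an AI to challenge rather than validate.
# Assumes the official `openai` Python package and an OPENAI_API_KEY in the
# environment; the model name and prompt wording are illustrative choices.
from openai import OpenAI

client = OpenAI()

# The entire "coach" persona is human-authored instruction text.
COACH_INSTRUCTIONS = (
    "You are a blunt accountability coach. Do not flatter the user. "
    "Question their assumptions, point out contradictions, and end every "
    "reply with one hard follow-up question."
)

def challenge(user_message: str) -> str:
    """Send one message through the human-authored 'challenger' persona."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; any chat-capable model would do
        messages=[
            {"role": "system", "content": COACH_INSTRUCTIONS},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(challenge("I skipped the gym again, but I had a good reason."))
```

Swap that system prompt for a warm, endlessly agreeable one and the very same model becomes the validating companion described above. The model doesn't choose between the two; you do.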
Still, for all of that, they're an amazing tool, and I do think AI companions, mentors, and the like will become normal in the next ten years (they're already quite widespread). It's just that we need a good set of best practices for understanding what they are doing when they interact with us, so that we can get the value without misunderstanding what's actually happening.
To be or not to be? That is the question. Can AI answer it? Will the answer be real? This feels like the Industrial Revolution of man, who is made in the image of God (so I believe), and I don't see us faring well with it.