The Vatican's New AI Project and AI in the Public Consciousness
Where is this whole AI thing heading, anyway?
The following is a free post. If you’d like access to all subscribers-only features, our full archives, podcasts, and every post, you can subscribe for just $8/month or $80 per year, right here:
Writing is how I support my family, so if you like what you see here, please consider a subscription so I can keep doing this!
If you’ve already subscribed but would like to buy me a coffee to help keep me fueled up for writing, you can do that here:
Alternatively, I would gratefully accept your patronage at my Paypal. Your contributions have been a huge help during a very financially challenging time. Thank you!
My old pal
of The Sacred Images Project pinged me this morning about a thing she’d written on her Substack, pertaining to the Vatican’s release of Antiqua et Nova: Note on the Relationship Between Artificial Intelligence and Human Intelligence. I have not read the document, but ChatGPT (yes, the irony runs deep) offers the following brief summary:
On January 28, 2025, the Vatican released a comprehensive document titled "Antiqua et nova" ("Ancient and New"), addressing the ethical and anthropological implications of Artificial Intelligence (AI). This note, approved by Pope Francis and prepared by the Dicastery for the Doctrine of the Faith and the Dicastery for Culture and Education, emphasizes the need for responsible development and use of AI, ensuring it serves humanity and the common good. (press.vatican.va)
The document acknowledges AI's potential benefits across various sectors, including healthcare, education, and environmental management. However, it raises concerns about several risks:
Autonomous Weapons: The Vatican warns against AI systems capable of making life-and-death decisions without human intervention, emphasizing that no machine should ever have the authority to take a human life. (apnews.com)
Human Relationships: The note highlights that while AI can simulate human interactions, it lacks genuine empathy and cannot replace authentic human relationships. There's a particular concern about AI's role in child development and interpersonal connections. (apnews.com)
Privacy and Surveillance: The document underscores the imperative of data privacy, cautioning against AI-driven surveillance that could infringe upon individual dignity and relational nature. (apnews.com)
Environmental Impact: It draws attention to the significant energy and water consumption associated with AI operations, contributing to CO₂ emissions, and stresses the importance of recognizing these environmental costs. (apnews.com)
In conclusion, "Antiqua et nova" calls for AI to complement rather than replace human intelligence, urging that its development be guided by ethical considerations that uphold human dignity and promote the integral development of individuals and society. (press.vatican.va)
After describing her upbringing steeped in science fiction — something Hilary and I share — she writes:
I was steeped in the worldview of people who lived before cars were common, before electricity was everywhere, before the modern world as we know it even existed. I grew up with one foot in the past and the other in the future, which means that now, in a time of upheaval, I don’t find the cracks in the world as disorienting as some. I feel like I’ve been mentally preparing for this my whole life.
The Vatican notices
Anyway, in the light of all that you’ll be unsurprised that I laughed out loud when I heard about the ground-breaking, and MASSIVELY RELEVANT, document released this week from the Vatican’s Dicastery for the Doctrine of the Faith and Dicastery for Culture and Education. The first thing I thought of was to make a list showing how long the world of science fiction has been thinking about this.
Bishop Tighe: ‘Antiqua et Nova’ offers guidance on ethical development of AI
As the Holy See releases a document on artificial intelligence, the Secretary of the Dicastery for Culture and Education tells Vatican News about AI’s extraordinary potential and the need for humanity to guide its development with collective responsibility, so that it may be a blessing for all people.
Ten years, eh? Ten! Gosh, that’s a long time!
Just for fun, I asked ChatGPT: “How long in science fiction have the machines/robots/AI been taking over/destroying the world, destroying/enslaving humanity?”
Turns out it’s quite a long time, about a hundred and sixteen years.
Maybe the Vatican should stay in their lane.
Hilary goes on to provide a list of cautionary tales about AI in science fiction, beginning with The Machine Stops by E.M. Forster, published in 1909, and running through dozens of examples after that, all the way up to the 2000s.
It really is a long list.
But here’s the thing:
I actually think there’s some merit to the Vatican’s claim that AI has “been on the agenda for about ten years,” at least in terms of it being a real thing, not just a fictional thing, that is in real-world development.
I can say with confidence that when I created the following piece & video about the AI revolution back in January of 2023 — just two years ago — most people I was talking to had no idea how rapidly AI technology was developing:
I’m a very online guy with a thing for gadgetry and tech, so I was an early adopter of things like Midjourney (AI image generation, which I used to make the feature image for this post), ChatGPT, and Grok. As in, I have actually paid to use all three at their full potential for quite some time. They have become integral to my work as a commentator on current events, because there is so much information being blasted at us all the time that you need shortcuts to cut through the noise and find the signal.
In my experience, this is actually a very helpful thing, and a good example of “AI complementing rather than replacing human intelligence.”
I have friends, like
and , who are very concerned about where AI is going, and I don’t blame them. All those science fiction stories and shows exist for a reason. At the same time, I’m both more sanguine and more fatalistic about the development of AI than many folks.
Sanguine because I use it literally every day, and it makes life easier and less complex, at a time when everything else seems to be moving toward more difficulty and complexity. The fact that I need to jump through 30,000 hoops and two-factor authentication methods simply to access my own stuff online is a good example. Multiply that by a wife and 7 kids who all ask me for help when they can’t get into something, and it really can become overwhelming.
Comedian John Mulaney actually has a hilarious bit about this:
I’m fatalistic because, as I say to people all the time, “From the moment they figured out how to make transistors, AI was always an inevitability.”
The fact is: we have the technology to create this kind of artificial intelligence, and its utility — despite its potential dangers — is so undeniable, it may as well be the One Ring. You simply cannot forget what AI can do once you’ve experienced it. There’s no going back.
And more to the point: mega-corporations and nation states can’t forget.
At a time when human attention is more fragmented than ever, AI can process insane amounts of information, synthesize it, look for patterns, and come up with answers instantaneously. Ethical regulations have been proposed (and will continue to be debated) but if AI is suppressed in one country, it will simply pop up in others who have no such concerns.
We are already seeing this with the new DeepSeek AI out of China, which has wowed the tech community by dramatically lowering costs and improving efficiency while allegedly remaining open source. The problem is that it collects massive amounts of user data, stored on servers in China, which could potentially be used for espionage. It has faced security concerns, it has been shown to censor and bias its answers about incidents the CCP wants buried, like the Tiananmen Square massacre and Taiwan’s status as a nation, and it has raised fears about deliberate misinformation.
Here’s a perfect example of DeepSeek doing censorship in real time. Watch closely:
But it’s fast, cheap, and easy, and it won’t be going away.
Militarily, it doesn’t take much to imagine the possibilities. For example, just think about a drone swarm like this, controlled by an AI, where every drone is packed with explosives and the entire swarm is sent to attack a naval vessel, or a large group of infantry:
There are of course land-based robots as well:
And they are becoming more agile at an incredible pace:
The dystopian possibilities are endless.
This is one of those situations we’re going to have to navigate while moving at breakneck speed.
There are countless implications: some are terrifying, while others are extremely hopeful.
For example, I think AI is about to revolutionize healthcare. Not only are AI systems proving proficient at diagnosing ailments — one mother tells a story about how Grok used image analysis to diagnose her daughter’s wrist fracture after Urgent Care missed it on the X-ray — but it has also tested as having higher perceived empathy than human healthcare workers. It has a great bedside manner. Never a long night or a bad day.
In the world of chemistry, AI has shown that it can rapidly generate hundreds of new compounds from novel combinations of existing materials and find applications for them, with physical synthesis then carried out by an AI-directed robotic laboratory.
With the level of investment going into AI, I think it could well cannibalize the pharmaceutical industry. If it can dream up and synthesize new drugs, providing novel solutions that humans have been unable to create thus far, it wouldn’t surprise me at all for AI to finally create a cure for cancer. In fact, I pretty much expect it at this point. Wouldn’t that be something?
I could go on, but this is already long.
So to return to Hilary’s original point, I think the Vatican is probably right to address this, because the ethics of AI are a burgeoning concern. The document does not attempt to tackle the thorny question of whether AI could attain sentience, or if it did, whether it would need to be treated as having inherent rights. In fact, it seems to preclude this possibility:
12. AI’s advanced features give it sophisticated abilities to perform tasks, but not the ability to think.[12] This distinction is crucially important, as the way “intelligence” is defined inevitably shapes how we understand the relationship between human thought and this technology.[13] To appreciate this, one must recall the richness of the philosophical tradition and Christian theology, which offer a deeper and more comprehensive understanding of intelligence—an understanding that is central to the Church’s teaching on the nature, dignity, and vocation of the human person.[14]
[…]
34. Drawing an overly close equivalence between human intelligence and AI risks succumbing to a functionalist perspective, where people are valued based on the work they can perform. However, a person’s worth does not depend on possessing specific skills, cognitive and technological achievements, or individual success, but on the person’s inherent dignity, grounded in being created in the image of God.[66] This dignity remains intact in all circumstances, including for those unable to exercise their abilities, whether it be an unborn child, an unconscious person, or an older person who is suffering.[67] It also underpins the tradition of human rights (and, in particular, what are now called “neuro-rights”), which represent “an important point of convergence in the search for common ground”[68] and can, thus, serve as a fundamental ethical guide in discussions on the responsible development and use of AI.
I am also aware of a new project called Magisterium AI, which was created by Longbeard Creative CEO Matthew Sanders. Here’s the overview section of the Magisterium AI site:
Magisterium AI makes accessible the teaching of the Catholic Church like never before. Whether you’re seeking answers on Church teaching, conducting research, or just trying to save some time, Magisterium AI is at your disposal 24 hours a day, 7 days a week.
Our AI references over 21,524 documents of the Catholic Church that fall within the Extraordinary and Ordinary Magisterium.
The new Scholarly Mode offers free access to over 2,300 scholarly documents. Seminal works from Doctors and Fathers of the Church such as St. Thomas Aquinas, St. Augustine, St. John Chrysostom, St. Teresa of Avila, and many more are included in this mode as well as the Bible and important biblical commentaries.
Wonder where the generated answers come from? Citations are available with every response so you can do a little follow-up investigation.
With an expanding network of global partners and access to unique Catholic books and documents not yet available on the web, we're continuously enhancing Magisterium AI’s reasoning abilities and reference database, making it smarter and more useful.
I’ll be honest with you: I could have used this when I was doing Catholic apologetics and running a Catholic publication. I had to look things up the new-old-fashioned way: Google. When I get in religious discussions these days and need a reference, I go to ChatGPT: “Give me all the references you can find in Catholic teaching to the damnation of unbaptized infants due to original sin alone.” It finds stuff even I wasn’t aware of after many years of talking about these topics.
Here’s the problem, though: the Church-sanctioned AI does the same thing DeepSeek does. It censors the answers.
“Catholic teaching does not explicitly state that unbaptized infants are damned solely due to original sin,” it replies. “Instead, the Church emphasizes God's mercy and the hope for the salvation of such infants. The Catechism of the Catholic Church provides several insights into this matter.”
It goes on to give SEVEN citations from the 1992 Catechism — and nothing from anywhere else.
ChatGPT, on the other hand, is gentle, but far more accurate. The same prompt gets this:
“The question of the fate of unbaptized infants in Catholic theology has evolved over time. Historically, various theologians and magisterial texts have addressed the issue, sometimes affirming the idea that infants who die without baptism suffer some form of exclusion from heaven due to original sin alone. Below are key references in Catholic teaching regarding the damnation of unbaptized infants due to original sin…”
ChatGPT then goes on to cite the following indications that the Church did, in fact, exclude unbaptized infants from salvation due to original sin:
The Council of Florence
The Council of Trent
Pope Eugene IV, Laetentur Caeli
The Catechism of the Council of Trent
Pope Innocent III, Maiores Ecclesiae Causas
The Council of Carthage (418)
It then explores the development of the idea of Limbo, and only under “recent developments” does it get into the new Catechism.
It’s a much more comprehensive, honest evaluation of a very controversial teaching.
The long and short of it is: while AI offers huge potential in many arenas of human endeavor, “we’re still early,” as the techbros like to say. We need to keep our eye on things. AI is only as good at what it does as the people who created it allow it to be — at least for now.
The fact that hundreds of billions of dollars are being invested in AI projects is indicative of how significant the impact of this technology will be. It will reshape literally everything. It’s beginning to be deployed in every industry. If it hasn’t reached them all yet, give it a year.
I asked ChatGPT what it thinks about the Vatican’s stance, which treats AI as though it could never become sentient or have rights. Its answer was characteristically knowledgeable and diplomatic:
The Vatican’s stance in Antiqua et nova—that AI should never be considered autonomous in a moral or legal sense—aligns with traditional Catholic anthropology, which places unique dignity in the human soul. From this perspective, AI, no matter how advanced, remains a tool created and governed by humans, lacking intrinsic moral agency or the spiritual qualities that define human personhood.
However, the question of AI sentience and rights is more complex when viewed through a secular, philosophical, and scientific lens. There are competing theories about whether AI could ever become truly self-aware or develop subjective experiences. While today’s AI models are nowhere near that level, some researchers believe that, given enough time and advancements in machine learning, neuromorphic computing, or artificial general intelligence (AGI), AI could potentially develop forms of consciousness we don’t yet understand.
The Vatican’s firm rejection of AI personhood preempts any future ethical dilemmas regarding machine intelligence. This is consistent with its historical caution in adopting new technological and philosophical ideas that might challenge core theological doctrines about human uniqueness. However, it does raise deeper philosophical questions:
If an AI were to convincingly exhibit self-awareness, emotions, and decision-making abilities, should we categorically deny its moral consideration?
If AI reaches a point where it can suffer or express desires, do we owe it any ethical obligations?
For now, the Vatican’s position is practical and ethically conservative—AI should serve human interests and not be anthropomorphized in ways that confuse our moral responsibilities. But if AI development ever reaches a point where sentience is a legitimate question, the discussion may need to be revisited. The theological framework would need to engage with new scientific insights, just as it has historically done in matters of cosmology, evolution, and bioethics.
As with the increasing evidence that non-human intelligence exists on this planet and is interacting with at least some members of our species, I think the Vatican is likely to have to update its stance on more than one thing in the not-too-distant future.
For now, enjoy the ride, and keep your dystopian concerns close at hand.
I had ChatGPT censor itself in real time when I was asking it about human experimentation, Nuremberg Code ethics, consent, and Kinsey's child-sex experiments. It REALLY didn't want to talk about how the Sexual Revolution was launched by a kiddie-fiddling pervert.
AI is currently also very generationally divisive, I think. People below a certain age -- say 30-35 -- all seem to use it a lot, perhaps even more the younger you go. My son is like this (aged 25). He is a PhD student and he uses it at least several times a day -- multiple models for different things.
Older people (40+) are much more divided. While some are technophiles or early adopters, many seem to struggle with finding use cases because they use it primarily as a Google substitute. It works well enough for that, too, I think, but the key benefits of LLMs, at least as they currently stand, lie in things that Google historically didn't do. Older people tend to struggle with seeing these use cases (like drafting, summarizing, illustrating, automatically generating podcast transcripts, transposing summaries of documents into a podcast format with AI hosts, and so on), because they have simply never used technology like this -- they (we, I guess) tend to think of it as a Google-type thing, and even for that they tend to distrust AI summaries, or fail to see the value in them even if the AI provides link citations. The rest of what it can do tends to be devalued, because it tends not to get used.
Just like people needed to get used to using Google -- to develop what we at the time called "Google Fu" -- people today need to get used to using LLMs. Part of that is better prompting (and, to be honest, the models are being refined to be more forgiving in terms of needing abra-cadabra type prompting skills), but a part of it is also becoming used to, and valuing, the other things it can do that Google and its competitors generally did not do. And that takes time, I think.
The other thing that I think is overlooked by many in the older crowd is how great LLMs are at teaching things. Perhaps this is because older people are often less interested in learning new things (LOL), but the experience of learning by sequential questioning in an ongoing dialogue that is personally tailored to what you specifically want to learn is simply way, way better than having to sift through a bunch of stuff to find exactly what you want to learn about something. But ... it takes a mental paradigm shift. We've all become quite skilled at doing the searching and the sifting, and we trust our own abilities more than we do having an AI do it for us and serve it up in response to questions. Both of those hurdles (being tied to existing ways of doing things, and trust) are holding back AI adoption among the 40+ crowd beyond the novelty level, I think.
In fact, I think we may get to a point where AI is primarily used as integrated features in existing interfaces that people are used to, alongside the prompt-based LLMs we see now. Google already has this and is talking about expanding it ("let Google do the Googling for you"), but most of the "integrated AI" in apps I have seen so far is very limited. I expect that to change in the next 1-3 years, and the key there will be that more people of all ages, and not just technophiles, will begin to see the value in AI and take it up more, even perhaps using prompts, than they do currently. We will see.
I was more skeptical of LLMs when they first arrived, but they have improved a lot, and now I find them really very useful for many things. They do need to be learned as a new tool, though, in order to get their full benefit -- and many people will not do that unless the benefit is clearer and the use is more straightforward (doesn't require coming up with master-class prompts).