William Jones of the Future of Life Institute speaks to Vatican News about the current state of AI development, the impact it is already having on human beings, and the role of religion in carving out a positive future for our species.
By Joseph Tulloch
For years, AI safety experts have been warning of a future moment when AI models become advanced enough to improve themselves, producing more capable successors that then repeat the cycle.
They warn that such a dynamic—referred to as ‘recursive self-improvement’—could potentially lead to exponential, uncontrollable improvement in AI capacities, posing a severe threat to humanity.
With the recent release of AI models with much-improved coding abilities, some of those experts are now asking if that moment has arrived.
Among them is William Jones, a Futures Program Associate at the Future of Life Institute in London, which works to steer new technologies away from such extreme risks. He spoke to Vatican News about the threats posed by AI, and the role of religion in safeguarding a positive future for humanity.
The following transcript has been lightly edited for style and brevity.
Vatican News: In your view, what’s the state of AI right now?
William Jones: I think what we’re looking at is a lot of things coming to fruition that people in AI have been talking about internally for a while. We thought and hoped they might be further away. ‘Recursive self-improvement’ is one of them. A few years ago, this was touted as the big fear among AI safety people: the moment you have AIs that can actually create the next model and improve themselves. No one knows how fast or how far that can go, and it doesn’t seem like a process we have a great deal of control over, to put it mildly.
Claude Code and other systems that have been released in the last month or two, I think, have brought the prospect of replacement home to a lot of people. The companies are aiming not to create systems that can help workers, but to replace them. Obviously, with coding, it’s still more in the white-collar realm, but ultimately this is something that could extend to blue-collar workers as well, with improvements in robotics.
Alongside that, we have a lot of back and forth at the moment in the US between AI companies and the government. In the case of the dispute with Anthropic, the Pentagon seems to be saying that autonomous lethal AI is an essential component of government usage of these technologies, which I think is quite sobering.
Q: So there’s clearly a need for responsibility in AI governance. Before the interview started, you were telling me about how you’ve seen a renewed interest in the role that religion can play in that regard. Could you explain?
I would say last year I saw a real growth in the number of religious writers, especially in the US, who were addressing these topics. I think the main thing that was getting a lot of them interested in the topic was the child suicides: these cases of teenagers talking to chatbots, gradually becoming more and more alienated from the real world, and ultimately taking their own lives.
The mother in one of the most prominent cases, Megan Garcia, has become a very vocal advocate. I believe she’s a professed Catholic; she met the Pope last year. In DC, we’ve seen religious coalitions involving groups like the National Association of Evangelicals, the Institute for Family Studies, and various Catholic academics. At one point, lobbyists in the federal government tried to ban states from legislating AI for the next ten years through Congress, but there was a real groundswell of opposition to that, a sort of coalition of labour groups, faith groups, child policy groups and AI safety people coming together, saying, “These companies are getting too powerful to remain unregulated; look at the impact that’s having on our children.” I mean, we are talking about hundreds of thousands of cases of AI psychosis from young people and other vulnerable people spending hours every day for months on end only talking to AIs.
For many reasons the chatbot issue resonates with religious groups. It’s very much a pastoral issue. Equally, it’s clearly a spiritual issue in many ways. It relates to people’s concept of who they are. And the Catholic Church has a lot of writing around relationality, around the embodied relationships crucial to human flourishing. For all these reasons, I think this has brought a lot of new religious leaders into this battle and driven home the urgent need for action.
Q: You mentioned the Catholic Church there. The Vatican has been very active recently on AI. There have been initiatives such as the Rome Call for AI Ethics, and documents like Antiqua et nova. What’s your assessment of the role the Vatican is playing in the fight for AI safety?
I think the first thing to say is that the Catholic Church has been, in many ways, ahead of all other major faiths and denominations in its response to AI. You mentioned the Rome Call – that was very early on, in 2020. And I’ve been told by members of other faiths that they wish they had their own version of Antiqua et nova, which came out at the beginning of last year. Protestant groups often acknowledge that they can’t come together in a central location and put on workshops as the Vatican does.
Pope Leo came in last year and, from the outset, talked about AI as one of the main issues of his papacy, at least for the moment. He drew a comparison with his predecessor Pope Leo XIII, who responded to the industrial revolution with Rerum Novarum, and suggested that he, as Pope Leo XIV, was keen to bring the riches of Catholic Social Teaching to bear on the AI revolution.
I would say that another initiative that’s been quite inspirational to other faiths is the series of workshops organised by the Dicastery for Culture and Education, bringing together academics and ethicists. Those workshops initially produced a book called Encountering Artificial Intelligence, and just in the past few months they’ve come out with a second, entitled Reclaiming Human Agency in the Age of Artificial Intelligence.
These AI systems are becoming more and more agentic. What does that mean for human agency? It’s more of a philosophical than a technical question, but certainly very much needed at the moment in terms of how we think about human empowerment.
What role are humans going to play in a world where AI is taking more of our jobs and is increasingly making our decisions for us, even decisions over life and death? The Catholic Church has been very clear for almost ten years that lethal autonomous weapon systems must be banned. They haven’t been yet, but the Church’s leadership on that has been really valued in civil society circles and multilateral governance efforts.
Q: As well as these life-and-death issues, the Pope has spoken about some of the more mundane ways that AI is penetrating our daily lives. For example, he told children not to use AI to do their homework.
Yes, as time goes on, the Holy Father is saying more and more about different facets of the AI behemoth. He told a group of children that, if AI were taken away tomorrow, they should still be able to think and feel for themselves. Just a few days ago, he told a group of priests from Rome that they shouldn’t be using AI to write sermons—sermons should be an expression of their personal faith, and AI can’t do that.
So the Pope is speaking both from a spiritual and, if you like, developmental point of view, saying to both children and priests, and whoever it might be, that if we decide we’re going to sit back and let AI do everything for us, we won’t actually develop. Our formation as humans and as Catholics will be compromised and limited.
The Pope has also had audiences with filmmakers, with corporate governance, with parliamentarians. And these themes are sort of coming up again and again, this idea of protecting and cultivating the dignity of the human person.
With AI becoming increasingly autonomous, it will be interesting to see how the Church speaks into the bigger questions around the pursuit of AGI, artificial general intelligence, or artificial superintelligence. That’s what company leaders constantly say they’re aiming for. Tech leaders have said that humans will not be in charge in a few years’ time and have claimed this is inevitable.
It will be good to have the Church’s voice stressing that humans should remain stewards of God’s creation and that AI must remain a tool, rather than becoming an uncontrollable force that replaces us.