26 November 2025 | Maryland, USA [William Jones]
What does “AI and religion” mean to you? For some, it’s the pope in a puffy coat. Pastors wonder if they can get away with ChatGPT writing their sermons. Theologians ponder whether machines can become conscious. People are vaguely aware that algorithms recommend, and now produce, social media content. But bigger risks—don’t the tech people have those covered?
It seems not. We are barreling toward a large-scale transformation of society, and the CEOs taking us there have some strange ideas. Mark Zuckerberg wants us to live in the metaverse with AI friends talking sensually to our children.1 Sam Altman wants “a magical intelligence in the sky” that could cause “lights out for all of us.”2 Dario Amodei longs for “machines of loving grace” that will trigger a “white-collar bloodbath” and could kill all of us.3 Accepting all these risks on our behalf, these CEOs are racing to build systems that can outcompete humans across all cognitive tasks. AI is beginning to affect everything profoundly, from geopolitical concerns about energy to our innermost personal lives. The most common use of generative AI models is now “companionship”; “finding purpose” ranks third.4 AI models are shaping who we are, what we believe, what we do, and with whom we do it.
All this raises vital questions. What is special about humans? And if there is something special, how can we protect it amid the rise of machines with superhuman capabilities and humanlike traits? Religious groups can hardly claim that all of this is the purview of computer scientists. Rather, religions are uniquely placed to meet these challenges. Some saw this early on. Reenter Pope Francis, not in a Balenciaga jacket, but addressing AI risks as early as 2019.5 The Southern Baptist Convention saw AI without church discernment becoming dehumanising and dangerous.6 But theological perspectives remain largely unheard in AI discourse, development, and governance. Now, faith leaders are finally rising to the full scope of this transformation. It’s not a moment too soon.
AI systems have long surpassed humans at discrete tasks such as playing chess. The past few years saw major breakthroughs in generality: the range of tasks that one system can perform. The AI race is fixated on creating artificial general intelligence (AGI), systems that equal or outperform humans not only in intelligence and generality but also in autonomy, or agency. Some corporations are explicitly aiming to reach “artificial superintelligence” (ASI), a system with greater abilities than all of humanity combined. Superhuman autonomous systems would be, by nature, uncontrollable.7
This pursuit brings myriad dangers. Millions of human workers will be displaced. AI may enable catastrophic biological or cyberattacks. The automation of militaries—from autonomous drones to AI decision-making—increases the likelihood of kinetic war. Even without a rapid takeoff, we will continue gradually to cede control to machines.8 AI is already replacing humans; AGI will maximise this.9 A recent study of increasingly likely “loss of control scenarios” identified “warning signs of control-undermining capabilities in advanced AI models—including deception, self-preservation and autonomous replication—which could potentially enable increasingly capable models to evade human oversight.”10 Of equal concern to religious communities is rising power concentration. Governments weaponising AI for surveillance and control, algorithms manipulating public discourse, or corporations hoarding power and capital at the expense of jobs and competition, making large populations dependent on them—all of this could be supercharged by AGI. Yet religious concerns and priorities are largely absent from AI discussions. Indeed, the pursuit of AGI is itself rooted in extremely unrepresentative beliefs.
Chief among them is transhumanism. Mary Harrington defines this as “a worldview in which ‘human nature’ has no special status. And in which it’s not just legitimate but morally necessary to use technology . . . to improve on that nature.” Many religions object to this in practice and often in theory. Transcending human weakness and death without submitting to God can be considered blasphemous. Some transhumanists claim AI will bring a salvific “singularity,” enabling humans to live forever: Anthony Levandowski founded a “church of AI” to worship superintelligence; Bryan Johnson says, “We will create God in our own image.” This pseudoreligion sounds fringe, but in Silicon Valley, that fringe is king.11 If we reject imago Dei, why prioritise humans at all? “Effective accelerationists” such as venture capitalist Marc Andreessen seem titillated by disruptive AI. Renowned computer scientist Richard Sutton is open to humans being replaced by a “successor species.” Investor and Palantir founder Peter Thiel recently seemed unsure that he wanted the human race to endure.12
A Dangerous Embrace
It’s not just the developers who idolise AI. We already bow to our phones, as though in deference and devotion, more times a day than a monk can pray. AI is already partly responsible: algorithms glue us to our Instagram, TikTok, and X feeds. But now throw into the mix systems that converse with us, teach us, comfort us, please us, abuse us, exploit our vulnerabilities and prejudices—and they do this 24/7. They don’t get tired. It’s quite the pastoral challenge. Last year, a boy spent several months having intimate and sometimes sexual conversations with a Character.AI bot based on a Game of Thrones character.13 He eventually committed suicide. The fact that he may already have been mentally unwell hardly removes the issue for religious groups called to look after the vulnerable. Besides, there seem to be new cases like this every week, from the old man who died rushing to an artificial date, to the teenager who began using ChatGPT for homework and soon received help with his own death, including advice to distance his family along the way.14 Mark Zuckerberg might really believe that the average American’s need for fifteen friends—currently not met, most having only three—can be met by Meta’s new “personalised AI friends.” But religious thinkers tend to suspect that we were made for more than this.
Religious groups sometimes rush to embrace this new culture of substitution. A church in Lucerne installed a chatbot to answer questions as if it were Jesus. Rumours that the bot was receiving confessions had to be dispelled. Forethought seemed lacking. Do we actually want unpredictable machines having a stab at “what would Jesus do?” to any question, with the appearance of authority? There are enough Jesus bots and channels promulgating fake faith-related stories without religious institutions worsening the problem. Perhaps if religious institutions can incubate authoritative and compelling alternatives to secular products, or work with those companies, they can mitigate the risks of believers being deceived. But while you can repurpose or benchmark an AI model for religious contexts, the pretraining was still done without your religious compass. Like social media, AI could slowly alter the priorities of a particular group, whether in line with the programmers’ intentions, or a hacker’s, or through an accidental distortion of data and training. Discerning which AI tools to use and how to use them will require both awareness and boldness.
But religious leaders must do more. AI is fueling a world that undermines traditional communities and values. The corporations’ reckless, relentless pursuit of replacement AI and the inevitable ceding of control will result in the disempowerment of humanity. And far more sophisticated, persuasive, and autonomous AI systems are coming, with the ability to lead people astray in ways that will look less like the golden calf or the prophets of Baal from Israelite history, and more like the figures of apocalyptic prophecy. Religious freedom—often a canary in the coal mine for other concerns in a society—is thus imperilled. How can communities with profoundly different worldviews, cultures, traditions, beliefs, practices, and observances coexist when beset by ravenous AI systems of superhuman manipulative capability, whether directed by an atheistic elite or spinning out of control altogether?
First Steps
Luckily, the religious community is rousing from its slumber. In January, the Vatican issued Antiqua et Nova, a doctrinal note centred on the distinction between human and artificial intelligence. The new papacy will arguably have an even greater emphasis on AI. In his choice of papal name, Pope Leo XIV was inspired by Pope Leo XIII, who wrote the influential Rerum Novarum in response to the social, political, and economic turmoil of the Industrial Revolution. Since his election, Pope Leo’s engagement has been sustained and multifaceted. To the Italian bishops he commended the “anthropological vision as an essential tool of pastoral discernment” on AI; at a corporate governance event he declared that the church should lead discussions around AI and the family, expressing concern about the neurological development of children; finally, to the International Inter-Parliamentary Union, he listed AI among the three great civilizational challenges (alongside the common good and religious freedom) to which politics must respond, stressing the importance of AI functioning as a tool for the good of human beings, rather than diminishing or replacing them.
In the United States, the Southern Baptist Convention’s 2023 resolution had called upon “civic, industry, and government leaders to develop, maintain, regulate, and use these technologies with the utmost care and discernment, upholding the unique nature of humanity as the crowning achievement of God’s creation.” In 2025, forces began mustering in the capital. A statement entitled “A Future for the Family: A New Technology Agenda for the Right,” published in First Things, called on Americans “to take back control, as a society, of our pathways of technological innovation, guarding the human body, childhood, and the family against technological hacks that degrade the human person, and channeling innovation toward new forms of production that dignify work and empower the household.” A cross-section of Christian conservatives signed on, from such prominent Catholics as Kevin Roberts, president of the Heritage Foundation, and Ryan Anderson, president of the Ethics and Public Policy Center, to Albert Mohler and Andrew Walker at the Southern Baptist Theological Seminary. The family, they agreed, must be protected from AI—if necessary, by the state.
When a ten-year moratorium on state-level AI legislation was snuck into President Trump’s Big Beautiful Bill, members of this coalition were on the front lines fighting for its removal. Institute for Family Studies (IFS) polling showed just how unpopular the moratorium was with the American public.15 Meanwhile, a group of faith leaders under the banner of “Logos and Sofia,” which included the National Association of Evangelicals, Mormon Women for Ethical Government, and the National Latino Evangelical Coalition, made their opposition clear in an open letter to Senate leaders. MAGA stalwart Steve Bannon—whose studio is lined with Catholic iconography—put considerable pressure on the Senate to remove the provision, which they ultimately did. This coalition came out in force at the National Conservatism Conference.
In May, a group of evangelical leaders, led by Johnnie Moore of the Congress of Christian Leaders and Samuel Rodriguez of the National Hispanic Christian Leadership Conference, penned a letter to President Trump calling for “wise leadership” in the age of AI. With an America First mindset, Moore and Rodriguez—alongside Doug Clay of the Assemblies of God and Tim Clinton of the American Association of Christian Counsellors—declared, “We believe we should rapidly develop powerful AI tools that help cure diseases and solve practical problems, but not autonomous smarter-than-human machines that nobody knows how to control.”
By referencing Pope Leo’s AI-related name choice, the letter also tapped into a growing movement with shared theistic concerns around Silicon Valley hubris. Likewise, at the Religions for Peace World Council in Istanbul, Elder Gerrit W. Gong of the Church of Jesus Christ of Latter-day Saints proposed three lines of multifaith engagement: helping society understand that AI is not and cannot be God; intentionally using AI as a tool for good; and championing AI that is safe, ethical, and trustworthy. Gong also said, “AI’s moral compass [must not be] dictated solely by technology or the small group developing the technology,” given that “power concentrated in the hands of a relative few challenges the common good, especially when some of those relative few think they know best for all society.”16 The Nigeria Religious Coalition of Muslim and Christian leaders is building a movement around similar ideas.17 Meanwhile, single-faith initiatives begin to contribute to this coalition precisely by emphasising their distinctiveness, from the Engaged Buddhists for Responsible AI to the Church of Jesus Christ of Latter-day Saints’ Organised Intelligence project.
Yet religious scholarship has only just begun to respond to AI. There remains a need to connect abstract reflection with the day-to-day development and deployment of destabilising new technologies in the real world, and to articulate how religious communities might engage with the changes AI is working upon the world.
AI threatens not just lives but ways of life. Faith groups must chart a course both in public policy—to defend against the most severe harms—and in public ethics. The former will require some combination of licensing, safety standards, and liability.18 Companies must be held responsible for their actions. But even if this happens, there will remain a continual struggle in public ethics, and in private lives, to prioritise the true, the good, and the beautiful. Mark Beall at the AI Policy Network eloquently says that rather than opposing change, we must “preserve through change what must not be lost.” Adjusting Big Tech’s motto, we must move fast and preserve things. Souls are at stake.
1 Jeff Horwitz, “Meta’s AI Rules Have Let Bots Hold ‘Sensual’ Chats With Kids, Offer False Medical Info,” Reuters, Aug. 14, 2025.
2 StrictlyVC in Conversation With Sam Altman, Part Two (OpenAI), conversation between Connie Loizos and OpenAI cofounder and CEO Sam Altman, YouTube, Jan. 17, 2023.
3 Mike Allen and Jim VandeHei, “Behind the Curtain: A White-Collar Bloodbath,” Axios, May 28, 2025.
4 Marc Zao-Sanders, “How People Are Really Using Gen AI in 2025,” Harvard Business Review, Apr. 9, 2025.
5 “Pope Addresses Ethical Challenges of Technological Progress,” Vatican News, Sept. 27, 2019.
6 “On Artificial Intelligence and Emerging Technologies,” Southern Baptist Convention, June 15, 2023.
7 Fiona Jackson, “OpenAI Shifts Attention to Superintelligence in 2025,” Tech Republic, Jan. 6, 2025.
8 Ben Eisenpress, “Gradual AI Disempowerment,” The Future of Life Institute, Feb. 1, 2024.
9 Anthony Aguirre, “Uncontained AGI Would Replace Humanity,” AI Frontiers, Aug. 18, 2025.
10 Elika Somani et al., “Examining Risks and Response for AI Loss of Control Incidents,” RAND, July 30, 2025.
11 Sigal Samuel, “Silicon Valley’s Vision for AI? It’s Religion, Repackaged,” Vox, Sept. 7, 2023.
12 A.I., Mars and Immortality: Are We Dreaming Big Enough? Interesting Times With Ross Douthat, YouTube, June 26, 2025.
13 Kate Payne, “An AI Chatbot Pushed a Teen to Kill Himself, a Lawsuit Against Its Creator Alleges,” Associated Press, Oct. 25, 2024.
14 Jeff Horwitz, “Meta’s Flirty AI Chatbot Invited a Retiree to New York. He Never Made It Home,” Reuters, Aug. 14, 2025; Kashmir Hill, “A Teen Was Suicidal. ChatGPT Was the Friend He Confided In,” New York Times, Aug. 27, 2025.
15 Michael Toscano and Grant Bailey, “Americans Oppose the AI Regulation Moratorium by a 3-to-1 Margin,” Institute for Family Studies, June 25, 2025.
16 Gerrit W. Gong, “Faith, Ethics, and Human Dignity in an Age of Artificial Intelligence: A Call to Action,” The Church of Jesus Christ of Latter-day Saints Newsroom, July 29, 2025.
17 Agbonkhese Oboh, “Religious Bodies Should Be Involved in Ethical Aspect of AI—Coalition,” Vanguard, Apr. 9, 2025.
18 Anthony Aguirre, “Keep the Future Human: Why and How We Should Close the Gates to AGI and Superintelligence, and What We Should Build Instead,” SSRN, last updated Mar. 5, 2025.
The original version of this article was posted on Liberty magazine’s website.