A new study suggests that replacing human clergy with AI preachers undermines religious commitment among followers.
As artificial intelligence expands across occupations, AI-generated sermons and robot clergy offer new means of ministry, but they may erode credibility and reduce donations for religious groups that rely on them, the study found.
The study, published on July 24 in the Journal of Experimental Psychology: General, had participants listen to sermons delivered by robots and by humans at Buddhist and Taoist temples in Japan and Singapore; a third group of US-based Christians also evaluated AI- versus human-generated sermons online.
Background: the Rise of Robot Clergy
As artificial intelligence and automation continue to expand into realms once reserved for humans, from manufacturing and finance to medicine and journalism, a pressing question arises: is there anything robots cannot do?
Robots are increasingly taking on religious duties, like Mindar, a Buddhist robot that has been giving sermons in Japan since 2019.
And an experimental church service utilizing AI was recently held in Germany, attracting over 300 attendees.
The 40-minute sermon delivered at the service was generated by ChatGPT.
These new forms of ministry are filling a widening gap, as fewer people choose to join the clergy.
In the US, for example, the number of priests dropped about 38% from 1970 to 2016.
Lack of Credibility
But in domains involving the transmission of ideas and beliefs, success depends as much on the credibility of the source as the content itself.
This is especially relevant in religion, where clergy and other leaders have long served as cultural models who both embody and legitimize faith.
Religion presents a unique test of whether automation can emulate human credibility.
Theories suggest that religious elites’ credibility enabled institutions to maintain high commitment over time.
So while robots may be technically capable of performing priestly duties, are AI clergy viewed as credible replacements?
The Current Research
This study on the effects of AI versus human clergy involved three parts: two field experiments conducted in religious settings and one online experiment.
In Study 1, 422 participants at a Buddhist temple in Kyoto, Japan watched either a 25-minute sermon delivered by the robot preacher Mindar or the same sermon delivered by a human preacher.
They then completed a survey which included questions on how much money they would be willing to donate.
The sermon was created by a philosopher and theologian who had provided ChatGPT with prompts for prayers, psalms, and other elements to include.
Avatars displayed on a screen above the altar spoke the AI-written sermon, which covered topics such as moving beyond fear and retaining faith.
Analysis found that 68% of participants in the robot preacher group donated versus 80% in the human preacher group.
And the average credibility rating on a 1-5 scale was 3.12 for the robot versus 3.51 for the human.
Study 2 randomly assigned 239 participants at a Taoist temple in Singapore to hear the identical sermon from either a robot or a human preacher.
Measures included donations, willingness to distribute temple flyers, and willingness to share the sermon message.
Those who heard the robot donated less, and were less willing to distribute flyers or share the message.
Finally, Study 3 sampled 274 Christian participants in the US who were recruited via Amazon’s Mechanical Turk; their average age was 44, and about 55% were women.
Participants were asked to evaluate a sermon; half were told the sermon was written by a human, and the other half were told it was generated by an AI. In fact, the sermon text was identical in both conditions.
The study gauged participants’ religious commitment via a questionnaire, and also measured the likeability and charisma of the “author” of the sermon.
The AI was found to have considerably less credibility (with an average score of 4.4 versus 7.6 for the human, on a 1-10 scale), and was also much less likeable (4.1 vs 7.7).
Less Sacrifice Means Less Skin in the Game
According to research on cultural evolution, humans readily adopt the beliefs and behaviors demonstrated by credible role models.
This tendency is especially pronounced in religion, where clergy serve as cultural models embodying and legitimizing faith tenets.
Prominent theories suggest that religious leaders’ displays of commitment, like sacrificing for their faith, increase their credibility and followers’ adherence.
However, past research was not able to experimentally manipulate religious leaders’ credibility.
Robot and AI clergy, on the other hand, allow for such manipulation.
AI Clergy May Prompt Rethinking of Religion’s Future
Robots may indeed be able to capably transmit religion’s content, but they lack credibility.
People perceive robots as having less capacity for understanding and emotion – less “mind” – than humans.
And “mind” is key to credibility.
Robots cannot authentically believe or feel faith’s costs.
Unlike human clergy, who sacrifice for their religion, robots and AI simply speak about matters of faith, without any commitment.
The research provides new evidence that artificial intelligence may face hard limits when entering roles that, like religious leadership, require credibility, commitment, and "mind."
Even as automation spreads, people may doubt the sincerity of robotic clergy and reduce their adherence to faiths that regularly rely on such automation.
These findings suggest that AI cannot simply emulate human credibility, especially in religion, which hinges on devotees believing that their leaders hold deep convictions.
More research is needed, but the studies indicate automation may instigate declines in religious membership if technology replaces clergy.
- Title: “Exposure to Robot Preachers Undermines Religious Commitment.”
- It was published on July 24, 2023, in the Journal of Experimental Psychology: General.
- The lead author of the study was Joshua Conrad Jackson, from the Management and Organizations Department at the Kellogg School of Management of Northwestern University.
- Co-authors included Kai Chi Yam of the Department of Management and Organization at the National University of Singapore; Pok Man Tang from the Department of Management at the Terry School of Business, University of Georgia; Ting Liu of the Department of Business Administration at the Graduate School of Management, Kyoto University; and Azim Shariff from the Department of Psychology at the University of British Columbia.