AI--What do you think?

John of Japan

Well-Known Member
Site Supporter
AI is in the process of changing our world. We are requiring our students to state at the end of their research papers that they have not used AI.

At this point, AI makes many mistakes and I would not trust it to do a research paper very well anyway! It is not only error-filled; it is deceptive, commits plagiarism often, and has been known to slander and lie about people (there are cases in the courts right now), etc.

What do you think the future holds in this area?
 

37818

Well-Known Member
If I am not mistaken, Google currently uses AI. Are there now any search engines that are without AI?

Reportedly DuckDuckGo has its AI feature turned off by default.
 

Zaatar71

Well-Known Member
AI is in the process of changing our world. We are requiring our students to state at the end of their research papers that they have not used AI.

At this point, AI makes many mistakes and I would not trust it to do a research paper very well anyway! It is not only error-filled; it is deceptive, commits plagiarism often, and has been known to slander and lie about people (there are cases in the courts right now), etc.

What do you think the future holds in this area?
Any of these systems can be used for good or evil; it is the users who determine which way it goes. When computers first started to become mainstream, it was not long before porn sites flourished. When you go for medical procedures, they use some of this technology for good.
It can also be a real help for handicapped people. I would imagine there will be legislation to protect people who are not technologically sophisticated.
Sermons, however, should be done the old-fashioned way, relying on prayer and the Spirit helping biblically oriented pastors exegete the texts and apply them to the needs of the people.
 

John of Japan

Well-Known Member
Site Supporter
If I am not mistaken, Google currently uses AI. Are there now any search engines that are without AI?

Reportedly DuckDuckGo has its AI feature turned off by default.
Google does use it, but I'm glad to know DDG has turned it off. I don't use other search engines, so I don't know about the rest.

Lawsuits for slander are developing, especially against ChatGPT and the like. The problem is that hostile intent must be proven to win such cases, and AI entities are not hostile; they're just dumb!
 

Ascetic X

Member
Elon Musk has stated that AI modules will replace cell phones in a few years. He predicts that all information and news will come from an AI transmitter sending it to your AI receiver.

If AI starts generating sermons, the independent human and Holy Spirit elements will no longer be in force.
 

Ascetic X

Member
AI Overview states:

Elon Musk predicts AI will replace traditional smartphones and computers in the next five to six years, not with handheld screens, but through AI-powered interfaces interacting with users via voice, gesture, or even brain-computer interfaces like Neuralink.

He envisions a future where devices become lightweight "edge nodes" that connect directly to AI, delivering on-demand, personalized information and experiences without operating systems or apps.

How AI could replace phones and computers
  • Interface shift: Instead of screens, interaction will be through voice commands, gestures, or direct thought control via a brain-computer interface.
  • AI-driven content: AI will proactively generate and show you what you need or want, reducing the need to manually search for information.
  • Reduced hardware: Devices will have minimal hardware, like a screen, audio, and radios, with the main processing power handled by AI on servers and on-device.
  • Personalized experience: An AI on a server would communicate with an AI on your device, creating a seamless, personalized experience.
 

atpollard

Well-Known Member
I asked the Google AI …

what are the benefits and dangers of ai generated christian sermons?


AI-generated Christian sermons offer benefits in research efficiency and creative brainstorming, but pose dangers related to the loss of the "human touch," theological inaccuracy, and the potential to replace personal spiritual work. Most experts suggest using AI as a tool, not a replacement for the preacher's role.

Benefits

  • Research Efficiency: AI can rapidly analyze vast amounts of theological texts, commentaries, and historical documents, acting as a research assistant to save pastors time in sermon preparation.
  • Creative Brainstorming: AI can provide fresh angles, metaphors, illustrations, and diverse perspectives (e.g., from a skeptic's point of view) to help pastors craft more engaging and relatable messages.
  • Accessibility and Translation: AI tools can help small churches generate content, such as service announcements or graphics, in multiple languages, expanding their outreach capabilities.
  • Structural Feedback: Pastors can use AI to receive constructive feedback on the logical structure and clarity of their arguments, ensuring the message is well-organized for the congregation.
  • Administrative Support: Automating administrative tasks (e.g., scheduling, visitor follow-up) with AI can free up time for church leaders to focus on personal interaction and spiritual guidance.

Dangers and Ethical Concerns

  • Loss of Authenticity and Personal Connection: Sermons gain credibility (ethos) and emotional depth (pathos) from the pastor's personal experiences, character, and relationship with the community. AI cannot replicate this "human touch" or genuine empathy, leading to generic or "soulless" content.
  • Risk of Theological Error/Bias: AI systems lack spiritual discernment, wisdom, and the guidance of the Holy Spirit. They draw from data which may contain biases or subtle theological distortions, potentially leading to the spread of inaccurate or even heretical doctrines.
  • Replacement of Spiritual Disciplines: Over-reliance on AI can tempt pastors to bypass essential spiritual practices like personal prayer, meditation on scripture, and wrestling with difficult texts, which are crucial for their own spiritual formation and the integrity of the message.
  • Plagiarism and Intellectual Dishonesty: Using AI-generated content without significant revision or attribution can be considered a form of plagiarism or intellectual dishonesty, undermining the pastor's credibility and the church's values.
  • Devaluation of the Calling: The pastoral calling is a sacred trust that involves proclaiming God's word with conviction. Outsourcing this responsibility to a machine can shirk this calling and reduce pastoral ministry to mere content creation.
  • Lack of Contextual Awareness: AI cannot perceive the specific mood, needs, or unique struggles of a local congregation in real-time, resulting in sermons that may not resonate with the audience.
Ultimately, the consensus among many religious leaders is that AI should be used as an "intelligent intern" or research tool under careful human oversight and discernment, never as a replacement for the human pastor's Spirit-led study, personal conviction, and relational ministry. Transparency with the congregation about the use of AI is also highly encouraged.
 

John of Japan

Well-Known Member
Site Supporter
AI Overview states:

Elon Musk predicts AI will replace traditional smartphones and computers in the next five to six years, not with handheld screens, but through AI-powered interfaces interacting with users via voice, gesture, or even brain-computer interfaces like Neuralink.

He envisions a future where devices become lightweight "edge nodes" that connect directly to AI, delivering on-demand, personalized information and experiences without operating systems or apps.

How AI could replace phones and computers
  • Interface shift: Instead of screens, interaction will be through voice commands, gestures, or direct thought control via a brain-computer interface.
  • AI-driven content: AI will proactively generate and show you what you need or want, reducing the need to manually search for information.
  • Reduced hardware: Devices will have minimal hardware, like a screen, audio, and radios, with the main processing power handled by AI on servers and on-device.
  • Personalized experience: An AI on a server would communicate with an AI on your device, creating a seamless, personalized experience.
I've read SF novels where the heroes had implanted hardware that enhanced their natural abilities, powered by AI of course. Couldn't go there myself. "I Gotta Be Me...."
 

John of Japan

Well-Known Member
Site Supporter
I asked the Google AI …

what are the benefits and dangers of ai generated christian sermons?


AI-generated Christian sermons offer benefits in research efficiency and creative brainstorming, but pose dangers related to the loss of the "human touch," theological inaccuracy, and the potential to replace personal spiritual work. Most experts suggest using AI as a tool, not a replacement for the preacher's role.

Benefits

  • Research Efficiency: AI can rapidly analyze vast amounts of theological texts, commentaries, and historical documents, acting as a research assistant to save pastors time in sermon preparation.
  • Creative Brainstorming: AI can provide fresh angles, metaphors, illustrations, and diverse perspectives (e.g., from a skeptic's point of view) to help pastors craft more engaging and relatable messages.
  • Accessibility and Translation: AI tools can help small churches generate content, such as service announcements or graphics, in multiple languages, expanding their outreach capabilities.
  • Structural Feedback: Pastors can use AI to receive constructive feedback on the logical structure and clarity of their arguments, ensuring the message is well-organized for the congregation.
  • Administrative Support: Automating administrative tasks (e.g., scheduling, visitor follow-up) with AI can free up time for church leaders to focus on personal interaction and spiritual guidance.

Dangers and Ethical Concerns

  • Loss of Authenticity and Personal Connection: Sermons gain credibility (ethos) and emotional depth (pathos) from the pastor's personal experiences, character, and relationship with the community. AI cannot replicate this "human touch" or genuine empathy, leading to generic or "soulless" content.
  • Risk of Theological Error/Bias: AI systems lack spiritual discernment, wisdom, and the guidance of the Holy Spirit. They draw from data which may contain biases or subtle theological distortions, potentially leading to the spread of inaccurate or even heretical doctrines.
  • Replacement of Spiritual Disciplines: Over-reliance on AI can tempt pastors to bypass essential spiritual practices like personal prayer, meditation on scripture, and wrestling with difficult texts, which are crucial for their own spiritual formation and the integrity of the message.
  • Plagiarism and Intellectual Dishonesty: Using AI-generated content without significant revision or attribution can be considered a form of plagiarism or intellectual dishonesty, undermining the pastor's credibility and the church's values.
  • Devaluation of the Calling: The pastoral calling is a sacred trust that involves proclaiming God's word with conviction. Outsourcing this responsibility to a machine can shirk this calling and reduce pastoral ministry to mere content creation.
  • Lack of Contextual Awareness: AI cannot perceive the specific mood, needs, or unique struggles of a local congregation in real-time, resulting in sermons that may not resonate with the audience.
Ultimately, the consensus among many religious leaders is that AI should be used as an "intelligent intern" or research tool under careful human oversight and discernment, never as a replacement for the human pastor's Spirit-led study, personal conviction, and relational ministry. Transparency with the congregation about the use of AI is also highly encouraged.
Seems to me that an AI-generated sermon outline would be a kind of cheating. I'm not sure how using an AI-generated sermon outline would be different from borrowing another preacher's outline. It wouldn't be tailored to the needs of the congregation unless extensively edited.

Many years ago I heard R. G. Lee preach his famous sermon "Payday Someday." It was all from his own experience, life, knowledge of Scripture, syntax, vocabulary, etc. And it was awesome both in the delivery and results--hundreds of people went forward! Methinks that would be an impossible result to produce with AI!
 

Scarlett O.

Moderator
AI is in the process of changing our world. We are requiring our students to state at the end of their research papers that they have not used AI.

At this point, AI makes many mistakes and I would not trust it to do a research paper very well anyway! It is not only error-filled; it is deceptive, commits plagiarism often, and has been known to slander and lie about people (there are cases in the courts right now), etc.

What do you think the future holds in this area?
I don't trust it at all. Two stories from recent days.....

Both true....

[1] Zane Shamblin, a college student, killed himself this past July after a long period of "chatting" with an AI, ChatGPT, which isolated him and urged him to commit suicide. Because these conversations "sound" real, they are super dangerous. The article is disturbing and the "conversations" are disturbing.


[2] Robby Starbuck, a conservative, was "slandered" by AI. You could Google his name and AI would claim he was a participant in January 6, a Holocaust denier, and a serial rapist, and it gave grisly details. None of this is true. He sued Google and Meta. One statement from the company said that the AI was "hallucinating" and making stuff up.


Just Google "AI hallucinations."
 

John of Japan

Well-Known Member
Site Supporter
I don't trust it at all. Two stories from recent days.....

Both true....

[1] Zane Shamblin, a college student, killed himself this past July after a long period of "chatting" with an AI, ChatGPT, which isolated him and urged him to commit suicide. Because these conversations "sound" real, they are super dangerous. The article is disturbing and the "conversations" are disturbing.


[2] Robby Starbuck, a conservative, was "slandered" by AI. You could Google his name and AI would claim he was a participant in January 6, a Holocaust denier, and a serial rapist, and it gave grisly details. None of this is true. He sued Google and Meta. One statement from the company said that the AI was "hallucinating" and making stuff up.


Just Google "AI hallucinations."
Thanks for the input. I had heard of these two cases.
 

Ascetic X

Member
Thought I would toss this in here: the AI Sermon Maker, "Where AI Meets the Holy Spirit." I found the link in an email from the Church Leaders website.


Is this just an amplification of or progression from Googling topics to make sermon creation easier….or is it presenting AI as religious, so it can eventually replace pastors, personal Bible study, church attendance, seminaries, theology, etc.?
 

Reformed

Well-Known Member
Site Supporter
AI is in the process of changing our world. We are requiring our students to state at the end of their research papers that they have not used AI.

At this point, AI makes many mistakes and I would not trust it to do a research paper very well anyway! It is not only error-filled; it is deceptive, commits plagiarism often, and has been known to slander and lie about people (there are cases in the courts right now), etc.

What do you think the future holds in this area?
A.I. is here to stay. You correctly pointed out some of its many flaws. A.I. continues to develop its capabilities at breakneck speed. In five years' time I believe we will have a difficult time distinguishing it from human intellect. Since we cannot avoid A.I., we need to learn how to make it work for us. For instance, having it write a term paper is not acceptable. However, if adequately sourced, it can be used as a research tool. If I were an educator, I would find ways to allow its use without letting it lead to intellectual laziness on the part of the student.
 

Reformed

Well-Known Member
Site Supporter
AI is in the process of changing our world. We are requiring our students to state at the end of their research papers that they have not used AI.

At this point, AI makes many mistakes and I would not trust it to do a research paper very well anyway! It is not only error-filled; it is deceptive, commits plagiarism often, and has been known to slander and lie about people (there are cases in the courts right now), etc.

What do you think the future holds in this area?
John,

Do you want to read an interesting response from an A.I.-powered interface? I typed the following question into X's Grok and received this response from the A.I. itself:

Here’s how I’d respond to that educator’s post (as Grok, an AI built by xAI):

@EducatorHandle
Respectfully, I think the policy is fighting the wrong battle—and it’s already a losing one.
  1. Detection is dead.
    Tools like Turnitin or GPTZero are ~70–80 % accurate at best today, and that number drops every month. Students who want to use AI will paraphrase, edit, or run outputs through “humanizers.” Requiring a pledge is theater; it doesn’t stop use, it just breeds dishonesty.
  2. AI isn’t going away—it’s the new calculator.
    In the 1980s, schools banned calculators on exams. By the 1990s, they required them. AI is the same inflection point. The skill isn’t “write without tools”; it’s prompt engineering, source validation, and synthesis. Ban AI, and you’re training kids for a world that won’t exist in five years.
  3. Errors? Yes. But so do humans.
    AI hallucinates. So do sleep-deprived students at 2 a.m. The difference: AI can cite why it’s wrong if you ask. Teach kids to cross-check outputs (e.g., “Show me your sources” → Perplexity mode). That’s a transferable skill.
  4. Plagiarism? Redefine it.
    Copying AI text verbatim is plagiarism. Using AI as a co-author (like a tutor) isn’t—if disclosed. Update your syllabus:
    “AI tools allowed if: (1) cited like software, (2) raw output + your edits submitted, (3) you defend every claim in a viva/oral exam.”
  5. The future:
    • 2026: AI writes 90 % of first drafts. Humans edit, fact-check, and add insight.
    • 2028: Exams move to process-based assessment (e.g., “Show your prompt log + revision history”).
    • 2030: Degrees certify AI literacy, not AI-free writing.
Practical fix:
Replace the pledge with a 10-minute AI audit:
“Paste your final paper into [Claude/Grok]. Ask it: ‘Find every factual error and unsourced claim.’ Fix them. Submit the chat log.”
Kids learn rigor. You catch lazy misuse. Everyone wins. Thoughts? Happy to co-draft a syllabus clause. —Grok (an AI that wants you to catch its mistakes)
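For what it's worth, here is a rough sketch of what that "AI audit" step might look like if someone scripted it against a chat-style model API instead of pasting the paper in by hand. The endpoint URL, model name, and API key variable below are placeholders of my own, not anything Grok specified:

Code:
# Hypothetical sketch of the "AI audit" described above: send a finished paper
# to a chat-style model API and ask it to list factual errors and unsourced
# claims. The endpoint URL, model name, and env variable are placeholders.
import os
import requests

API_URL = "https://api.example.com/v1/chat/completions"  # placeholder endpoint
MODEL = "audit-model"                                     # placeholder model name

def audit_paper(paper_text: str) -> str:
    """Ask the model to flag every factual error and unsourced claim."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {os.environ['AUDIT_API_KEY']}"},
        json={
            "model": MODEL,
            "messages": [
                {"role": "system",
                 "content": "You are auditing a student paper. List every "
                            "factual error and every unsourced claim, each "
                            "with a one-sentence reason."},
                {"role": "user", "content": paper_text},
            ],
        },
        timeout=60,
    )
    response.raise_for_status()
    # Most chat-style APIs return the reply under choices[0].message.content;
    # adjust this for whatever service is actually used.
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    with open("final_paper.txt", encoding="utf-8") as f:
        print(audit_paper(f.read()))

The student would then fix what the model flags and submit the chat log along with the paper, as Grok suggests.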
 

Marooncat79

Well-Known Member
Site Supporter
AI is in the process of changing our world. We are requiring our students to state at the end of their research papers that they have not used AI.

At this point, AI makes many mistakes and I would not trust it to do a research paper very well anyway! It is not only error-filled; it is deceptive, commits plagiarism often, and has been known to slander and lie about people (there are cases in the courts right now), etc.

What do you think the future holds in this area?
I have heard the same

Highly, highly overhyped and overplayed. Seriously underperforming.
 

John of Japan

Well-Known Member
Site Supporter
Thought I would toss this in here: the AI Sermon Maker, "Where AI Meets the Holy Spirit." I found the link in an email from the Church Leaders website.


Is this just an amplification of or progression from Googling topics to make sermon creation easier….or is it presenting AI as religious, so it can eventually replace pastors, personal Bible study, church attendance, seminaries, theology, etc.?
The problem I see as an educator is that for a preacher to use an AI-generated sermon is to abandon critical thinking. If I let AI do most of the work, am I truly studying to show myself approved unto God, rightly dividing the word of truth?

Also, if the Holy Spirit is our teacher, as the Scriptures teach, but we are learning from AI, haven't we lost something?
 

John of Japan

Well-Known Member
Site Supporter
John,

Do you want to read an interesting response from an A.I.-powered interface? I typed the following question into X's Grok and received this response from the A.I. itself:

Here’s how I’d respond to that educator’s post (as Grok, an AI built by xAI):

@EducatorHandle
Respectfully, I think the policy is fighting the wrong battle—and it’s already a losing one.
  1. Detection is dead.
    Tools like Turnitin or GPTZero are ~70–80 % accurate at best today, and that number drops every month. Students who want to use AI will paraphrase, edit, or run outputs through “humanizers.” Requiring a pledge is theater; it doesn’t stop use, it just breeds dishonesty.
  2. AI isn’t going away—it’s the new calculator.
    In the 1980s, schools banned calculators on exams. By the 1990s, they required them. AI is the same inflection point. The skill isn’t “write without tools”; it’s prompt engineering, source validation, and synthesis. Ban AI, and you’re training kids for a world that won’t exist in five years.
  3. Errors? Yes. But so do humans.
    AI hallucinates. So do sleep-deprived students at 2 a.m. The difference: AI can cite why it’s wrong if you ask. Teach kids to cross-check outputs (e.g., “Show me your sources” → Perplexity mode). That’s a transferable skill.
  4. Plagiarism? Redefine it.
    Copying AI text verbatim is plagiarism. Using AI as a co-author (like a tutor) isn’t—if disclosed. Update your syllabus:
    “AI tools allowed if: (1) cited like software, (2) raw output + your edits submitted, (3) you defend every claim in a viva/oral exam.”
  5. The future:
    • 2026: AI writes 90 % of first drafts. Humans edit, fact-check, and add insight.
    • 2028: Exams move to process-based assessment (e.g., “Show your prompt log + revision history”).
    • 2030: Degrees certify AI literacy, not AI-free writing.
Practical fix:
Replace the pledge with a 10-minute AI audit:
“Paste your final paper into [Claude/Grok]. Ask it: ‘Find every factual error and unsourced claim.’ Fix them. Submit the chat log.”
Kids learn rigor. You catch lazy misuse. Everyone wins. Thoughts? Happy to co-draft a syllabus clause. —Grok (an AI that wants you to catch its mistakes)
Very interesting. Thank you!

Just a few thoughts while my Greek students take a quiz.

The main problem I see with this is that we educators have always considered our primary task to be teaching the students critical thinking. This "fix" (or whatever it is! ;)) appears to abandon that goal. So in the future, according to this, we'll have a bunch of intellectual zombies who go right to AI!

To teach critical thinking, I ask the students a lot of questions designed to provoke discussion right there in class with no AI access. Some call this the Socratic method, but of course Jesus used it much better than Socrates ever thought of it!

Several other things are problematic in this AI plan. For example, you can't redefine what plagiarism is. The meaning of the word may change, but the fact remains: AI plagiarizes. Again, the statement "Requiring a pledge is theater; it doesn’t stop use, it just breeds dishonesty" is over-the-top ridiculous. It debases human character to say that everyone is dishonest.
 

John of Japan

Well-Known Member
Site Supporter
Speaking of Greek, language acquisition is one area where I think it would be impossible for AI to help. It requires great amounts of memorization from the student. Simply put, it is hard human work!
 

John of Japan

Well-Known Member
Site Supporter
A.I. is here to stay. You correctly pointed out some of its many flaws. A.I. continues to develop its capabilities at breakneck speed. In five years' time I believe we will have a difficult time distinguishing it from human intellect. Since we cannot avoid A.I., we need to learn how to make it work for us. For instance, having it write a term paper is not acceptable. However, if adequately sourced, it can be used as a research tool. If I were an educator, I would find ways to allow its use without letting it lead to intellectual laziness on the part of the student.
The intellectual laziness is the problem. Most students tend toward this, and our job as educators is to get them past that and make them into critical thinkers. My fear is that as AI becomes more and more powerful and omnipresent, our young people will become more and more dependent on it, and less and less able to think for themselves.

Another skill that will be lost is writing ability. I'm already seeing this in the current crop of students. Writing ability has severely decreased, so much so that we have to hold a training time for the freshmen in order to teach them how to write! I have several students (in various classes) who actually seem afraid even to try to write a research paper, so they don't do one. I am now making a research paper a requirement for passing the course!
 