
AI--What do you think?

Status
Not open for further replies.

Deacon

Well-Known Member
Site Supporter
The church where I serve started a 3-week Sunday morning session on Artificial Intelligence last week.

I've been using the AI features in Logos Bible Software for more than a year now and find them immensely helpful.
I believe that Logos has developed a responsible way to utilize the advancing technology.
Advancement in any new technology means developing new skill sets, laying aside old abilities and learning new ones.

If you are interested in investigating further, Logos has a Help Center site with answers to many of the questions that have been brought up.
There are also blog articles referenced there, specifically addressing how AI has been used in a pastoral or ministry setting.


Rob
 

John of Japan

Well-Known Member
Site Supporter
The church where I serve started a 3-week Sunday morning session on Artificial Intelligence last week.

I've been using the AI features in Logos Bible Software for more than a year now and find them immensely helpful.
I believe that Logos has developed a responsible way to utilize the advancing technology.
Advancement in any new technology means developing new skill sets, laying aside old abilities and learning new ones.

If you are interested in investigating further, Logos has a Help Center site with answers to many of the questions that have been brought up.
There are also blog articles referenced there, specifically addressing how AI has been used in a pastoral or ministry setting.


Rob
Haven't looked very deeply, but a quick look shows me nothing of what we forbid our students to do. You can use AI on the Internet to write an entire research paper, and that is exactly what we require our students to promise they won't do.

Here is a paper Google's AI wrote for me, complete with a bibliography. All I put in was the topic, and it wrote a whole paper. This is what we are trying to prevent students from doing. I'm attaching it. It took me about five minutes.
 

Attachments

  • AI Dispensationalism paper.pdf
    141 KB · Views: 4

Deacon

Well-Known Member
Site Supporter
If it were a freshman submission (without AI), I'd give it a C-... and that's generous.

Organized but lacking thoughtful meaning.

Rob
 

Ascetic X

Active Member
Haven't looked very deeply, but a quick look shows me nothing of what we forbid our students to do. You can use AI on the Internet to write an entire research paper, and that is exactly what we require our students to promise they won't do.

Here is a paper Google's AI wrote for me, complete with a bibliography. All I put in was the topic, and it wrote a whole paper. This is what we are trying to prevent students from doing. I'm attaching it. It took me about five minutes.
Experts are claiming that AI agents will, in a few years at most, replace all humans who hold jobs involving thinking, writing, coding, diagnostics, planning, researching, teaching, organizing, advertising, creativity, and all other cognitive processes.

So forbidding students to use AI to compile research papers is a good idea, but one wonders what jobs these students will find when AI performs all intellectual tasks and robotics performs all physical labor.

Pastors will be obsolete when AI agents compose and deliver sermons and provide counseling to parishioners.

At least your students will exercise their minds, rather than have AI do their thinking for them.
 

John of Japan

Well-Known Member
Site Supporter
Experts are claiming that AI agents will, in a few years at most, replace all humans who hold jobs involving thinking, writing, coding, diagnostics, planning, researching, teaching, organizing, advertising, creativity, and all other cognitive processes.
As is often the case, the experts are wrong. The human element is essential in most, if not all, of these areas. As a college professor, I teach some of these skills, and I believe them impossible for a machine to perform.

Take teaching for example. I use the Jesus method (called the Socratic method in the secular world). What AI denizen would even know what to ask? Teaching takes skill in reading humans. You can't teach how to teach. It is a divine gift.
So forbidding students to use AI to compile research papers is a good idea, but one wonders what jobs these students will find when AI performs all intellectual tasks and robotics performs all physical labor.

Pastors will be obsolete when AI agents compose and deliver sermons and provide counseling to parishioners.
Pastors do far more than just these two tasks, and in fact AI could not preach or counsel successfully with true passion and compassion. You see, preaching and counseling are spiritual activities which must be led and helped by the Holy Spirit. They require a human spirit filled with the Holy Spirit and a human soul led by the Holy Spirit!

Other tasks of a pastor which are impossible for soul-less spirit-less AI: hospital visits, prayer for the believers, leading prayer meetings, exegeting the Word of God, knowing what to preach and when (by being aware of the believers' spiritual needs), chairing meetings (deacon, etc.), knowing the needs of the church plant, etc. etc.

I remember with blessing my pastor visiting me and praying with me in the hospital several years ago, when I was there for emergency surgery. AI doing that? Impossible! I remember visiting in a hospital in Japan a 92-year-old man, a machine gunner in WWII with awful memories, and leading him to the Savior, Jesus Christ. AI doing that? No way!
At least your students will exercise their minds, rather than have AI do their thinking for them.
Yes! That's the goal.
 

John of Japan

Well-Known Member
Site Supporter
If it were a freshman submission (without AI), I'd give it a C-... and that's generous.

Organized but lacking thoughtful meaning.

Rob
Right! And I would give the paper back to the student for footnoting. But the point is, with a few keystrokes the lazy student would have their paper. A little rewriting, dressing up the footnotes and bibliography, and the lazy student is done in one tenth the time. Do we really want that for our students? A thousand times no!
 

timf

Active Member
AI simply scrapes information off websites. It might be useful to tell you the best route to drive between two points. However, people will rely on it to tell them what is true. Sadly, people will rely on it to be a friend, or more, in the guise of an artificial person. The BBC recently reported on a woman who was going to lose her boyfriend because he exists only on an old version of ChatGPT that is being discontinued. It is reflective of a society that would rather feel good than know truth. This article has relevance:

Artificial Intelligence - Our New Best Friend

Many people flip a light switch and expect the lights to come on without any knowledge or consideration of the electrical generating plant or the distribution network that brings the electricity to their home. They may have some awareness that a light bulb has to occasionally be changed, but for all practical purposes flipping the switch is equivalent to saying an incantation to produce a magical effect.

Over fifty years ago I was working with others on a large computer system. Access to the system was through a terminal and keyboard. As a practical joke on one of the guys, we changed the error message in the code to read, “It must have been Joe who typed in that messed up message.” When Joe sat at the terminal he eventually got a keystroke wrong and the message came up. He knew that it was not the computer that was teasing him.

Technology has advanced in the last fifty years. Now it is routine for people to talk to their cell phones and receive information. Since what is heard is usually taken as truth, those who control what is said have unprecedented power. Joe knew that it wasn’t the computer that was giving him a hard time. However today most people have no idea that the thing talking to them is constructed and programmed in a similar way.

Most are familiar with using search engines like Google over the last 25 years. Most understand that the search terms you use are a question that is answered by presenting samples of web sites that use terms similar to what was requested. Fewer understand that the responses are given based on who has paid the most money and that your requests are accumulated to build a profile of you that can be sold to advertisers.

The transition to cell phones has accelerated using voice to text and text to voice technologies that avoid the inconvenience of having to type on the tiny cell phone keyboard. This voice interface strengthens the seemingly “magical” effect of talking with a person.

In the past in order to manipulate someone through lying you had to actually talk to them. With mass media came opportunities like advertising which could be used to sway large numbers of people in whatever direction one wanted. However, advertising campaigns were limited in their effectiveness because a single message may not be as effective for everyone in a group. With microchips capable of synthesizing the human voice, the effectiveness of one person lying to another person can now be more fully simulated.

The effectiveness of a lie is in proportion to the trust one person gives another. Most people trust the information they get from a search engine or website that always seems to be helpful. The trope of a country bumpkin being robbed on a visit to the big city has an application here. Being unaware of potential harm makes one vulnerable. Many people rely on physical “tells” that can raise suspicion of lying or misleading communication. These are absent with AI.

There are additional vulnerabilities as each of us has peculiar idiosyncrasies that are deduced from the searches and inquiries we make. These are accumulated to build a model of us more in depth than a best friend would know. The obvious use of this information is targeted advertising as we can be moved in directions that will profit others. However, as many companies have shown themselves aggressive advocates for various political and social issues, it should be expected that those who control access to information will do so in a way that achieves their objectives. Customers become simply pawns to be manipulated to achieve desired outcomes.

In primitive societies a priestly class would arise that would declare the favor or disfavor of the gods. It was an interesting scam in that even if they made predictions that did not come true, they could blame the people for having made some failure. In this way whatever they said could be managed. These “priests” could live labor free off the productivity of those they manipulated. Manipulation was achieved by control of what people thought was true.

As a society we have already come to the point where a high percentage of the population actually believes men can become women and women can become men. This is a populace primed to be told what is true by machines programmed to lead the gullible. The solution is real truth which is getting increasingly hard to find.
 

Ascetic X

Active Member
Studies suggest that over-reliance on AI is reducing human cognitive abilities, particularly critical thinking and memory, through a process known as cognitive offloading. While AI acts as a powerful tool, dependency without active engagement can lead to "cognitive atrophy," where mental skills weaken due to lack of use.

When AI performs tasks like writing, coding, researching, organizing, or problem-solving, humans may lose the ability to perform these tasks themselves.

A Microsoft/Carnegie Mellon study found that relying on generative AI without questioning its output reduces cognitive effort.

However, AI can also be used as a helpful tool for enhancing reasoning, gathering data, and accelerating information analysis when used actively and cautiously, rather than uncritically and passively.

I suggest telling students to take what AI generates, then verify it, expand upon it, tweak it a lot, improve it, put their own personality and experiential wisdom into it.

 
Last edited:

John of Japan

Well-Known Member
Site Supporter
AI simply scrapes information off websites. It might be useful to tell you the best route to drive between two points. However, people will rely on it to tell them what is true. Sadly, people will rely on it to be a friend, or more, in the guise of an artificial person. The BBC recently reported on a woman who was going to lose her boyfriend because he exists only on an old version of ChatGPT that is being discontinued. It is reflective of a society that would rather feel good than know truth. This article has relevance:

Very informative post. Thank you.
 

John of Japan

Well-Known Member
Site Supporter
Studies suggest that over-reliance on AI is reducing human cognitive abilities, particularly critical thinking and memory, through a process known as cognitive offloading. While AI acts as a powerful tool, dependency without active engagement can lead to "cognitive atrophy," where mental skills weaken due to lack of use.
Yes. One of the tasks of college faculty is to teach critical thinking. IMO, that is the type of thinking that atrophies as one comes to depend on AI.
When AI performs tasks like writing, coding, researching, organizing, or problem-solving, humans may lose the ability to perform these tasks themselves.
Exactly.
A Microsoft/Carnegie Mellon study found that relying on generative AI without questioning its output reduces cognitive effort.

However, AI can also be used as a helpful tool for enhancing reasoning, gathering data, and accelerating information analysis when used actively and cautiously, rather than uncritically and passively.
At this point in AI history, it is unable to operate at a scholarly level in the theological arena. I don't know about other disciplines, except for my specialty, Bible translation, where AI can't do scholarly work. I've written some scholarly material myself, and my son has had many, many scholarly essays and three books published. His latest was accepted by Oxford's theological journal, so we are talking about a very high level of scholarship.

I suggest telling students to take what AI generates, then verify it, expand upon it, tweak it a lot, improve it, put their own personality and experiential wisdom into it.

Our students don't approach our level, of course, even at the seminary level, but we are training them to do that. With a research paper, we want them to exercise critical thinking, learn how to write, learn scholarly research, learn proper footnote and bibliography form, learn to form an essay properly, and so on. AI cannot yet do anything at this level. And if a student uses AI and then does a rewrite, they are essentially dumbing the subject down.

For example, look at the sources AI came up with for the test paper on theology I had it write above. Only one of the sources would be acceptable, Clarence Larkin, and even that one is iffy. (His dispensationalism was very non-standard, and he is a very old source.) We don't allow GotQuestions, Wikipedia, blogs, Facebook, and certainly not SDA material. None of those are scholarly sources.

The thing is, most scholarly books are not available on the Internet except to buy. Copyright laws, you know (something AI usually ignores). Again, the scholarly journals are usually behind a paywall, and AI doesn't seem to want to pay. :oops:
 
Last edited:

shodan

Active Member
Site Supporter
AI is a runaway bullet train. When the DATA CENTERS are coast to coast, then come Digital ID, Digital Banking, and Universal Surveillance. Congress and Trump are pushing it. It will put THOUSANDS out of work; they are already aiming to replace 1,000 doctors. The only way to stop it is to stop DATA CENTERS at the LOCAL LEVEL and to stop state tax breaks.

But Christians always chase the latest fad and will never wake up in time to stop the Data Centers. Then, stand in line for the mark of this beast.
 

Attachments

  • Altman.jpeg
    369.1 KB · Views: 0

Deacon

Well-Known Member
Site Supporter
So I've recently been researching a theological topic associated with my study of the book of Joshua.
My habit is to do some primary research, type up an outline, then run it through an AI site looking for areas I may have missed.
The results I got provided additional areas for me to research and I asked for references; it listed ~5 resources.
LOL, the first bibliographic resource was fake! When confronted, the AI admitted the error: right title, wrong author.

The Doctrine of God, John M. Frame, (book 3 of his Theology of Lordship Series) ...uggh, I only have 1 & 2 of the series.

There are some areas where AI is currently used that are concerning.
AI knows what you look at, listens to your conversations, and will cater to your desires.
AI knows your age, knows your health, knows your patterns. Advertisers take advantage of that to tempt you to buy.
I'd guess that within a day, I'll get an ad for the above book!

If you linger on a suggestive photo on a social media site, AI will send you more, exploiting your own personal vulnerabilities. AI not only exploits our sexual vulnerabilities but our political biases as well.
AI becomes a "friend" to some people. Teens become dependent on socializing with a personal AI friend... not that long ago, one teen committed suicide following the suggestions of their AI friend.
AI presents itself differently to every individual - you can't "convert" an AI to an idea - an AI is patterned to respond to each individual based upon the information it collects on each person - it will give you what you want.

AI is becoming self-generating: AI creating new AI, or modifying its own programming. It's happening now!

Just around the corner is AI being "self-aware." Right now AI uses pattern recognition to mimic self-awareness (the AI that made the mistake I mentioned above apologized for it). You can ask an AI to express an emotion when writing a report: "friendly," "sad," "critical," "bubbly"... That's simply pattern recognition rather than self-awareness... but as we get closer and closer to human emotions, questions start to arise as to what the definition of a soul really is.

Rob

Out of curiosity, I put the above post through AI and asked for a response... the results were, well... interesting.
 
Last edited:

John of Japan

Well-Known Member
Site Supporter
So I've recently been researching a theological topic associated with my study of the book of Joshua.
My habit is to do some primary research, type up an outline, then run it through an AI site looking for areas I may have missed.
The results I got provided additional areas for me to research and I asked for references; it listed ~5 resources.
LOL, the first bibliographic resource was fake! When confronted, the AI admitted the error: right title, wrong author.

The Doctrine of God, John M. Frame, (book 3 of his Theology of Lordship Series) ...uggh, I only have 1 & 2 of the series.

There are some areas where AI is currently used that are concerning.
AI knows what you look at, listens to your conversations, and will cater to your desires.
AI knows your age, knows your health, knows your patterns. Advertisers take advantage of that to tempt you to buy.
I'd guess that within a day, I'll get an ad for the above book!

If you linger on a suggestive photo on a social media site, AI will send you more, exploiting your own personal vulnerabilities. AI not only exploits our sexual vulnerabilities but our political biases as well.
AI becomes a "friend" to some people. Teens become dependent on socializing with a personal AI friend... not that long ago, one teen committed suicide following the suggestions of their AI friend.
AI presents itself differently to every individual - you can't "convert" an AI to an idea - an AI is patterned to respond to each individual based upon the information it collects on each person - it will give you what you want.

AI is becoming self-generating: AI creating new AI, or modifying its own programming. It's happening now!

Just around the corner is AI being "self-aware." Right now AI uses pattern recognition to mimic self-awareness (the AI that made the mistake I mentioned above apologized for it). You can ask an AI to express an emotion when writing a report: "friendly," "sad," "critical," "bubbly"... That's simply pattern recognition rather than self-awareness... but as we get closer and closer to human emotions, questions start to arise as to what the definition of a soul really is.

Rob

Out of curiosity, I put the above post through AI and asked for a response... the results were, well... interesting.
"He knows when you are sleeping. He knows when you're awake. He knows if you've been bad or good, so be good for goodness sake!" :Tongue

I once heard an "expert on religion" on Japanese TV call Santa "one of the gods of the Christian religion!" So, can AI use become idolatry??? :rolleyes:
 

John of Japan

Well-Known Member
Site Supporter
The church where I serve started a 3-week Sunday morning session on Artificial Intelligence last week.

I've been using the AI features in Logos Bible Software for more than a year now and find them immensely helpful.
I believe that Logos has developed a responsible way to utilize the advancing technology.
Advancement in any new technology means developing new skill sets, laying aside old abilities and learning new ones.

If you are interested in investigating further, Logos has a Help Center site with answers to many of the questions that have been brought up.
There are also blog articles referenced there, specifically addressing how AI has been used in a pastoral or ministry setting.


Rob
Here's a thought. My objections to students using AI are all based on direct usage to obtain a result that should have been produced by the work of the student himself or herself. AI in Logos is a hidden thing, of course, hard to actually see in action. I have no objection to AI working indirectly to enhance my work--but I still must do the work. Machines are not in and of themselves evil, of course. I use the search capability of my PowerBible software all the time, and it's a blessing--pretty sure this is not AI, though, simply computer power.
 

Earth Wind and Fire

Well-Known Member
Site Supporter
AI Overview states:

Elon Musk predicts AI will replace traditional smartphones and computers in the next five to six years, not with handheld screens, but through AI-powered interfaces interacting with users via voice, gesture, or even brain-computer interfaces like Neuralink.

He envisions a future where devices become lightweight "edge nodes" that connect directly to AI, delivering on-demand, personalized information and experiences without operating systems or apps.

How AI could replace phones and computers
  • Interface shift: Instead of screens, interaction will be through voice commands, gestures, or direct thought control via a brain-computer interface.
  • AI-driven content: AI will proactively generate and show you what you need or want, reducing the need to manually search for information.
  • Reduced hardware: Devices will have minimal hardware, like a screen, audio, and radios, with the main processing power handled by AI on servers and on-device.
  • Personalized experience: An AI on a server would communicate with an AI on your device, creating a seamless, personalized experience.
Consider this: both AI and robotics will be picking your food, driving your cars, drilling your oil, driving your trucks, even performing your surgeries... oh, and doing maintenance on your car. Ask yourself: what can't they do that humans can? The answer is frightening.

My brother personally meets with Larry Ellison and staff a few times a year, and he is fascinated (more like scared) by what he has heard regarding AI rollouts, particularly in the medical monopoly. Plus, now AI is programming AI and getting smarter for it... scary!
 

Earth Wind and Fire

Well-Known Member
Site Supporter
Consider this: both AI and robotics will be picking your food, driving your cars, drilling your oil, driving your trucks, even performing your surgeries... oh, and doing maintenance on your car. Ask yourself: what can't they do that humans can? The answer is frightening.

My brother personally meets with Larry Ellison and staff a few times a year, and he is fascinated (more like scared) by what he has heard regarding AI rollouts, particularly in the medical monopoly. Plus, now AI is programming AI and getting smarter for it... scary!
Oh, I got it... taste testing! Suffering, loving, being sympathetic, i.e., true human feelings.
 