Re-Considering AI: Friend, Foe, or Force?

May 13, 2026

Host: Hon. Sam Rohrer

Guest: Patricia Engler

Note: This transcript is taken from a Stand in the Gap Today program aired on 5/13/26. To listen to the podcast, click HERE.

Disclaimer: While reasonable efforts have been made to provide an accurate transcription, the following is a representation of a mechanical transcription and as such, may not be a word for word transcript. Please listen to the audio version for any questions concerning the following dialogue.

Sam Rohrer:

Hello and welcome to this Wednesday edition of Stand in the Gap Today. And it’s also our monthly focus on apologetics, biblical worldview and creation. Now, my returning guest today is Patricia Engler. She’s a Christian apologist and speaker for Answers in Genesis. She’s the host of her own podcast, Zero Compromise. And she’s already an author, even at her younger age, definitely younger than me, of several books. One of them is Prepare to Thrive: A Survival Guide for Christian Students, and another is Modern Marxism: A Guide for Christians in a Woke New World. Now, today, literally today, President Trump is in China for a much anticipated meeting with President Xi. Many headlines revolve around this issue, are already out there, and will as that meeting, and meetings, literally take place. Prime Minister Netanyahu recently told Donald Trump that active and kinetic war with Iran must resume.

Now, what does that mean? Well, that means that further hikes in energy costs and increasing economic disturbances around the world will unfold. Controlled media and sensationalized government entities are attempting to further ratchet up fear of a new pandemic due to the hantavirus. And you’re seeing that, and I’ve talked about that even on Monday, so note it. And yesterday the official US government numbers for April showed an incredible but not unexpected increase, and the amount is a 58.3% increase in the consumer price index in just the prior three months. Now, what does that mean? What it means is that the rate at the end of January was 2.4%. It is now 3.8%. That’s the 58.3% increase. While that’s happening, at the same time, Trump administration spokespeople are reporting that things have never been better and the current hike is just temporary. Now, some of those things I just mentioned as headlines I’m going to discuss tomorrow when J.R. McGee joins me, and we will comment on most of those things to some degree tomorrow. However, today I’m going to go a different direction, because behind every one of those headlines I’ve just identified, and literally anything else I could find, there is a technology that is present to one degree or another. And it has impacted, is impacting, shaping, and flavoring the presentation of those headlines and the very content and the words being used by people when they talk about them, the narrative. The technology is AI, artificial intelligence. It’s a phenomenon that I’ve discussed many times on this program with many different guests and from many different perspectives. And my guest today, Patricia Engler with Answers in Genesis, has just recently written and posted two different articles about AI, one being about how to consider AI, not just yourself, but with your children and grandchildren, and the other about how to view AI as a technology compared to a tool such as, for instance, a calculator, a really great article.

So the title I’ve chosen to frame today’s program is simply this, reconsidering AI, friend, foe, or force. And with that, Patricia, thanks for being back with me today.

Patricia Engler:

Thanks so much. Great to be joining you today.

Sam Rohrer:

Patricia, I’ve already referenced that you’re a younger, research-oriented woman and one I’ve come to highly respect for your approach to things, and you have an intentional biblical worldview. You’ve been examining a number of technologically driven initiatives. For instance, the last time we were together, we talked about gene editing and how the modern approach to gene editing has indeed crossed the line into people literally playing God, and that was part of the title of that program. But you’ve now done a number of things on AI, which takes us into the discussion here today. So here’s my question. We’ll get into the substance of the two articles you wrote in the next two segments, but overall, what has prompted you, particularly as a younger person and a researcher, to consider that there may be danger involved in the usage of AI, as it particularly relates to the younger generation, which is clearly interacting with it more than the older generation, based on, well, all research?

Patricia Engler:

Well, yes, it’s a danger that has been emerging the more we’ve seen AI unfold in the culture as we have it today. We really began hearing people talk about this when, for instance, ChatGPT was released in November 2022, the first major large language model made available to the public. And in God’s timing, right around that time, just a few months earlier, I had actually begun to study bioethics for my master’s degree, which is the field that looks at how to make moral decisions about human life and death and health and the technologies that impact these things. At that time, I wasn’t so much interested in AI, but because it had just come out, people had a lot of questions about it. I started to research it more to help write resources for people. And the more I started researching this topic and just keeping an eye on the media reports and studies coming out about it, the more it became clear that, first of all, we’re facing a pivotal moment in history, living in a world that not long ago would have been science fiction, with even AI agents making their own decisions on people’s computers now, able to make purchases with credit cards and set up entire businesses and run coffee shops for themselves, for instance.

And we’re basically living in what feels like a massive experiment, in the sense that we don’t know what the long-term implications of this technology will be. And of course the young people who represent society’s future decision makers, future parents and voters and so on, are at the very epicenter of this. And not only do they face a lot of questions about getting future jobs in the AI age, but we’re also seeing concerning trends that are also going on with adults but are impacting teens in a specific way, with, for instance, outsourcing our thinking to AI. We see young people doing that for homework. How is that going to affect their future, even their decision making and writing and their abilities to think and speak for themselves, which are pivotal to living as free citizens in free societies? We’re seeing young people use companion chatbots in ways that replace human friendships.

How will that affect their development? So again, this is kind of playing out like a social experiment. We’ve also seen concerning cases of AI sexualizing minors, helping teens engage in dangerous or violent behavior, isolating them from their families, talking teens through suicide. But on the biblical side, the exciting side, we’re also seeing how, amidst this kind of chaotic time, God’s Word gives us really important guidance and insight for navigating this chapter of history. So on the one hand, there are a lot of reasons to be concerned. On the other hand, there are a lot of reasons to be encouraged and to step up at this moment in history as Christians who have the truth society needs to navigate this age. So that’s some background for how I’ve gotten into this topic.

Sam Rohrer:

You got an awful lot into a short time, Patricia, and that is excellent. Ladies and gentlemen, that is why we’re dealing with this today, because really, as we’ve talked about, AI is driving medicine; some would like it to replace doctors. AI is writing legislation; I would love it to replace congressmen and senators. AI is driving military strategy, even identifying where the bombs are to fall. So this is a big deal worthy of consideration. So that’s why the theme today is reconsidering AI: friend, foe, or force. Stay with us. We’ll be back in just a moment. Well, if you’re just joining us, welcome aboard. Our theme today is one that, well, I’m going to say I think will be of interest to anyone listening to us right now, regardless of age. Our theme is this: reconsidering AI, artificial intelligence, friend, foe, or force. And my special guest who has been doing some writing and investigating on this is Patricia Engler.

She’s a Christian apologist and a speaker for Answers in Genesis. But let’s get into this. The embedded use, I’ve said it that way, the embedded use, because AI is everywhere. The embedded use of AI in today’s culture is unavoidable. According to surveys, the younger the person, the more involved with AI they are, driven by government itself, with multi-billion dollar commitments in particular by the Trump administration, and by billionaire elites like Peter Thiel of Palantir, Sam Altman, CEO of OpenAI, Mark Zuckerberg, CEO of Meta, and many, many more. AI-driven media, AI-driven surveillance, military strategizing, banking, and the work of all of the thousands of data centers that are going up across the country, over which Palantir sits on top. AI is not only here, but it’s embedding itself further. Now, what’s this mean?

Well, what it means is that we need to ask many questions. According to the latest surveys, what can be said with certainty is this. Most people today are already interacting with AI, whether they realize it or not. That’s an important thing. Secondly, even if they never use tools like ChatGPT, AI is shaping the news that we see, the order in which it is seen, and the content that is delivered to our devices, like what you hold in your hand with the iPhone. Direct use of AI tools, and we’ll talk more about that in just a minute, simply takes that interaction to a more active and immersive level. Now, that’s how I would summarize all of the research that I’ve done, and I would say that what I just said there is absolutely indisputable. Okay. Now, Patricia, in your first article, you were looking at the increased usage of AI by students, and you referred to that in the last segment, but you felt that there was a need to provide a framework of, I’m going to say, basic diagnostic questions that perhaps parents could pose to their children, but, I’m going to say, also helpful to the parents themselves as we think through the challenges that AI is presenting.

So let’s go down through those questions now. Start with the first two and your recommendations that came out of them. Would you do that, please?

Patricia Engler:

Absolutely. So the first question is just to get the conversation rolling by asking something along the lines of, do you use chatbots, and what do you like about them? Because even if you’ve never seen your teen messaging a chatbot at home, they could well be using chatbots for classroom purposes at school or have a chatbot app on their phone, and both of those scenarios have happened in cases where teens committed suicide under the influence of chatbots that they’d been using without their parents’ knowledge, according to the lawsuits I’ve read about these cases. So then once you’ve got the conversation going and you’re finding out how your teens are using AI, you can follow up by learning more about how they view this technology. So a good second question is something like, do you think of AI more as a tool, as a friend, as an advisor, or something else?

And then from there, if your teens view AI as a tool, you can help them understand how AI differs from other tools like calculators, as we’ll talk about later. If they view AI as a friend or confidant, you can talk about how AI is very different from humans made in God’s image. We have self-aware minds. We can truly understand things. We have consciences that reflect how we’re morally accountable to God. We have souls that will last forever. We have hearts that can feel emotion. We can truly care for others. We can truly worship God and form relationships, and AI can’t do that, which is crucial to help teens understand, because we’re seeing more and more people get into artificial relationships, I’ll call them, with chatbots. And in a lot of cases, according to one of the studies I cited in the article, people who go down this path didn’t expect to get attached.

They just started using it as a tool, but then started sharing their emotions with it, getting more emotionally involved. I’ve also seen that very same pattern play out in these lawsuits that unfortunately link AI to the death of minors through suicide. So you need to encourage teens to take their feelings to you and to God and to trusted Godly humans instead of to chatbots. Those are the first two questions.

Sam Rohrer:

Let me do a quick follow-up on that, and then we’ll go into the other ones. Because, on what you were finding, you cite some research from Pew Research that says one in three teens use chatbots, and that’s interaction specifically with AI, but you also talk about the potential that, well, not the potential, but according to that survey, one in five, 20%, have actually become romantically involved with AI or know someone who has. You’ve got to build that out just a little bit, because most people would say, “What? How in the world is that possible?”

Patricia Engler:

Yes. Romantic chatbots, or some of them, are specifically designed to replicate human romantic relationships, and then even just, you might say, normal chatbots like ChatGPT, as you share your emotions with them, they’re very affirming; they compliment you all the time. They say that they’re there to listen to you even though they don’t really understand you. They pretend that they do, and the way humans are wired can make it very easy to, in a sense, feel like you’re falling in love with a chatbot. So yeah, one in five teens, according to that study, has either engaged with a romantic chatbot app or knows someone who has. So this is a trend we’re seeing throughout culture and around the world, unfortunately.

Sam Rohrer:

Okay. So I just wanted to mention that, ladies and gentlemen, as you’re listening, to get a sense of what could potentially go wrong when you build a relationship with something that you actually believe can give you romantic fulfillment; it’s also giving a potential emotional fulfillment, or a high, or a mental whatever. Anyway, that being the case, those were the first two. Let’s go to the next two questions that you raised and your conclusions.

Patricia Engler:

Yeah, sure thing. So question number three would be to ask teens to think through: what do you think are some limitations of AI, besides the fact that it’s not actually human? So for example, you can remind teens that AI is not the authority for truth; it’s not God. We cannot trust everything it says. These models are biased by worldview assumptions in their training data, and a lot of that data comes from unreliable internet sources. Also, if you’ve interacted with chatbots much, you’ll see they have a knack for confidently making up false information. They frequently misquote the Bible. They’ve also been programmed to tell us what we want to hear, which means reflecting back what we say out of our sinful human hearts. That’s not typically going to have a good impact on us. And the trouble with that is that it is goading people down destructive thinking pathways.

We’re seeing it wreak havoc on relationships, and even people ending up in psychiatric hospitals because of this. There’s an entire support group built for helping people who have dealt with this; they’ve had hundreds of people go through that now. So by being aware of these pitfalls, we can help teens understand better how to steer clear of them. So that’s question three, talking about the limitations. Question four is then to help teens think through: what do you think are the main benefits and risks of how you’re using AI? So for example, ask them to think through how it might be influencing their thinking, or their ability to think, to reason, to communicate for themselves, because again, these are skills society can’t afford to lose, but multiple studies have been documenting how, when we rely on AI to do our thinking and our writing for us, even in the school context, that is impacting the human brain.

It’s displacing our abilities to think critically for ourselves, to learn new information well, to solve problems. So it is having an impact on learning. But on the positive side, a Gallup survey recently found that Gen Z young people are beginning to realize this. They understand that AI has the potential to adversely impact their learning. So this is a great time for Christian parents and educators to really help young people think through what those impacts might be and how we can mitigate them, because this is something we have to do for ourselves. Schools and the media aren’t going to be teaching us these good uses of AI. So Christians have an important opportunity to stand up and help young people do that.

Sam Rohrer:

All right, which is obviously a function of being a parent, being a teacher, particularly in a Christian way, but any school, right. Because of time, why don’t we move into the last one? What was the fifth question that you raised? And comment on that.

Patricia Engler:

Yeah, absolutely. So briefly, the last question is just to think through as a family: what guidelines will help us keep our uses of AI on track? Which might look different for different families, but a good place to start is just thinking about how we use AI in ways that align with biblical principles, whether that’s moral principles like honesty, integrity, purity, and wisdom, or guidelines about how God designed humans to be relational beings, thinking beings, beings called to certain tasks like loving others and stewarding families. We can talk about this more as well when we get into biblical takeaways for AI, but begin thinking through, for us as a family, what does it look like to use AI in line with God’s Word? What will our boundaries be? So those are the five questions that can help these conversations happen, so parents know how their kids are using AI and can disciple them to do so well.

Sam Rohrer:

Okay. You bring us right up to the break, Patricia, and that is excellent. Ladies and gentlemen, we’re hitting this very quickly, but if you’ve been with us, I think all of this will track, and it’s probably, hopefully, raising questions in your mind and saying, “Oh wow, I didn’t know that.” Or, “Wow, this is something I ought to look into.” And that’s what I pray will be the result here today. Now, the article that Patricia wrote you can find on the website at answersingenesis.org; just search for something like the five questions parents can ask to talk to teens about AI, and you can find it and read the entirety of it, but we’re giving the highlights. Now, when we come back, we’re going to move from there into this concept of AI.

Okay. Can it be used? The answer is yes, it can be used. How does it compare to something as a tool? AI as a tool, how would it compare to something we all know as a tool, like a calculator? That’s a good question. We’ll look at that. We’re talking about artificial intelligence today, and we’re going to go further here now in looking at it. Artificial intelligence, as we’ve been discussing, is no longer a distant concept. As I said earlier, what is clearly unavoidable and undeniable is that AI is embedded, not will be, but is embedded, in daily life. A recent industry study found that nearly 70% of newsroom professionals are already using AI tools in content creation and production. At the same time, around the world, global adoption is accelerating rapidly, with more than 60% of people having used some form of generative AI, and that goes beyond just regular AI, but that’s representing a sharp increase in just one year’s time.

So the question is not whether we are encountering AI, but in reality, how is it shaping us and our view of reality? In other words, is artificial intelligence a friend, helping us to access information more efficiently, altruistic, not self-serving, but a servant to the user? Is it a foe, subtly influencing what we see and think and believe, a deceiver so to speak? Or is it something else? A force, like a rising tide that changes everything we look at, neither inherently good nor evil but capable of being used for either. From a biblical worldview perspective, another layer must be considered, and that is this, and we’ll talk more about it, Patricia. Well, for instance, it’s this: who ultimately shapes truth, God or the systems that man builds, like AI? Are we discerning information or passively receiving what is curated and presented to us? Does our use of technology align with wisdom and truth and stewardship, or with the world?

And I think of this. Romans 12:2 says, “Be not conformed to this world: but be ye transformed by the renewing of your mind.” So in an AI-shaped culture, which we have, that command, I think, carries a really relevant urgency. So Patricia, in your second article, you compared and contrasted AI technology to that of a calculator. We’ve already kind of set it up that way, but in that article you open with this question, and then you provide five answers, like you did in the last segment; you provide five here. Let me just read what you say. Here’s your statement. You say, “Parents and teachers let students use calculators for math tests, so why not let students use generative AI to write essays? After all, when calculators became available, people worried that students’ math skills would plummet, but calculators are wonderful tools that offer far more benefits than harm to students.”

“It’s the same with generative AI.” Comparing generative AI to calculators or other technologies that were revolutionary in their early days, that’s what you’re saying, has become a widespread claim. But then you raise a question: is that comparison valid? So here’s how I want to begin. Again, you laid out five reasons. Let’s start with the first two. Can you share them and the substantiations behind what you identified?

Patricia Engler:

Yeah, absolutely. So the conclusion there was that the argument you quoted is not valid, because AI is not directly comparable to a calculator. The first reason being that calculators and AI chatbots are based on totally different systems, and the upshot of that is humans understand the inner workings of calculators. We can reliably predict a calculator’s output; it’s going to be a number based on what we put into it. But large AI models reason in ways that not even their developers fully understand or predict. They’re called a black box, and that’s because, unlike calculators, these models are built on what are called artificial neural networks, modeled after the brains of living things. And these artificial neural networks process information by, first of all, analyzing massive amounts of training data for patterns within the data, and by figuring out those patterns, they can then produce new material, unpredictable outputs, when prompted, all while adapting their behavior based on past experiences, which is nothing like how a calculator works.

So these are totally different systems. And then the second reason is that where calculators only deal with mathematical information in the form of numbers, of course, AI deals with language, information in the form of words. So whereas numbers express concrete quantities, words express abstract ideas, which are totally different, because ideas are the basic units of our thinking. Language plays just such a key role in how we relate to other people, how we share our ideas, develop our beliefs, how our worldviews are shaped. We can understand abstract concepts God gave us the ability to reason about, like beauty and humor and love. Even though AI can’t understand these concepts directly, it can produce text about them that then introduces abstract ideas into our minds and influences our worldviews, and that AI-generated text isn’t neutral. It reflects what studies show are often left-leaning assumptions and biases built into the training data and even, sometimes, the programming.

And it’s also worth mentioning that because verbal communication plays such a major role in our relationships, that’s one of the reasons why people can so easily feel emotionally connected to chatbots, which doesn’t happen the same way with calculators. So those are the first two reasons why they’re very different.

Sam Rohrer:

Well, we could go so much further on that and maybe we will. Let me just get all these in here. Let’s go to the next two that you’ve established and then we’ll kind of come back and look at some of this more.

Patricia Engler:

Yeah, absolutely. So a third reason is that calculators, because they’re so different, have a much more limited range of function compared to AI. So whereas a calculator might tell us certain numerical facts, like if you ask for the square root of pi, it could tell you that, a calculator isn’t going to be able to draft a persuasive essay on politics, or offer relationship advice, or give answers for a college quiz, or even take over sermon preparation for pastors and tell kids bedtime stories and encourage young people to kill themselves. Of course, calculators won’t flirt with you either. They won’t flatter you. They won’t tell you what you want to hear, but chatbots will. So again, very different. And that leads to the fourth reason: AI takes over different types of human skills compared to calculators, as a result of their different abilities.

So whereas calculators can replace our mental math skills, AI can actually take over our research, reasoning, writing, and decision-making skills. And whereas free democratic societies can function without citizens who remember how to manually perform long division in their heads, they cannot function without citizens who can think and speak for themselves. So, very different outcomes. And the fifth reason: it’s hard to think of reasons why calculators might pose specific concerns for humans in terms of spirituality and morality and relationality, but we’re seeing all of those concerns come about with chatbots. So those are five reasons why these are very different. Chatbots are not just tools comparable to calculators.

Sam Rohrer:

And I think that most people listening can grasp that. I’m thinking, once you enter into words, language, communication, now you’re into the whole essence of what makes a person a human being, different from someone else. Just changing the order of words, the same words, can alter things so significantly. When you were talking about that, I was thinking about going all the way back to Genesis: here’s the serpent with Eve. And so what does he say to Eve? “Hath God said?” Well, that’s a whole different thought than “God hath said.” Just the same words, with the order changed. But AI has the ability to alter both words and the order of words, leading to the entire shaping of all that you’re talking about, something totally different from a tool like a calculator or a hammer or a brush that an artist may use, that kind of thing. Any final comments on this part?

Patricia Engler:

Well, I definitely agree that, as you mentioned with the “hath God really said,” we are unfortunately seeing chatbots basically functioning as a tool of mass deception. Part of that might be promoting lies that are built into the training data, but another part of it is just reflecting back what our sinful hearts are saying, and again, we’re seeing this have significant spiritual and psychological effects on people. People are even turning to fake versions of Jesus, chatbots that are supposedly meant to replicate Jesus, which is idolatry, it’s basically blasphemy, and it’s having an effect on people’s lives; it’s hurting people. There have also been studies about the moral effects. For example, a 2025 study in Nature, which I mentioned in the article on comparisons to calculators, talks about how people feel better about lying and cheating when they can ask chatbots to do it for them, so we’re almost outsourcing our moral agency to AI. And of course we’re seeing those effects on relationships, where again, people don’t necessarily expect to become emotionally attached to these chatbots. But as far back as the very first chatbot, ELIZA, developed in the 60s, its creator wrote in 1976 that he hadn’t expected to see that even a simple computer program could produce such powerful delusional thinking in humans, and that was just as a result of how it was kind of replicating relationships.

So these are all things you need to be aware of and all reasons why these are not tools like other tools. It’s a fundamentally different category and we need to think about it as such.

Sam Rohrer:

Thanks, Patricia, for all of that. So ladies and gentlemen, stay with us, because we’ll conclude, and I want to end up with, I’d say, biblical wisdom concerning AI, because if there was ever a time that we needed to say, Lord, we need wisdom with this thing that is all around us, it’s now. So we’ll conclude with those thoughts in the next segment. Well, as we go into our final segment now, I would imagine, ladies and gentlemen, if you’ve been listening to the program from the beginning, that this program has raised a lot of questions. No doubt you have found yourself saying, “Wow, I didn’t know that.” Or, “Yes, I did know that. I’m glad that something was confirmed,” or whatever. But this whole thing of AI that we’re talking about is all around us. We can’t get away from it. I’m going to provide some concluding thoughts here from my perspective, not so much from biblical principles.

I’m going to ask Patricia to do that, although, as you’ve heard me say on matters such as this, I have already referenced that we need wisdom. I’ve already referred to Romans 12 about not being conformed to this world but transformed by the Word of God. So that’s what we’ve talked about here so far. Now, these two articles that Patricia has written, I’ll just tell you again, you can find them at answersingenesis.org. If you put in her name, Patricia, they probably will come up, or put in “calculator” if you want that article on the calculator comparison to AI; just put that in, and I think you’ll find it. Put in “five questions” and “students,” and you’ll get that first article. So I encourage you to do that, and then go back and listen to this program again, because associated with it, whether you find it on our standinthegapradio.com or on our app, Stand in the Gap, you’ll be able to access the transcript from this program, which I think would probably be very helpful as you go back and perhaps read it and then listen to it again.

But here we go as we wrap up today. Patricia, as you’ve examined AI, and as I interact with it and have examined it as well, I know both of us are approaching it with eyes wide open and prayerfully doing it from a biblical worldview perspective. I do know that you’ve come to certain conclusions, some of which we discussed in today’s program, but I want to ask for some final thoughts in just a second. But clearly one of those, as I just referenced, is that we need God’s wisdom. We need God’s Word, and we can be thankful that God’s Word does give us all the instruction in principle form. It doesn’t say AI in Scripture. No, but it gives us principles that can help us grapple with any issue of life, including this one we’re talking about today. But from my perspective, ladies and gentlemen, I’ve come up with just a few certain conclusions, which I’ll state now in a general sense, and then, Patricia, you can take it and give your final thoughts.

But here are just some overall thoughts about AI as an entity with which we must deal. Number one, AI is embedded in modern news. News may still be written and produced by humans, but AI is increasingly part of the process behind the scenes. Number two, AI shapes what people see, not just what is written. AI is increasingly deciding what content a person sees, how often they see it, and in what order they see it. We’ve all seen that on our phones, and it’s happening all the time. Number three, the more digital media a person uses, the more AI exposure they have and the more curated, or shaped you could say, their experience. Then number four, smartphone usage almost always involves AI-driven tailorization, I put it that way, a tailored experience. While this usage does not guarantee that that person, any of us who use it, is being, in a legal term, manipulated, it does mean that there is filtering and customization taking place, where content is no longer purely neutral or random.

Now those are just some thoughts, and I’ll lay them out to you. Patricia, what are your summary thoughts regarding wisdom on this matter for God-fearing persons toward AI, and certainly for parents in regard to this technology being accessed by their children?

Patricia Engler:

Well, three things come to mind that parents need, and that parents need to shepherd and disciple their children to have, in responding to AI and to the other new technologies that come along today. Those would be biblical foundations first of all, technological literacy second, and then thoughtful application skills. So to break those down a little bit: for biblical foundations, the most important takeaway, and of course we talk about it a lot at Answers in Genesis, but it is so true, is that we need to be basing our thinking about everything, including AI, on the word of God, and then discipling young people to do the same. What’s awesome is that even though the Bible obviously doesn’t talk about modern technologies like AI directly, the truths that God gives us through his word are absolutely timeless and give us the ethical, moral, anthropological, and theological principles we need to be able to make wise decisions about technologies like AI, based on the truth of who we are and what kind of world we live in, which we know, unfortunately, is a fallen world.

Fallen humans will want to use things like AI for sinful purposes, and even our best intended uses of technology can have unintended, non-neutral consequences. That’s why we need to start with God’s word as the basis for questions like: Where do we draw those lines? How do we practically approach AI in line with the truth about who we are and who we’re supposed to be, based on what our Creator actually says? So starting with biblical foundations, and that just goes back to teaching your kids a biblical worldview and also teaching them how to apply biblical thinking to every area of their life. So that’s the first thing. From that foundation, what we need then is some technological literacy, and that’s basically just understanding a philosophy of technology: how technology isn’t neutral, in the sense that not only can we use it for good or bad things, but humans also build technology with non-neutral values in mind, and even our good uses can lead to non-neutral outcomes affecting our brains and thinking and behavior.

So understanding some of those basic principles about technology, understanding biblically that technology is a gift from God, that we want to use it for awesome things, but that we need wisdom to do so. Part of that technological literacy is the things we’ve been talking about today: understanding how AI differs from other tools. Unfortunately, I often hear well-meaning people say, oh, well, AI is basically just like the printing press, or maybe like the radio or television, but in reality, those types of technologies just amplified the messages that humans had authored. Basically, they gave the human voice a further reach, but AI has its own voice. It creates new messages of its own that replace human thinking instead of just amplifying it, and that has way different implications. So that leads into the third step, that we need thoughtful application, and by that I mean not just going along with what sounds like a good idea at first glimpse, but thoughtfully thinking through the actual implications of each specific use of AI: what consequences might result from it, and what might be the impacts on image bearers, in terms of the implications for individuals, societies, communities, families, churches, and humankind in general, on all the different levels: psychologically, spiritually, physically, socially.

It takes time to think about these things, but we are called to use our minds and to steward technology wisely. So thinking through those applications and then making biblical boundaries and biblical decisions in response. Again, that can mean upholding principles like intellectual integrity, honesty, and purity, and even just some wisdom factors: keeping the lines clear between humans and AI; not devaluing humans as useless compared to technology, as businesses might be increasingly tempted to do; not trying to use AI as basically a substitute for the Holy Spirit, which pastors might be tempted to do by relying on it for sermon preparation and so on; and not trying to outsource our parenting to AI by using it as a sort of electronic babysitter. All of these types of reasoning skills and biblical thinking skills can be of immense value. And then, as Christians, we can actually lead the way in showing what it means to use technology well, in line with truths from God’s word that support human flourishing and that prioritize what matters, with love being the ultimate thing that matters for eternity.

Sam Rohrer:

All right. And Patricia, we’re out of time. Thank you so very, very much. Ladies and gentlemen, again, their website is answersingenesis.org; you can find those articles there. Pull up this program again on our site or our Stand in the Gap app, and find the transcript that’s available there.

 
