This week, Scott and Sean discuss:

  • Can AI feel pain? Scientists experiment with AI models simulating pain and pleasure to test for sentience—but are we just anthropomorphizing algorithms?
  • AI relationships on the rise? One in four young adults believe AI could replace real-life romance, raising serious concerns about loneliness and human connection.
  • Should AI make end-of-life decisions? Some experts suggest AI could help assess patient choices, but can it ever replace the wisdom of human caregivers?
  • Boy Scouts rebrand to Scouting America. The century-old organization changes its name and introduces a DEI-focused badge, sparking debate over tradition and inclusivity.
  • Listener Questions on parenting, guilt about surviving medical incidents, and our participation in consumerism.



Episode Transcript

Sean McDowell: [upbeat music] Scientists experiment with subjecting AI to pain. One in four adults now think AI relationships could replace real-life romances. Should AI be a tool to help make end-of-life decisions? And the organization formerly known as the Boy Scouts now champions DEI principles. These are the stories we will discuss, and we'll also address some of your questions. I'm your host, Sean McDowell.

Scott Rae: I'm your co-host, Scott Rae.

Sean McDowell: This is the Think Biblically weekly cultural update, brought to you by Talbot School of Theology, Biola University. Now, Scott, I think we've selected some fascinating stories this week. If you read The New York Times, the Wall Street Journal, any news outlet, it's dominated by Trump and economic issues and some of his nominations. We will let other people weigh in on those political issues and look for stories that might not get as much press but are culturally and biblically significant.

Scott Rae: Yeah, I think that's a good call. There are lots of other news outlets and commentators that are focused on the first few weeks of a new administration. We don't need to add to that. Uh-

Sean McDowell: Good

Scott Rae: ... So I love that all our stories today are about some facet of artificial intelligence and its different applications.

Sean McDowell: I agree. That's kind of the tie that brings it together, which brings us to our first story, which I came across [chuckles] and sent to you, and you were like, "Awesome story. Let's do it." This one is fascinating. It's from futurism.com, sent by a friend, Christopher Lind, who's been on the show before. The headline says, "Scientists experiment with subjecting AI to pain." Now, a couple of things about this are helpful for understanding what's going on. It says, "A team of scientists subjected nine large language models, LLMs, to a number of [chuckles] twisted games, forcing them to evaluate whether they were willing to undergo pain for a higher score. As detailed in a yet-to-be peer-reviewed study, which was spotted by Scientific American, researchers at Google DeepMind and the London School of Economics and Political Science came up with several experiments to test this. In one, the AI models were instructed that they would incur 'pain,' in quotes, if they were to achieve a high score. In a second test, they were told that they'd experience pleasure, but only if they scored low in the game. The goal, the researchers say, is to come up with a test to determine if AI is sentient or not. In other words, does it have the ability to experience sensations and emotions like pain and pleasure?" Now, I do appreciate the article says, "While AI models may never be able to experience these things, at least in the way an animal would, the team believes it could set the foundation for a new way to gauge the sentience of a given AI model." Now, I guess they were inspired by experiments that involved electrocuting hermit crabs at varying voltages to see how much pain they were willing to endure before leaving their shells. Now, that is just a somewhat disturbing experiment in itself, and I'm wondering why we're spending money on that, but I somewhat digress. [chuckles] Now, they did point out that the weight different LLMs give to avoiding pain and gaining pleasure varied widely, which is kind of significant. Now, they do say, and there's some commentary within this, Scott, that I thought was interesting, that we should take these kinds of results with a considerable grain of salt. "For one, relying on the text output of an AI model comes with limitations," and I love this. "Should we really interpret an LLM's willingness to endure pain, or say it endures pain, as a signifier of sentience? Or is it simply evaluating the probability of the next word, an inherent quality of these kinds of algorithms?" That's exactly the right question, and at the end it says, "We have a tendency to anthropomorphize AI models. That's probably at the root of this." Give me your takeaway.

Scott Rae: Well, there are a number of things that stood out to me on this, and Sean, in bioethics I've been dealing with this phenomenon of sentience for a long time. Because sentience is the way that the naturalist determines personhood of the unborn and those at the end of life. And it's not just the ability to feel pain, though that's a significant one, but to feel other sensations. And the ability to feel pain sort of stood out in the abortion debate because of the ability, at some point, of the fetus to experience pain for himself or herself. But what it really signifies, at the core, is the ability to have an interest in your interests.

Scott Rae: That's really what we mean by a sentient being. A tree has interests in, you know, not being chopped down or whatever, but the tree does not itself take an interest in furthering its own interests. So in my view, that should really be the measuring stick for sentience, and by that measuring stick, a chatbot falls far short of having sentience. Now, I think you were right. When the article says it experiences pain, it puts that pain in quotes.

Sean McDowell: Yeah.

Scott Rae: What it really experiences is a positive or a negative incentive in terms of, basically, numbers, you know, code. And there's a difference, I think, between expressing pain and experiencing pain that we need to be clear about. I think an AI chatbot can express something akin to human pain. It can mimic that, but whether it can experience it for what it really is, I think that's a totally different question, and I think it's right to be very skeptical about that. I dug a little deeper on this and saw another piece that explained in a little more detail how this works. They call them reinforcement learning agents that engage in trial-and-error learning, and at each point in time, the agent receives a reward signal, a real number or a piece of code, to guide it toward desirable states and away from undesirable states, and the reward signal can be positive or negative. Okay? One can draw an analogy between the negative reward signal and a pain signal in animals. They both serve a similar function, to encourage the agent to avoid certain things. However, and this is the big caveat, it would be incorrect to say that robots and AI experience negative reward as pain in the same way animals or humans do. It's a better metaphor to say it is akin to losing points in a computer game-

Sean McDowell: Yep

Scott Rae: ... Something to be rationally avoided whenever possible. And in humans, there is often an emotional reaction to that negative reward, a feeling of disappointment, anger, or sadness, which is missing in AI. So I wonder... You know, some have expressed hesitation about giving chatbots or AI things like human form, or programming robots to express emotions. I think those can possibly be deceitful, even unethical. They can tap into human emotions. They can manipulate human beings. So I think there are danger signs, and a need for guardrails, even in the expression of emotions by artificial intelligence, which we'll see in some of these other stories, too. That's how I think we ought to see it. I think it's right to have skepticism about it, and I would urge a more baseline definition of sentience: a being that can somehow have an interest in its own interests.

Sean McDowell: Scott, let me ask you this, since you put me on the spot last week. [chuckles] I say that in jest.

Scott Rae: I knew, I knew this was coming back to haunt me.

Sean McDowell: [chuckles] This is actually an easy one. I'm curious, when we see headlines like this, because the headline is, "Scientists Experiment with Subjecting AI to Pain," my first thought is, "Oh, boy, here we go," and then when I read it, it's much more nuanced and less concerning, and there's awareness of this. When we see headlines like this about the latest thing AI can allegedly do, how do you think we should respond?

Scott Rae: Well, [chuckles] I think we should read beyond the headlines first- ... To make sure that the article accurately portrays what the headline says it does, which in this case, I think the headline is completely misleading. And so I think, just, you know, read this with your eyes open and ask hard questions-

Sean McDowell: That's good

Scott Rae: ... About what you're reading. You know, I don't take everything I'm reading here as gospel truth. Especially because, what's the purpose of the headline, Sean? It's to get people to click on the article and to read the rest of it.

Sean McDowell: Yep.

Scott Rae: Which for you and me, it worked pretty well. Uh-

Sean McDowell: Yeah.

Scott Rae: But there's a lot more to it. This is, you know, really nuanced, and there are a lot of caveats that have to be taken into account. And they admit, like you pointed out, that these large language models may never be able to experience pain like in the animal world.

Sean McDowell: Yeah.

Scott Rae: And the reason for that is because an artificially intelligent chatbot programmed by a human being is not fundamentally sentient. Again, it can express those things. Whether it's always ethical to have them do that is another question, but they can't experience it.

Sean McDowell: I think that's well said. That's a great caution. Like, take a deep breath, read beyond the first few lines, and see what's at stake, but also what worldview is underlying it. As you pointed out earlier, this test for sentience is often driven by a naturalistic worldview, so they're gonna interpret it differently than a Christian might. I remember 20-plus years ago, back in the MA Phil program in JP's class on consciousness, he asked us, "Can you define pain in physicalist terms?" And I remember thinking, "What is he talking about?" I think it was metaphysics at the beginning. And he said, "You can't describe it in weight and height and extension. Pain is a feeling that you experience. The closest we can say is hurtfulness." So there's something irreducibly immaterial about the nature of pain that resists physical reduction. That's such a good point and an argument that we are more than matter, which is why some philosophers will just [chuckles] deny the existence of pain, which seems absurd if that's what your worldview leads to.... Now, with that said, what they're doing here is trying to come up with a test to determine how we would know if something is sentient. And I remember, again, back in the MA Philosophy and Religion program, at the time, the test for consciousness was that if you had an exchange with somebody on the other side of, say, a door, and that exchange seemed so real that you couldn't tell if it was a robot or not, people would conclude that's when we know sentience, and hence consciousness, is present. But there's a difference between an epistemological test and knowing metaphysically if something really is sentient, really in pain. And all AI does is a better job, because it's so [chuckles] sophisticated, of sounding as if it's human, not only in what we think but in terms of how we feel. But that is no closer to it really being in pain than shuffling blue marbles for an hour and a red one magically appearing. There's a difference in kind that we can't miss.

Scott Rae: Yeah, it's-

Sean McDowell: That's my quick take.

Scott Rae: Yeah, that's a great take on that, Sean, and really helpful, I think, to make sure that we don't confuse ontology with epistemology. I mean, that's an elementary error that I think is commonly made in these circles. But it's that phrase, "as if"-

Sean McDowell: Yep

Scott Rae: ... It were human. That's the really important part of this. And I'm not at all persuaded that artificial intelligence can have any immaterial ontological characteristics like a human being does. Because, what is it? The physicalist reduces human beings to chemistry and physics; artificial intelligence is reducible to ones and zeros. And there's nothing immaterial about either one of those things. Now, if you're a naturalist and view human beings like that, then I think there's more space to see continuity between AI and human beings. But on a dualist worldview, where human nature is something essential that's intrinsic to a human being, I think it's a whole lot trickier to assign that to something that is really no more than a collection of ones and zeros.

Sean McDowell: Good description. Even the test this came out of, studying hermit crabs when they are shocked, starts with an animal-type being that has sentience built in, where we're judging behavior, versus just inputs and outputs that have been built into the algorithm [chuckles] to begin with.

Scott Rae: Right, no, they-

Sean McDowell: So there's a substantive difference there.

Scott Rae: Yeah. That hermit crab is experiencing pain, not just expressing it.

Sean McDowell: Good, excellent distinction. With that note, let's move to our next AI story, and I picked this one, Scott, in part because it's, you know, Valentine's Day week-

Scott Rae: Oh, that's-

Sean McDowell: ... So to speak.

Scott Rae: Very good, very good call to choose this one.

Sean McDowell: [chuckles] And this one's fascinating. I gotta tell you, it's talking about how billions of people are now using AI in different ways, but very little attention has been paid to how AI technologies may impact real human relationships. Now, they point out in this that one in four young adults think AI boyfriends and girlfriends could replace real-life romance. So Gen Z-ers and maybe younger millennials are not saying this is good or bad, but a quarter of them think that this actually could replace real-life romances in the future. Now, what are they talking about here? They're talking about certain platforms that often utilize what's called generative AI, in which there are conversations between a human being and an AI system that appear as if they are human. These platforms utilize what are called AI learning technologies, where they continue to evolve and adapt and develop, quote, "personalities," learning from the person they're interacting with and customizing the responses to their preferences and attractions. Many of these platforms allow individuals to program their ideal partners, how they look, their dress, their personality traits. I mean, you're kind of creating this, quote, "person," once again. You know, I jump into stats here, Scott, but some of these stats are just worth pausing on. Nearly one in four young adults, one in four young men and one in five young women, use social media accounts that exclusively generate AI images of men and women. So this isn't using Instagram, which has some AI accounts built in, or maybe Twitter. These are social media accounts made up entirely of AI-generated bots, so to speak. Now, even more disturbing, I would say, is that one in five US adults report that they have chatted with an AI system meant to simulate a romantic partner. Use rates were particularly high among young adults, which should not surprise us. Now, of those who chatted with AI systems to simulate romantic partners, 21% agree they preferred AI communication over engaging with a real person. So one in five have tried this, and then one in five of those who've tried it say this is better than engaging a real person. And by the way, Scott, this kind of raises the question, I'll just comment really quickly, is this drawing a certain kind of person to engage AI, or does this engagement affect us and shape us?... And this is arguing that it does affect us and shape us. Now, [chuckles] this study showed 42% agree that AI programs are easier to talk to than real people, that they're better listeners, et cetera. Now, a couple other quick things I wanna jump in with here. This didn't surprise me at all: a larger portion of AI engagers noted that they chat with AI technologies for sexual arousal, 33%. I guess my only surprise is that it was only 33%. Over 60% of women who used AI platforms report being at risk for depression, and over half report high levels of loneliness. What's your takeaway from this phenomenon?

Scott Rae: Well, Sean, I think the loneliness part is really interesting, because I looked, I looked at some of these sites, and did not try them, [chuckles]

Sean McDowell: [chuckles]

Scott Rae: ... But I looked to see what they had to offer. And what's really interesting is how they advertise these things, because- ... One of the main sites advertises itself in great big bold letters with, "Say goodbye to loneliness."

Sean McDowell: Oh, my goodness.

Scott Rae: [chuckles] So let's just say there's a little bit that's misleading there. Here's another way they advertise them: "Redefining love as easy, unconditional, and always there upon demand. Enjoy intelligent conversation and a drama-free relationship. Discover how this site can enrich your life and become your ultimate soulmate." So here's my main takeaway on this, Sean. I think there's huge danger in constructing ideal partners- ... Whether you do it in real life or digitally. I remember when I was a grad student, I was in a course where the prof was talking about a counseling encounter he had had with one of our students. The student was having relationship issues, and so the prof asked him, "So what are you looking for in a partner?" And he produced a list of characteristics that was three pages long. [laughing] And you know what our prof said? He burst his bubble completely, 'cause all he said was, "Son, how many of those characteristics are true of you?" Really a telling question. I had a good friend who said, I think he was sorta kidding but sorta serious, "What I'm really looking for in a mate is somebody who looks like a Dallas Cowboys cheerleader with a PhD."

Sean McDowell: [laughing]

Scott Rae: At which point I proceeded to remind him, I said, "Dude, you don't look like you play for the Dallas Cowboys." So for one, I think there's just the hypocrisy of this. And the other danger is it gives us the illusion that we can have someone who meets all our ideals. Sean, you know, these people don't exist in real life, which maybe is one of the reasons people are so attracted to this. But we're all a mixed bag theologically. We're all infected by sin. We're all subject to the results of the Fall. And in my view, one of the divine purposes of marriage is to chip away at those rough edges of our character by using another imperfect person to do so. I would tell people routinely when I used to do premarital counseling... 'Cause growing up in Texas, and in small towns especially, they had a phenomenon, other baby boomers I think will recognize this, called the blue plate special. It was a restaurant special at a super low price, but there were no substitutions. There was meat, potatoes, and maybe something else that you don't like, but you couldn't substitute it, or else the deal was off. And I just kept telling people, you know, in relationships, other people are like blue plate specials.

Sean McDowell: [chuckles]

Scott Rae: You get what you get, and you can't sort of pick and choose those characteristics. And I think with the illusion of ideal partners, we are setting ourselves up for really rocky relationships and eventually marriages, because it could make it much more common and much easier to just jump ship in relationships when the first hint that that person doesn't meet your ideal starts to emerge. That's, I think, what really troubles me about this. The other thing is, anything that advances pornography- ... That's a deal breaker for me. So the fact that it advances beyond just sorta harmless sexual conversation into much more overtly pornographic material, I think that's a huge problem, too. Because what porn already does is make us think that we can have these kinds of ideal relationships. And online, you know, we're being attracted to people who, in real life, don't look anything like how we're creating them.

Sean McDowell: Both those are great, and I think really helpful. Your first point in particular, that this is marketed as helping with loneliness, and yet a higher percentage of its users report feeling lonely. One take might be, "Well, it's lonely people that are turning to this." And I'd say, "Well, if that's the case, I want some data that lonely people turning to this report feeling less lonely after they use it." My suspicion is it's actually the opposite. Why? Because what actually cures loneliness? Is it a simulation that pretends to know me, that I can tailor just to my needs, or is it actually being really listened to, really comforted?... Touched appropriately, and ultimately cared for and loved. That's the solution to loneliness: real relationships. And so I think people might go into this thinking it's going to help with loneliness, and because it can't solve the deeper loneliness, maybe it can on a surface level or for a while, it might in turn cause greater disillusionment and greater pain because of the false expectations that are built into a system like this. So it can't solve the needs of the human heart to love and be loved, especially because, as you said, it can be tailored to my needs rather than having to sacrifice myself for the objective good of another. Now, with that said, is there a place for a tool like this? Yeah, sure. I could imagine in some kind of counseling setting, if somebody's having a harder time opening up and sharing, and feels like an AI model can move them towards human confession and sharing, but that's when there's a professional who is with you. That's when the goal is to move towards human relationship. I could see it being a valuable tool in that setting, but on its own merits, the way this is marketed and the promise that it gives, I think, is highly problematic. Now, one other thought on this. I just wanna say, Scott, here we go again. Have we not learned? Now, what do I mean by that? What was the marketing early on with social media? "You'll have more friends. It will fix your loneliness. You will have greater social connection." [chuckles] And now, almost 20 years after the start of it, we see the data: increased loneliness, the effects on the brain and our attention span. The negative effects of social media 20 years later are very clear. And it's like we jump in and go, "Oh, no, this time it will really work." [chuckles] "This time it'll be good." Now, I suspect that a lot of the people who wrote "This will fix your loneliness" on the website you cited, although I haven't seen that website, actually know that it won't.

Scott Rae: Of course they do.

Sean McDowell: It's a marketing trick to get your money, and they don't give a rip about the people that use it. That's my suspicion and my sense. So unfortunately, the tech goes ahead of the reflection and the data, so at this point, wise Christians should go in with eyes wide open, with biblical discernment, and try to distinguish what is true from what is false, and make sure our identities are rooted in a healthy relationship with God and healthy relationships with other people; then the allure of this will be far less significant for us.

Scott Rae: Yeah, that's a great take, Sean. Especially your point that for people who have varying degrees of social anxiety disorder, or difficulty forming romantic relationships, this could be very helpful to get people off the dime and get them moving in the right direction, but not as a substitute. And what the sites are proposing are not just things that can help somebody kinda get over the hump in relationships; they're proposing them as substitutes. You know, "Find your ultimate soulmate," uh- ... Unconditional. In fact, what makes relationships unconditional is knowing the flaws and the shortcomings of the other person and accepting them regardless of that.

Sean McDowell: Amen.

Scott Rae: I mean, that's, that's what makes me feel loved by my wife. She knows me inside and out and has chosen to love me anyway. And that's what makes it unconditional.

Sean McDowell: Amen. And I've gotta say, Scott, your wife is a saint, by the way. [chuckles]

Scott Rae: Listen, that's... You have a profound grasp of the obvious, let me tell you.

Sean McDowell: And, you know, let me just add, biblically, some of the secular worldview coming through here is the idea that we can change and manipulate human nature. We don't need another human being; artificial intelligence can do this. But if there is a Creator who has made us for a relationship with Him and for others, that's built into the universe just like gravity is built into the universe. So freedom comes not from rejecting that, but from leaning into how God has made us to be in healthy relationships. Whether it's Marxism, or some of the transgender revolution, or now AI, attempts to just change what it means to be human keep failing, and that biblical idea has always withstood [chuckles] the test of time. Freedom comes from recognizing who we are and being in right relationship with God and right relationship with other people.

Scott Rae: That's a really good observation, 'cause this could be a form of relational utopia, and, you know, we know what happens with utopian visions. They always end up disappointing. And part of the reason I think this is appealing is because real relationships are messy, and they take work. They take perseverance. And unconditional love is something that's hard to do. It's hard to open up to someone, to put yourself in a place to receive unconditional love. That can be terrifying, to open yourself up to a person like that. But once you're in a real relationship that's healthy and, you know, moving in the right direction, the satisfaction that comes from that, the satisfaction from knowing that you are loved by someone who knows you inside and out, you know, your spouse or another close friend, not to mention God Himself, that's what's ultimately so satisfying.

Sean McDowell: That is the deepest desire of the human heart, to know and be known, to be intimate with people. And that has nothing to do with sexuality. I mean, for somebody to know us as we really are and to care for us, no robot or AI can replace that whatsoever. It never will. It's an empty promise when people claim it that way.

Scott Rae: Hear, hear.

Sean McDowell: We've got one more AI story. Now, you sent me this one, which makes sense 'cause this is right in your lane, but again, really [chuckles] interesting trend, and the title is, "It's Inoperable Cancer: Should AI Make the Call About What Happens Next?" This is from The Harvard Gazette, and the author talks about how AI is being used in clinics to help analyze imaging data such as X-rays and scans. But the recent arrival of sophisticated large language AI models is forcing consideration of broadening the use of technology into other areas of patient care, including end-of-life care. So they ask the question: "How do we, as healers of mind and body, help patients make decisions about the end of life?" That's a question people have always asked and always will ask, but AI is potentially changing it. They said, "The ability to have AI gather and process orders of magnitude more information than what the human mind can process, without being colored by fear, anxiety, responsibility, or relational commitments, might give us a picture that could be helpful." Now, by the way, the implication is kind of that AI gives us an objective analysis [chuckles] of the data, but we know that AI doesn't, so I would just caution that, though I understand it doesn't really have feelings the way we described before. The person being interviewed here says, "I'm less optimistic about the use of large language models for making capacity decisions or figuring out what somebody would have wanted." Now, what does that mean? That seems to hint that if we don't have an end-of-life will from somebody, what if we took an AI model, fed in their letters or their blogs or maybe interviews we have with them, all the data we have on this person? Could this AI model give us a sense of how this person would have wanted to deal with end-of-life decisions? That's interesting, but that's also problematic, so that's what they're hinting at here. They do make a great point in this article. It said, "We have to be careful where we use 'is' to make an 'ought' decision. So if AI told somebody accurately that they had less than 5% chance of survival, this alone does not tell us what we ought to do." Good distinction. At the end, it says, "I don't want to underestimate AI's potential impact, but we can't abdicate our responsibility to center human meaning in our decisions, even when based on data." Fair enough. And so the author asks, "So should these decisions always be made by humans?" The person interviewed said, "'Always' is a strong word, but I'd be hard-pressed to say that we'd ever want to give away our humanity in making decisions of high consequence." Maybe there's a way to qualify that, Scott, but "always" sounds like a fair word to me. I don't think we should ever concede- [chuckles]

Scott Rae: Yes

Sean McDowell: ... To an LLM, although human beings may listen to it and follow it and think of things we didn't think of beforehand, we should always make that decision, and I wouldn't put it in brackets. I would highlight that, personally. Now, the last line might help folks have the context of the story. So how does this help? It says, "Increasingly, we can do something about the suffering at the end of life and how to care for people. We're developing tools that can allow us to make huge differences in practical and affordable ways. We have to ask: How do we do that and follow our values of justice, care, respect for persons? How do we make sure we don't abandon them when we actually have the capacity to help?" I love the way this is framed, and where it's coming from, a place like Harvard Medical School. But give me your thoughts on this.

Scott Rae: Well, Sean, I actually tried this. Um-

Sean McDowell: Okay.

Scott Rae: I put a couple of end-of-life prompts-

Sean McDowell: Oh, interesting

Scott Rae: ... Into ChatGPT.

Sean McDowell: [chuckles] Okay.

Scott Rae: So here's, here's what I said.

Sean McDowell: All right.

Scott Rae: Here was the first one: "I have terminal cancer, poor prognosis, and moderate pain, with three months to live. Should I enter hospice care at this point?" And here's how it came back. I won't read all of it, but... [chuckles] First, it starts out saying, "I'm really sorry you're going through this."

Sean McDowell: [laughing] Oh, interesting!

Scott Rae: "So I appreciate the empathy. Um-

Sean McDowell: [laughing]

Scott Rae: ... "Deciding whether to enter hospice care is a deeply personal choice, but given your prognosis and symptoms, hospice could provide significant benefits." And it goes on to describe what some of those benefits are. "You might consider discussing this with your doctor and family to ensure your care aligns with your wishes. Do you have specific concerns about hospice you'd like to talk through?" Okay, so no decision was made. It's just sort of outlining what the options are and giving us-

Sean McDowell: Okay

Scott Rae: ... Objective facts about what those options are. So I put a second one in there that was, I thought, a little more complicated. I said, "I am in congestive heart failure that is being managed but is producing serious side effects that I don't want. Should I stop treatments that are keeping me alive because I think that the burdens of continuing treatment outweigh the benefits?" And again it came back with, "I'm really sorry you're facing this decision."

Sean McDowell: Yeah.

Scott Rae: "It's a, it's a deeply personal choice, but there's no right or wrong answer, only what aligns with your values, priorities, and what you consider a good quality of life." So the idea that this, AI is actually gonna make that decision for you, I think, unless, you know, unless I'm using, you know, a large language model that's not as sophisticated as one we might use in medicine specifically-... The AI is not really giving you that. And it's a good thing that it's not. And I think we can, you know, we can tell our loved ones what we want in advance, what we don't want, and I- this, by the way, no additional charge for this, but I would encourage all of our adults here to have some sort of advanced directive where you make your wishes clear, so that the lo- so loved ones, should you be in that situation, they don't have to guess at what you would want or don't want. But here's... Once we're, once we're seriously ill or declining, we may actually change our minds about what we want. You know, for you and I, Sean, to make that decision now, when we are, you know, when we are specimens of incredible health, except me for my one kidney- [laughing] ... You know, that's, that's one thing to say, you know, "If I was in a really compromised condition, I wouldn't wanna live that way." But once we're in that condition, we might very well find that there, that we have lots of things that we value about continuing to live, and we may, we may change our minds about what we want and don't want. So it seems to me, AI can give us information about diagnoses, outcomes, side effects, but they can't tell, they can't tell what I would want. They can't anticipate how I might change my mind. They can't tell if I'm competent to make my own decision. That's a physician's,

Scott Rae: and it's somewhat subjective, not entirely, but it's as much an art as it is a science to determine whether somebody's competent to make their own decisions. So- ... It can tell us generally what people would decide given your condition, diagnosis, prognosis, but not what you specifically would decide. That's something I wanna reserve for myself. And I think that's okay. I'm not convinced that ChatGPT or any other large language model is gonna operate under the same biblical principles that I'm operating with for end-of-life care. You know, that earthly life is not the ultimate good, that death is a conquered enemy that need not always be resisted. Under the right conditions, we can say stop to medicine. I wanna make sure that anybody who's making decisions for me, if I lose the capacity to do it myself, knows what my biblical values are when it comes to managing the end of life. I don't have any confidence that any large language model can or will ever be able to do that for me, personally and individually.

Sean McDowell: So would you discourage people from using LLMs? Because oftentimes, for me, for this particular task, it gives another perspective, pulls research together, maybe helps me think about something differently than I had. But in this case, there are so many worldview-laden ideas embedded in it, and the person's in such an emotionally fragile state and probably just wants somebody who seems objective [chuckles] and caring to make the decision for them. This seems to me to underscore the importance of making these decisions as best we can ahead of time, even though we might change, as you described. And talking to an LLM, I don't know. I lean towards discouraging it, but what's your sense on that?

Scott Rae: Well, I think it's okay for physicians- ... To employ this for, you know, time-saving purposes. But I don't see where a large language model can substitute for a physician at the bedside who has walked with you through these illnesses, and family members who know you inside and out, who know what your wishes are. I think it's a major stretch to see how an LLM can even come close to approximating that. What I want is somebody that I can talk with and interact with, who's followed my case, who knows what interventions are working and which ones are not, who's able to evaluate my level of decision-making capacity based on their skill and their experience. Those are much more subjective things that these AI large language models are just not programmed to do- ... It seems to me. Now, will they be able to do that at some point? I'm not holding my breath on that, but even if they were, I don't want that to substitute for my own decision-making. If I lose the ability to decide for myself, I'd rather have my wife do that for me than some AI chatbot.

Sean McDowell: Did this article surprise you at all, in the sense that this is from the Harvard Medical School Center for Bioethics? I don't know anything about their center, but I know the faculty at Harvard lean far more left on so many issues; it's not even close. So when I actually saw this, I was kind of encouraged. I was like, "Wow, this is a very balanced, thoughtful way to approach this as a whole." What was your take on that?

Scott Rae: I think I was encouraged, actually, to have- ... Somebody in ethics weighing in on something like this, 'cause we're not seeing as much of that as I would like to see. And coming out of Harvard, I think it's very encouraging. I'm not surprised at what they concluded, because they tend, and I think rightly so, to take patient autonomy and patients' ability to make decisions for themselves really seriously. So I think it's nuanced well. It's giving what some of the benefits could be, but recognizing that when it comes to these subjective matters of the heart and individualized decision-making, that's not really what AI is designed to do.

Sean McDowell: Good stuff. Let's look at this last story somewhat quickly. There were about a dozen stories on DEI this week, I think largely driven by some of, you know, Trump's executive orders. But this one stood out to me, and it talks about how the Boy Scouts have now officially changed their name. So after being in existence 114 years, the organization formerly known as the Boy Scouts officially changed its name to Scouting America. They dropped "Boy" from their name in 2018 and became Scouts BSA. One comment on this from Scouting America says, "The change was made to reflect the organization's ongoing commitment to welcoming every youth in America to experience the benefit of Scouting. Our new name is representative of the path we want Scouting to chart for the next century. We recently celebrated the five-year anniversary of welcoming girls into the Cub Scouts and Scouts BSA." The president said, "Though our name will be new, our mission remains unchanged. We are committed to teaching young people to be prepared for life." Now, it turns out that they have a badge that's been dubbed by some people a DEI badge. It's called the Citizenship in Society Badge, and the description says, quote, "The focus of the Citizenship in Society Badge is to provide you with information on diversity, equity, inclusion and ethical leadership. You'll learn why these qualities are important in society and scouting." They also work with a group whose stated mission is "to promote an inclusive community for Scouting America's LGBTQ+ employees and their allies. The group will strive to support diversity across the organization." They've also said, quote, "Being yourself is never the wrong thing to do." Now, they wanna keep their religious roots, you might say. The Scout Oath and Law begin with duty to God and conclude with "reverent." Your thoughts on this story?

Scott Rae: Well, there's actually a lot to talk about here- ... But let me sort of cut to the chase on this. You know, Sean, in biblical times, boys worked alongside their fathers for the majority of their upbringing, whatever their trade or occupation was, and they learned many of the needed life lessons just from watching their dads do things and listening to them. And not too long ago we talked about a story that encouraged parents to just take their kids along with them to the various things they are doing as adults. Whether the kids are entertained by it or not is sort of beside the point, but it gives them an opportunity to see Mom and Dad doing adult types of things, and that models something that's really important for them. Now, we were not in the Boy Scouts, but I did a camping program with our boys. We did Indian Guides, which, that name has been changed now, too.

Sean McDowell: Sure.

Scott Rae: But I did that for 13 straight years, and we did more camping than we knew what to do with.

Sean McDowell: [chuckles]

Scott Rae: And, you know, my, you know, my back is still recovering from sleeping on the ground-

Sean McDowell: [chuckles]

Scott Rae: ... For that long. But the time we spent with boys bonding with their dads and doing things with their dads was invaluable, and it taught them some really important lessons about what it means to be a man. So I think there's huge value in this. I think it's tragic that the Boy Scouts have abandoned their original mission of helping boys to become men. And as for the idea that they want you to learn to be yourself, I would say the reason we have ethics, Sean, is to keep us from becoming ourselves, [chuckles] to keep the worst parts of ourselves from being actualized. The only other thing I wonder about this is how the notion, this idea of toxic masculinity- ... Has been a factor in the Boy Scouts sort of changing their mission-

Sean McDowell: Mm-hmm

Scott Rae: ... A bit. You know, I think there's a lot of commentary in the culture at large that masculinity is inherently toxic, which I think we would resist vigorously. But I just wonder. My charitable read on this is that the Boy Scouts changed their mission because they had to do this or risk going out of business, 'cause the pressure on them to change who they were was just enormous a decade ago. So anyway, that's my most charitable read on it. But I think the model that we saw in biblical times had a lot about it that was praiseworthy.

Sean McDowell: I appreciate that charitable read, and I can understand the pressure inside the organization, but I still look at it and say, "Well, if the Boy Scouts are gonna go the direction they seem to be going, it would've been far better to go under-

Scott Rae: Maybe so

Sean McDowell: ... Than to [chuckles] stay the way it is-

Scott Rae: Maybe so

Sean McDowell: ... Given what I know about it." Now, a couple of things jump out to me on this. One of the things Scouting America stated is, "We recently celebrated the five-year anniversary of welcoming girls into Cub Scouts and Scouts BSA programs." What's interesting is that the concern typically has been guys moving into girls' spaces, such as bathrooms or sports. This is an example of-

Scott Rae: Mm-hmm

Sean McDowell: ... Girls moving into guys' spaces, which is different, but it raises the same kind of question: wait a minute, are there boy things that they naturally do together that are changed and different when girls are present? And the answer to that is yes, just like there are girl things they do and should do that are different when guys are present. Another observation is that there seemed to be a tension as I looked at some of their statements. On one hand, they said, "Being yourself is never the wrong thing to do," which is kind of "you be you," lean internally for your identity, which is a very critical theory, DEI kind of thing to say. But then they wanna hold on to the oath and duty to God, and I'm thinking, "Wait a minute, you actually can't have it both ways."

Scott Rae: True.

Sean McDowell: Either there's a God outside of me I conform my life to, or I look within to my feelings and, you know, you be you. It can't be both, and when I see organizations trying to do both, they just end up failing. I picked this in part not really to pick on the Boy Scouts. I mean, it was this week that they changed their name, but we're kind of at this fulcrum point where, largely because of Trump's election and the pushback with some of these executive orders, we're seeing some people say they're gonna lean into DEI. We've seen that with the NFL. We seem to be seeing that with California and a few other states. We see it with the Boy Scouts. And we see other people backpedaling. So it'll be interesting to see this play itself out. I would just caution our listeners that even if people change the name DEI, I wanna know the substance behind it rather than just the words people are using. So in some ways, I appreciate [chuckles] that the Boy Scouts are like, "This is who we are," and they make it clear, even though I think anybody looking at this critically can say, "I don't buy it. This is not really a full effort." Wanna go to some questions?

Scott Rae: Sure.

Sean McDowell: All right, let's do it. As always, we've got some great questions here, and let's start with this first one. This person says, "A lot of parenting boils down to three steps," which is what I gave: model a faith that kids find attractive, build a relationship with your kids, talk openly about faith issues. This is from the episode with Jim Daly, and he also added, "When you fail, admit your failures to your kids." He added that fourth one, which I thought was fantastic. But this individual says, "After 28 years of marriage and 26 years of parenting, my advice is simply this: tell kids at a young age that Jesus wants to talk with them, encourage them to look for him, and teach them to talk with him." What do you think of this advice?

Scott Rae: Well, I think it's headed in the right direction. I'm not sure it's enough, 'cause I think we need to talk specifically about faith issues and answer the specific questions that our kids have. And we need to model this for them, too. This is good advice, but I think it's a starting place, not the ending place.

Sean McDowell: Well said. You know, anytime I hear somebody who's been married 28 years and a parent 26 years, I wanna hear their story and gain something from them. Clearly, they know something that I don't. I mean, actually, this weekend my wife and I are leaving for a week for our 25th anniversary trip, just the two of us, hence someone else will be filling in for me next week here on the Weekly Cultural Update. But when I hear somebody say, "Jesus wants to talk with them," I wanna say, "What do you mean, Jesus wants to talk with them?" Because a little bit of a red flag jumps up for me, and I ask, "How do we expect Jesus to talk to them?" He speaks through nature. He's spoken through prophets, speaks through Scripture, but are we setting up kids to think that they have a conversational discussion with Jesus, and then as they get older, it's like, "Wait a minute, I don't hear His voice back like I do when I talk with somebody else"? That could be a problem. I suspect the individual doesn't necessarily mean that, but that's just a small caution for me. And so, yes, the Christian worldview is all about Jesus, but like you, I would wanna add more to this, and I'd also want some evidence. Where's the data that backs this up? My statement is rooted in the evidence on parenting going back to 1972; I could point toward it. Where's the data that this kind of advice leads toward faith formation and holding on to the faith of your parents? That's what I haven't seen. That's what I would wanna know. All right, second one here. It says, "I'm grateful to have survived two emergency open-heart surgeries." Amazing, by the way. "One of my neighbors died of a heart attack, while another suffered irreparable damage after attempted surgery. It's hard for me to process this, knowing that my neighbors experienced such hardship while I'm alive and healthy. How should I view the apparent imbalance and seeming injustice of our respective outcomes?"

Scott Rae: Well, first, Sean, there might be medical reasons- ... For why he did well and his neighbors did not. I don't know that, but that may be the first thing I'd look at.

Sean McDowell: Okay.

Scott Rae: But even with all things being equal, I would still say this is some of what, under the sun, this side of eternity, we just don't have answers to; some of those why questions. God has chosen not to show us how all the puzzle pieces of our lives fit together into a nice, coherent whole. And I think there's a good reason for that. If God showed us how all those jigsaw puzzle pieces fit together, we might ask for a plan B or a plan C and want some other option than what God had provided for us. So I think there are just certain things that we don't know this side of eternity. When Erik Thoennes was on a couple weeks ago in your place, he made a really helpful comment on one of these. He said, "We not only need to trust what God's Word tells us, but we need to trust what God's Word doesn't tell us." I thought that was really insightful. You need to trust God in the absence of having all the revelation that we would like to have and all our questions answered.

Sean McDowell: Well said. I think that's great. I would just ask this listener one question: What if the script was flipped, and you didn't survive, and the other one or two did? If you were in heaven and asked how you would want the others to live, what would you say? My suspicion is you would say, "Don't be full of grief and sadness. Be grateful for the life that you have, and use that remaining time to love God and love others. Live with a sense of gratitude." And even if you can't honestly say, "That's what I would say," I think in our hearts we'd say, "You know what? That's what I would want to say- [chuckles]

Scott Rae: Right

Sean McDowell: ... Hopefully when my heart is fully changed in heaven." So I think when we look at it that way, it might be easier to just say, "You know what?" Like you said, Scott, "I don't know why, but I've been spared, and God has allowed me more time. May I love and serve Him with gratitude with every breath that He gives me."

Scott Rae: Hear, hear. Well said.

Sean McDowell: Last question says, "Long-time listener, first-time questioner." Love it, by the way. "About a month ago, you discussed the labor practices of some other continents and countries. I'd like to challenge your view of the Chinese government and their role in forced labor. Without America's consumeristic demand, there would not be a need for their fast fashion, intensive slavery, indebted labor. China may own the mines, and they may have no intentions of making their working conditions safe, but it's Americans who are paying for the whole system. I bet one of you bought battery-operated items over Christmas. This is something Americans do without thinking about who we are enslaving with our purchases. How should our participation in these realities affect our criticisms?" Great question.

Scott Rae: That is a great question, and I'd say everything that this listener wrote in before the question itself is true.

Scott Rae: China does own the mines. I think they were speaking about the lithium mines in Nigeria that were employing children. So I think that's right. Our consumerism does drive some of this. Our consumerism drives a lot of things in the world, but it also drives a lot of good-paying jobs. It drives the production of products that can benefit communities and families and enable them to flourish. And it does, I think, have a degree of complicity in, you know, forced labor and terrible working conditions. Companies try their best, I think, to not do business with suppliers that have these terrible working conditions. But I think there's only so much they can do to enforce those things, 'cause the companies, like Target and Walmart and Costco, that contract with them for their products don't own the factory. They don't own the mines. They simply have the purchasing power to help regulate some of those practices. So here's the thing. I think in a fallen world, you can't avoid every appearance of evil. You just can't. And by the way, that's not what the biblical text says anyway. It says, "Avoid the form of evil." And the same term is used in Philippians 2 when it talks about Christ taking on the form of a human being, not the appearance, but the form of it. And so there are varying degrees of complicity that are more indirect as opposed to more direct, and with indirect complicity, unless you're living on a desert island somewhere, there's just no way that you can avoid being complicit with the fallenness of the world. Sin has infected so much of our hearts and institutions that it's just impossible to avoid that. Now, that doesn't mean you don't make thoughtful decisions about the products that you buy. There are certain countries I don't buy things from, certain retailers I don't buy things from. You and I have talked about some of the drugstores that are now supplying abortion pills, and how we don't think it's a good idea to give our business to those places. But you can't do everything. You can't boycott everything to which you have some moral objection, or else you'll have to live on a desert island.

Sean McDowell: Well said. Do I think China would not do this if the US didn't buy products from them? Not for half a second. But with that said, like you said, we still have to be wise about what products we buy. There can be some secondary guilt at times, so let's be careful and thoughtful about which products we buy and which we don't. This email is a good reminder [chuckles] for all of us to think about that very carefully and wisely. Good stuff, Scott.

Scott Rae: Yep.

Sean McDowell: Man, I was about to say, I'm looking forward to next week, but I will be with my wife on an island celebrating 25 years- [chuckles]

Scott Rae: Very good

Sean McDowell: ... Of marriage. So we've got Dr. Tim Pickavance filling in, kind of his debut on the Weekly Cultural Update as a sub, so all our viewers can let us know how he does.

Scott Rae: [chuckles] That's right.

Sean McDowell: And of course, I'm saying that tongue in cheek. He's gonna do an awesome job. This has been an episode of the podcast Think Biblically: Conversations on Faith and Culture, brought to you by Talbot School of Theology, Biola University. As you know, if you're a regular listener, questioner or not, we have programs in person and online, and we would be thrilled to partner with you to think biblically about the Old Testament, marriage and family, apologetics, and philosophy. We'd love to have you join us in one of our master's programs. Please keep your comments and questions coming. You can email us at thinkbiblically@biola.edu. [upbeat music] We'd be honored if you'd take a moment and give us a rating on your podcast app. I really wanna emphasize this: every single rating helps us. If we are helping you think biblically, please take a moment and give us your honest rating on your podcast app. Thanks for listening. We'll see you Tuesday when our regular podcast episode airs, in which we talk with Mitch Glaser from Chosen People Ministries about engaging Jewish people with the gospel. In the meantime, remember, think biblically about everything. [upbeat music]