It is not an overstatement to say that AI has revolutionized virtually every aspect of human life. What should characterize a Christian posture toward AI? Should we be concerned? Optimistic? How do we think biblically about the emergence of new AI technology? In this unique episode, Sean is joined by three Biola/Talbot professors who are experts in their fields and in AI: Yohan Lee, Associate Dean of Technology and Professor of Computer Science; Michael Arena, Dean of Biola’s Crowell School of Business; and Mihretu Guta, Professor of Philosophy and Apologetics.
Mihretu P. Guta, Ph.D. (Philosophy, Durham University, UK). After completing his Ph.D., he worked as a postdoctoral research fellow at Durham University within the Durham Emergence Project, which was set up in cooperation between physicists and philosophers and funded by the John Templeton Foundation. Guta’s postdoctoral research focused on the nature of the emergence of phenomenal consciousness from the standpoint of metaphysics, philosophy of mind, cognitive neuroscience, and quantum physics.
Dr. Yohan Lee has led advanced technology organizations for over 20 years in commercial industry, education, and government. He has been privileged to serve as a civil scientist (U.S. National Institutes of Health), Principal Investigator (Google AI), Chief Science Officer (Riiid Labs, Inc.), and CEO (Scaled Entelechy, Inc.). His undergraduate and doctoral studies were at UCLA in neuroscience and genomics, with a focus on the neurological basis of learning and memory in addition to precision medicine. His doctoral work centered on large-scale genomic data for health and distributed computing. In his corporate roles, Dr. Lee has led research and business units in industry, public-private partnerships, the federal government, and international academic consortia for enterprise initiatives with Fortune 50 corporations.
Michael Arena, Ph.D., is the dean of the Crowell School of Business at Biola University. He is also the chief science officer and co-founder of the Connected Commons, a research consortium that brings together business and academic thought leaders to develop and apply organizational network solutions. Prior to joining Biola, he served as the vice president of talent and development at Amazon Web Services (AWS), where he leveraged network analysis to enable employee growth, organizational culture, and innovation. Arena was also the chief talent officer for General Motors Corporation, where he helped facilitate a business transformation, which is highlighted in his book Adaptive Space. Arena also spent two years as a visiting scientist with MIT’s Media Lab researching human networks and acted as a design thinking coach within the Stanford School for three years.
Episode Transcript
Sean McDowell: [upbeat music] AI has taken the world by storm. What technology is on the horizon? What is positive with AI technology, and where should we be concerned? How can we wisely and biblically navigate our cultural moment? With me today are three experts from different fields: business, science, and philosophy, who are leading the way in how we think about, develop, and engage AI technology. Gentlemen, thanks for being willing to have this conversation.
Michael Arena: Thanks for having us.
Yohan Lee: Yeah, great to be here.
Sean McDowell: Let's jump in. I'm going to have you introduce yourselves, and just talk briefly about how AI has revolutionized your field. Yohan?
Yohan Lee: Yeah, Yohan Lee. I'm the Associate Dean of Technology here at Biola's School of Science, Technology, and Health, in Computer Science. I teach computer science, I teach AI, and the way it's revolutionized the field is, on the one hand, from a student perspective, the discipline of computer science has expanded dramatically. And so, for students who are really interested in computer science, it's about learning more and delving more into math and physics, to be honest, to be able to help shape and build new AIs that don't exist yet.
Sean McDowell: Yeah. How long have you been at Biola?
Yohan Lee: Two and a half years now.
Sean McDowell: Okay, so briefly tell us a little bit of the backstory, experience, and training you have that you've brought to your position now.
Yohan Lee: Right. Yes, so my undergraduate degree is, believe it or not, in neuroscience, predominantly in the areas of learning and memory. So it's almost like I was destined to go into [laughing] this space.
Sean McDowell: [laughing]
Yohan Lee: Then along the way, a PhD in big data analysis through bioinformatics. That's where I got my first exposure to machine learning and AI, and, at that time, it worked fantastically well in a controlled experiment. And so, to see what it is now, I just couldn't have imagined that it would've gotten here this quickly.
Sean McDowell: Which makes us think, where's it going in the next six months, twelve months, six years?
Yohan Lee: Right.
Sean McDowell: But we'll get there. Michael?
Michael Arena: Yeah, yeah. Michael Arena. I'm the Dean of the Crowell School of Business.
Sean McDowell: Mm-hmm.
Michael Arena: So I'm a corporate guy. I grew up in the corporate world. I came here about three years ago-
Sean McDowell: Okay
Michael Arena: ... Most recently from AWS, and it's completely-
Sean McDowell: AWS is?
Michael Arena: Amazon Web Services.
Sean McDowell: Got it.
Michael Arena: And it just completely disrupted this field of- ... What I study, which is management science. And, you know, thinking about productivity, thinking about human efficiency, thinking about, you know, how do we get the muck out of work and get to the- ... The work of the work? So it's just revolutionized the field.
Sean McDowell: So the word "revolutionized" is not an overstatement when it comes to business?
Michael Arena: Absolutely not.
Sean McDowell: Okay.
Michael Arena: No, and I think we've only begun to see- ... What's possible.
Sean McDowell: We're gonna get into that. I'm really interested to see how you think it might transform business. Mihretu, our newest hire in the Apologetics program. [laughing] We did a video with you, but I want you to introduce yourself again in case some people missed that, and talk about how AI has revolutionized philosophy, if you can even use that term.
Mihretu Guta: Yes. My name is Mihretu. I'm Associate Professor of Philosophy and Apologetics here at Biola. I'm a family guy, so Biola was where I studied before- ... And I came back home.
Sean McDowell: You did the MPhil program?
Mihretu Guta: Yes.
Sean McDowell: Yep.
Mihretu Guta: MPhil, and also Science and Religion as well. So I have a- ... PhD in Philosophy of Mind, so I work very closely with, you know, research on AI, because I cannot really ignore AI. Because there are claims like: AI can be conscious, we can upload emotions, AI is no different than you, and we can actually prove that you're inferior to AI. The whole premise of transhumanism is really to dump human biology, because it's a defective design, and to upgrade human biology to a new level. And, you know, computer science was born out of philosophy, so philosophers are- ... The pioneers of computer science. So how can I ignore AI? I mean, honestly, I'll be outmoded, I'll be left behind, if I turn my back on AI. So I work on AI research when it comes to philosophical issues, like: can we really upload emotions onto AI? Can consciousness be uploaded onto computer gadgets and circuits? I also teach philosophy of neuroscience. I teach AI as well. It's impossible for me to ignore ... Multidisciplinary approaches to all sorts of issues having to do with AI and the human brain.
Sean McDowell: So philosophers of mind have been wrestling with these issues for a long time, but now, in the past maybe two or three years, where we have tools like ChatGPT, it's felt more urgent and pressing. Is that fair in your field?
Mihretu Guta: Well, people are predicting that we have now successfully answered the Turing, you know- ... Problem, [clears throat] that, you know, Alan Turing actually introduced in the 1930s and '40s. We haven't. We haven't. We haven't even remotely answered that. And that's a mimicry. Like, okay, you can come up with a computer, put it in a classroom, and then put another human being somewhere, and, you know, you can play that toy, you know, exercise. But there are metaphysical and ontological setbacks in our way. You know, we can never be in a position to create consciousness on gadgets, because we do not have that metaphysical property, and-
Sean McDowell: Okay, now hold that thought. We're gonna come back to you, because this is really important, when we get into metaphysics and ontology. Will consciousness... will AI become conscious is a huge question. Tell us maybe one or two publications. I want people to understand and appreciate the kind of work that you're doing, that you've done-
Mihretu Guta: Yeah
Sean McDowell: ... In the past two or three years on AI.
Mihretu Guta: Yes. I am editing a book on AI, consciousness, and unconsciousness with Wiley. It's a group of international thinkers, you know, from- ... Disciplines like neuroscience, medical science, and computer science, and all of that. And hopefully, that will be published in 2027. And I'm actually almost, [lips smack] in the near future, going to be able to sign a contract with Bloomsbury, on AI and human flourishing.
Sean McDowell: If people don't know Wiley and Bloomsbury, they're highly respected-
Mihretu Guta: Mm-hmm.
Sean McDowell: -leading publishers. You've done work with Cambridge. I'm just bragging on you 'cause you're in my department. I want people to understand-
Mihretu Guta: Yeah
Sean McDowell: -what you bring to the table. So-
Mihretu Guta: I have another book coming out with Routledge, uh-
Sean McDowell: Excellent
Mihretu Guta: ... And I'm submitting it next month, so I have so many other publications.
Sean McDowell: Good stuff. Well, in terms of moving into the conversation, give us an example of maybe one or two areas in your field where AI has just revolutionized the way we do business or science, something that blows your mind. You mentioned maybe a couple of years ago we never could've imagined this.
Yohan Lee: Yeah.
Sean McDowell: What's something you pause, and you're like, "When I was in grad school, I don't think I would've believed if somebody had told me this was possible with AI?" Do you have an example of something like that, or am I overstating it?
Yohan Lee: No, you're absolutely right. I think for me, the one that consistently impresses me and gets me excited is machine translation. That's the ability of AI to translate from one language to another at an extremely high level of sophistication.
Sean McDowell: So give us, give an example of how they would do that. My first thought goes to, like, Bible translation-
Yohan Lee: Yes
Sean McDowell: ... And how amazing that could be.
Yohan Lee: Yes.
Sean McDowell: That we've been laboring for years, and all of a sudden, we have this tool.
Yohan Lee: Yes.
Sean McDowell: And, I mean, our... Maybe this is outside your field. Are people saying within our lifetime, within a short period of time, we'll have the Bible in every language? Does that seem doable?
Yohan Lee: That's the hope, right?
Sean McDowell: Okay.
Yohan Lee: Now, of course, everything is harder than we expect, because what predicates that capability is whether or not we have enough of a corpus, a large body of knowledge, in both languages so that you can make effective cognates, if you will. And so, some languages are considered high-resource languages, like English, Spanish, et cetera. But some languages are what we consider low-resource languages, because we don't have a lot of material that spans the language in a digital format, such as on the internet, in text, in PDF, or in audio recordings like this. And so, when it comes to low-resource languages, AI struggles quite a bit. But in high-resource languages, you get everything from conversational translation to business translation to academic translation, and then sometimes highly nuanced written verse, and in that case, it can be incredibly powerful. So I hope within my lifetime we get- ... The Bible in every single language on Earth.
Sean McDowell: That'd be pretty amazing if we could do that. I'm curious, how else in science, in your field, would this technology help? 'Cause for me, I did my dissertation about ten years ago. I was studying the death of the apostles, so I found early sources in French, German, and Armenian.
Yohan Lee: There you go.
Sean McDowell: Tracked down people, and I paid them to translate it word by word for me.
Yohan Lee: Yes. [chuckles] Yes.
Sean McDowell: Obviously, AI could do that in the click of a button.
Yohan Lee: Mm-hmm.
Sean McDowell: But how else is this tool being helpful in the world of science and beyond?
Yohan Lee: Yeah. In the world of science, the great thing about science is a lot of it is just predicated on numbers and numerics. And so, because of numbers, numerics, analyses- ... You are able to analyze things at a scale that we just haven't been able to before.
Sean McDowell: Mm-hmm.
Yohan Lee: And then one of the challenges in a lot of experiments over the years, as we collect a lot of digital data, is that you can't compare apples and oranges sometimes. But now, when you get thousands and thousands of analyses, all the different possible analyses that have been conducted in a particular area of, let's say, medicine, for example, then you can actually get smarter quicker. You can find out where- ... There could be a white space, an interesting area of discovery that has not been plumbed to its depth, and then find novel approaches that we never considered before. That's what I find most exciting about this.
Sean McDowell: I saw an article about how AI was analyzing, like, whale language and communication.
Yohan Lee: There you go. There you go.
Sean McDowell: And I was like, "What?" This never crossed my mind-
Yohan Lee: Yeah
Sean McDowell: ... That we could analyze the depths of it, and it seemed to break new ground from my understanding-
Yohan Lee: Yes
Sean McDowell: ... Limited understanding of whale language. That was the narrative, but it seems like that's just the beginning in many ways. Great example. Michael, you said it's not an overstatement to say that AI has revolutionized business. Give me some example of how AI just blows your mind.
Michael Arena: I'm st- I'm still trying to catch up with the neuroscience and the whales. [laughing]
Sean McDowell: [laughing]
Yohan Lee: [laughing]
Michael Arena: So-
Sean McDowell: There you go
Michael Arena: ... So, you know, this may sound elementary compared to those things, but it is the great equalizer from a knowledge standpoint. So just think about this: it's upleveling people who are at the bottom, or the beginning, of any given knowledge set, and it's moving them up to the median or above instantaneously. I mean, let me give- ... Like, real examples.
Sean McDowell: Yeah, do it.
Michael Arena: You know, I studied this a lot, like onboarding someone into an organization-
Sean McDowell: Mm-hmm
Michael Arena: ... And to learn- ... What that organization does and what to do in their job. On average, you know, that's about a twenty-four-month to thirty-month horizon for me-
Sean McDowell: Wow
Michael Arena: ... Or you to come into an organization- ... And be at, like, the median of the knowledge set of that organization. People who are using AI in an AI-native environment can do that in three to four months. So I mean, it's just eradicating-
Sean McDowell: Wow
Michael Arena: ... Like, bringing people up to just the average in such a way that they're being- ... Productive more often, up to proficiency, and they can then start to focus on other things. So, I mean, that's just one example. There are so many more.
Sean McDowell: So what would that look like? I imagine in twenty-four to thirty months, I'm talking with colleagues, I'm sitting in business meetings, I'm just absorbing it by being there, asking questions, maybe there's some formal training. What does AI do differently that it takes, what, a quarter of the time, roughly?
Michael Arena: Well, I mean, you can think of knowledge as being both taught and caught.
Sean McDowell: Mm-hmm.
Michael Arena: Like, there's, you know, there's this, you know, form of knowledge where you can read it, you can study it, you know- ... And that has an uptake of whatever time frame it is. But then there's also the things that are caught-
Sean McDowell: Okay
Michael Arena: ... Like the things that are modeled, the things that-- And AI's really helped to accelerate the learning curve on both of those. So it's, it's been quite dramatic.
Sean McDowell: So is this-- Just one more question. Is this it-
Michael Arena: Yeah
Sean McDowell: ... Like the leaders of businesses are using it to educate, or people are just getting online and asking questions and using AI to more quickly learn, or is this some combination of both?
Michael Arena: It's a combination of both. And I think we're still trying to figure out exactly how to do this.
Sean McDowell: What it looks like.
Michael Arena: Yeah.
Sean McDowell: Okay.
Michael Arena: So some organizations have figured it out and are already accelerating that learning process. Others are, you know, catching up to figure out how to do that. But at the end of the day- ... It's made us, and I hate to say this next to my technology [laughing] friend, but it's made us all technologists. Right? Like, I can go in and vibe code a website, and, you know, I know coding about as well as I can speak Spanish, which is just a little. [laughing] And, you know, it's kind of equalized the playing field- ... For all kinds of different professions.
Sean McDowell: Oh, that's so interesting. In some ways, you're right, it makes us all technologists, but what does that do to the experts who used to be the older, seasoned, wise sages of a community? We saw this shift with Gen Z, where older generations were like, "How do I use social media? How do I set up Facebook," back in the day, "How do I do YouTube?" And younger generations became the experts. So it flips that on its head in many ways in business as well.
Michael Arena: Yeah.
Sean McDowell: Very interesting. Now, when you talk about revolution in philosophy, these things-
Mihretu Guta: [chuckles]
Sean McDowell: -move a little bit more slowly-
Michael Arena: Correct
Sean McDowell: ... And methodically. So I wouldn't expect there to be some technology that just-
Michael Arena: Right
Sean McDowell: -flipped it on its head. But maybe give us an insight of, since AI's been here over the past two or three years, how these conversations have heated up in the world of philosophy.
Mihretu Guta: Yeah, I think... So you don't see us, like, in a white gown, you know, sitting behind sophisticated lab equipment and so on, but we really think about AI, because it really is giving us insights into the nature of consciousness. So everything that you see around you, any AI tool that has ever been created, is an instance of the complexity of human consciousness- ... And nothing else. This is literally the outcome of a three-pound organ inside your skull, this mysterious organ that God has created and designed. Like, we are trying to understand what the nature of consciousness is. So, okay, technologists will be inventing, enjoying, introducing; we're using that. It's great. But what did it take for such an invention to be realized, for example? I think it took your consciousness. But what is consciousness? We have no clue. We have no clue. It's one of the most elusive properties that we all are carrying, and that's the property that's allowing you to invent all those magnificently impressive and astonishing things.
Sean McDowell: Now, let me jump in here. I did the MA Phil program in the early 2000s, and I had Metaphysics, Consciousness 1, Consciousness 2, and there were leading thinkers, people like Jaegwon Kim, who would say things like, "We don't have a clue how you get consciousness." You just said, "We don't have a clue." With the advent of AI, are a lot of philosophers more confident that they can solve that problem now, or is there still a sense of, like, we have no idea how consciousness could emerge by itself in a naturalistic world?
Mihretu Guta: So there are diverse opinions.
Sean McDowell: Okay.
Mihretu Guta: So physicalistically-minded philosophers might think, "Okay, yeah, you're not different to begin with. You know, you're just a biological machine, and we are modeling that, inventing computers. So there isn't anything very surprising about you; it's just our own finitude, like, we're not there yet to crack the code. But you are a biological machine. You have nothing by way of, like, a soul. You're not a soul, you're not a spiritual being in any way, but we will get there, and we will answer this question." But the problem is the origin of consciousness: is it being caused by the complexity of the nervous system, or is it something that enters into the constitution when the nervous system is ready? Like, just as you would expect a guest into your home when you know, okay, everything is in place. Now you open the door, and the guest appears. Is that how it works? I have actually published defending that view. Uh-
Sean McDowell: Oh
Mihretu Guta: ... And we literally have no clue. So JP, myself, and some, you know, dualist philosophers, we are incredibly skeptical of- ... What the physicalists are really telling us is the case. It's not as simple as that. But the most surprising thing about this stuff is everything that you see around you is literally the result of consciousness. What exactly is consciousness? I think it feels like God has thrown away the key, and then he tells us, "Just enjoy." I mean, it doesn't mean that we cannot understand the nature of consciousness; I'm not saying we can't make progress. But to date, we haven't made any progress in answering the origin of consciousness or the nature of consciousness. But we know what it is. It's too familiar, but too elusive.
Sean McDowell: That's interesting. I love it. As a philosopher, I have so many questions for you, but we'll come back to it. We're gonna get to some of the positives about AI. We're gonna get to how we might think biblically about AI. But I want to know what concerns each of you most. Like, for me- ... Loss of jobs would concern me. I'd love your take on that. I just heard a podcast this morning about how certain figures in Hollywood are, like, ready to just kind of pull the trigger on using AI in a way that will potentially result in a lot of loss of jobs in that world.
Yohan Lee: Yeah.
Sean McDowell: I'm concerned about the loss of the ability to know truth from fiction, especially with some of the videos that come out. I mean, I was speaking at an event, and somebody, just for fun, was like, "I'm just gonna make up an introduction for Sean from Trump," [chuckles] and he, just to be fun, chose the president. And I sent it to a few people. They're like, "Wait a minute, is this real?" Which made me laugh, that they thought the president would give me [chuckles] an introduction. I'm like, "Do you actually think this is real?"
Yohan Lee: [laughs]
Sean McDowell: But they paused for a minute. So, like, the inability to know truth from fiction concerns me. And some of the dehumanization, just in the sense of... I'm all for saving time, but when AI does things that humans should do-
Yohan Lee: Yeah.
Sean McDowell: Like, I remember the first time I heard a guy that broke up with a girl texting. I'm like, "You've got to be kidding. You've got to look somebody face to face. You owe that to them." So I have a bunch of concerns, but I'd love to go kind of just field by field and tell me what concerns you most, and we'll just keep the pattern going this way.
Yohan Lee: [chuckles]
Sean McDowell: Yohan, go ahead.
Yohan Lee: Yeah. No, I mean, for me, it is that dehumanization. That's the thing- ... That concerns me the most. I mean, these are technologies, these are tools, these are products that are meant to be solving a problem and providing some form of human benefit and advantage. And when products like this cause a 13-year-old and a 16-year-old to commit suicide-
Sean McDowell: Oh, man
Yohan Lee: ... No, that's a failure, in the most dramatic terms, right? And so these are not just nascent technologies that you just leave on the shelf. I mean, these must be used with discernment. Then, getting into the inability to think: as technology gets more complex, as capabilities get more complex, then, because of profit motive or efficiency, you're gonna start seeing a lot more synergy. Now it's AI for self-driving cars. Will it be airplanes one day, right? Will it be-
Sean McDowell: Interesting
Yohan Lee: ... The next, you know, rocket ships? You name it. Anything that is typically a crewed vehicle, could that then be replaced with an AI? It may be, and there may be tremendously good reasons for it, and I can be a proponent, and I can also be a critic. But the part that I think really matters is that humans need to be able to handle the corner cases when technology fails. Humans need to be able to solve and understand- ... What's going on when it doesn't work. I mean, we just had a massive global outage of a tried-and-true technology of the last 15 years. That cost a lot of time and money, right? And again, time and money is just time and money. It's not at the same scale of value as human life.
Sean McDowell: Sure.
Yohan Lee: But the more that we offload our thinking to an AI, the less equipped our students and our future operators and engineers and scientists of the world are gonna be to do the math and figure out: when did the code break, when did the technology break, what was the corner case that it mistook? That's important, because for anything to be successful, you have to be able to support it well, operate it well, maintain it well, and troubleshoot it well, and you have to know how to take it offline and replace it with something in the meantime, for basic business continuity and, most often, health and human safety. So we need to be experts in the technology when it breaks, 'cause it always breaks.
Sean McDowell: So it sounds like you're saying there's dehumanization on two levels.
Yohan Lee: Yeah.
Sean McDowell: Number one, when we export things humans should do to technology-
Yohan Lee: Yeah
Sean McDowell: ... We lose something of what uniquely makes us human and flourish.
Yohan Lee: Yeah.
Sean McDowell: On the other hand, the more we export to technology, and that technology fails, and it has failed, and we see it fail-
Yohan Lee: Yes
Sean McDowell: ... Then we're caught in this cycle where we don't have the ability to really fix it-
Yohan Lee: Correct
Sean McDowell: ... And catch it, because we're so dependent on it. Which makes me think, and we could come back to this, Mihretu, that oftentimes people would say, "What are you gonna do with a philosophy degree," right? Like, what's the point?
Yohan Lee: Mm-hmm.
Sean McDowell: I think I heard Lee Strobel in a talk here at Biola like 20 years ago. He told a joke: "What do a philosophy major and a medium pizza have in common? Neither can feed a family of five." Like, the joke is-
Yohan Lee: Wow
Sean McDowell: ... You can't do anything with philosophy. If you're right-
Yohan Lee: Mm-hmm
Sean McDowell: ... Actually, learning to think critically, uniquely human skills, might be gaining in value-
Yohan Lee: Yeah
Sean McDowell: ... In a way we didn't appreciate in the past.
Yohan Lee: Absolutely.
Sean McDowell: That, that's a really interesting way to look at it. What concerns you in the world of business?
Michael Arena: There, I gotta dive in on the human skill thing-
Sean McDowell: Do it. Do it
Michael Arena: ... Here first. I mean, I think that a philosophy degree is gonna matter more than ever in the future.
Sean McDowell: 100%.
Michael Arena: I think that a liberal arts education-
Sean McDowell: Yeah
Michael Arena: ... To think critically- ... To reason, to be able to dive deep into, you know, what's happening here and what are all the unintended consequences, is more essential to our survival-
Sean McDowell: Yes
Michael Arena: ... Than ever before. So... I'm not afraid of the tech. I'll start with that. I'm afraid of the humans. And what I mean by that is I'm afraid of the thinking of the humans who design and build the tech-
Sean McDowell: Yeah
Michael Arena: ... And I'm afraid of us, as users of the tech, and how we may misuse, abuse, and, frankly, give away our freedoms as a result of it. So for me, the existential crisis is on the human side, not the tech side. We can- ... Build guardrails on the tech. But we in the liberal arts, you know, philosophy, critical thinking, psychology, we've got to be thinking about all the human consequences. And yes, as the business guy, yes, this is going to affect jobs. There will be a pretty radical job dislocation. I mean, we're seeing it already. In the long term- ... I'm not as fearful about that, because-
Sean McDowell: Okay
Michael Arena: We've shown the resiliency to recreate new jobs and come up with new ideas. So I think that's a threat in the near term. As for the long-term threats to humanity, there was a great study done by Elon University, and it asked this simple question: "What's it going to look like to be a human being in the year 2035?" And the experts all said things like, "Well, we're going to lose social intelligence." So our ability-
Sean McDowell: Wow!
Michael Arena: Which you could argue has already begun, right, with the era of social media. So the ability, as you said earlier, Sean- ... To sit eyeball to eyeball and engage in a meaningful, relational, communal interaction is going to begin to dissipate if we're not careful about it. Then, cognitive dependency. You know, certainly- ... There have been studies where, you know, if we over-rely on AI, our brain doesn't have the same cognitive load, doesn't make the same- ... Synapse connections. And those are all true, but the one that scares me most from that study is the loss of our human identity.
Sean McDowell: Mm-hmm.
Michael Arena: And, and just think about that for a moment. Just play with Sora 2, just to... And I could pick anything, right? I could pick anything.
Sean McDowell: Yeah, yeah.
Michael Arena: I don't know how many have played with Sora 2, but, like, I can put myself in the middle of a story as the hero. I can make myself the hero. I can make myself anything I want to be in this artificial world and give up my God-given identity, you know, being created in His image, for something that the world has made. So those are the things that, you know, are incredibly alarming to me.
Sean McDowell: So from the business angle, we're moving from a non-AI to an AI world, and probably for those in the middle, we're going to see more loss of jobs. But your sense is we might kind of settle in, that once we have people who weren't trained 30, 40 years ago in such a different world, we'll adapt economically. That would be your sense. But the bigger concern is... Let me frame it in a way; tell me if you agree with this.
Michael Arena: Mm-hmm.
Sean McDowell: I'm-- I suspect all of you agree with this, that technology is not neutral.
Michael Arena: Yeah.
Sean McDowell: And I don't mean morally neutral. That's another question we could talk about. But technology affects us.
Michael Arena: Mm-hmm.
Sean McDowell: Like the airplane made the world different when you could travel to Europe in a way people couldn't before. Air conditioning changed things. Watches changed things. Social media changed things. Your concern is that AI is going to affect what we think and how we experience being human and shape us in ways we don't even really anticipate.
Michael Arena: I think it could.
Sean McDowell: It could, okay.
Michael Arena: And I think, I think our responsibility is to start thinking about it much more holistically. You know, so as a business person- ... I'm training every business student that comes through Crowell School of Business in AI literacy, otherwise they will be at a significant disadvantage. [clears throat] But just as import-- more importantly, I'll say, foundationally below that, is how do you think ethically-
Sean McDowell: Mm-hmm
Michael Arena: ... About the use of AI? So one corner of what we call the triangle, the top being AI literacy.
Sean McDowell: Mm-hmm.
Michael Arena: Another corner being, how do you think about this from, you know, as a biblically centered institution through a set of biblical principles and with an ethical lens? And then the other is, how do we teach you to... How do we teach the enduring human skills? Because it turns out that the technical skills have a shelf life, and that's shrinking radically because AI is augmenting that. But the human skills: How do you engage in a conversation? How do you think critically? How do you, how do you influence other people? Those things are enduring. So if we-- So my hope, on my good days, when I'm hopeful [laugh] about this-
Sean McDowell: [laughing]
Mihretu Guta: [laughing]
Michael Arena: ... If we can teach AI literacy and do it with a set of ethical principles and teach people how to be better human beings that can influence one another, positively influence, you know, then I, then I'm very hopeful. On the days where I think we over-index towards AI literacy only, without those two foundational traits, that's when I get scared about the internet.
Sean McDowell: Last question, quick follow-up: Do you have a sense of how much that's being done in the business world?
Michael Arena: It's interesting. It's, if you ask the business world this question- ... They will say, "Yes, by all means, we're thinking about this ethically." If I were to ask, you know-
Sean McDowell: [laughing]
Michael Arena: ... Those of you who have studied ethics, are they thinking about it that way? The answer is, they haven't been trained to. So, so the short answer is- ... I don't think there's, like, malintent here, with a few exceptions.
Sean McDowell: Yeah, that's fair.
Michael Arena: We could debate those.
Sean McDowell: That's fair.
Michael Arena: But I think what it is, and this is the liberal arts education and why it's so critical these days: I think we have not taught people how to think ethically, how to ask these questions, and we've tossed them into these technology engines. They're building things based on their own moral code, and usually, as Yohan said already, with the capitalistic mindset that speed matters disproportionately. So all these other things are coming along later, maybe, but they haven't been trained how to think about these things.
Sean McDowell: Oftentimes, what I've seen is ethics follows the technology, 'cause it's just too powerful and transformative. You're trying to do it differently here, which I appreciate. As a philosopher and apologist, Mihretu, what concerns you the most about AI?
Mihretu Guta: I'd like to begin by thanking Yohan and Michael. Honestly, I'm profoundly grateful for your observations. This is one of the areas where I'm vocally, extremely critical about the relevance of AI. Critical thinking is really irreplaceable. It's not something that we can negotiate over.
Michael Arena: Yeah.
Mihretu Guta: It doesn't matter what you do, it doesn't matter what your discipline is, it literally doesn't matter. That's a requirement. That's a prerequisite for you to succeed in anything that you do. So I think if AI is not handled carefully, if we are outsourcing everything to AI, the end goal of that kind of commitment to AI is going to be disastrous. So-
Sean McDowell: Yeah, it'll make us data clerks.
Mihretu Guta: Completely. So organic thinking should be taken very seriously. Organic thinking shouldn't be handed over to AI. Imagine, there's an irony here. These people who are writing these programs or algorithms and so on, they are using their mind, right? They are effectively using [chuckles] their mind.
Sean McDowell: Mm-hmm.
Mihretu Guta: But they are telling the rest of us, "No, no. Don't read the book. Here is the AI summary. Don't even put your finger on the book. You don't have to even know the color of my book. You pretend as if you're an expert." Okay, that's a disservice. You need to read a book from cover to cover if you really want to be a genuine expert. And also, our conception of knowledge has to change. Knowledge has to be pursued for intrinsic reasons-
Sean McDowell: Amen!
Mihretu Guta: ... Not only always- ... For financial gain
Sean McDowell: Or instrumental.
Mihretu Guta: Yeah.
Sean McDowell: Okay, explain the difference between intrinsic versus instrumental value of thinking.
Mihretu Guta: Yeah. Intrinsic knowledge is knowledge that you pursue for its own sake. There's nothing attached to it, like, "Oh, I want to make X amount of money," or, "I want to be famous," or, "I just want to impress people." Those are very instrumental pursuits. Instrumental value, or knowledge, is, for example: let's say I want a degree in computer science because I just want to make money. I'm laser focused on making money. That's it. I really don't care. So I think we need to take this extremely seriously, because it's already causing serious damage. There are college students who cannot read. There are high schoolers who cannot read. A study came out of MIT; I don't know if you were referring to that study.
Sean McDowell: Yes.
Mihretu Guta: Among ChatGPT users, brain connectivity is almost undetectable.
Sean McDowell: Yeah.
Mihretu Guta: And the brain-only users, which means people who really struggle and sweat over reading and writing in an organic way, it's like a California wildfire. You can actually see the brain is lit up. There's a neuroscience concept, as Yohan knows, and Michael, you also know: neuroplasticity, for example. Neuroplasticity is the idea that how much you challenge your brain is proportional to how effective you become in terms of having this rich cognitive life. The less you challenge yourself, the less you get out of it; the outcome you get is proportional to how much you challenge yourself. So I think, in business school, for example, there has to be a philosophy of business, where students are not only taking in this skill, which is extremely important, AI literacy is extremely important- ... But they also need to be equipped with a business philosophy. Like, what kind of person should you be- ... As a businessman? How should you interact with other people? How should you handle difficult situations? Not only solving tech-related problems, because human beings are problems themselves, right? [chuckles] They are incredibly difficult creatures to deal with in some situations. So how can you manage and navigate that environment? This cannot be done without critical thinking faculties.
Sean McDowell: Amen to that. And by the way, you're a dualist, you and J.P. Moreland, two of the leading defenders of the idea that we're body and soul today. So when you talk about neuroplasticity, we have a soul, an immaterial component. We have a physical component, and they interact. What we do with our minds affects our bodies. What we do with our bodies can affect our minds. When we export some things we're supposed to do with our minds, we actually see the negative effects in our bodies.
Mihretu Guta: Understood.
Sean McDowell: Hence, using ChatGPT in certain fashion affects the processing and development of the brain.
Mihretu Guta: Imagine, the memory capacity of ChatGPT users is shrinking down to seconds. They don't even recall what they posted, you know, five minutes ago. Why? If you haven't sat with something, if it literally is not something that came out of your own effort, what do you expect to remember? Let's suppose I have Mike in the math class. You know, you are a math genius, and I hired you. I gave you money, and I am there, I'm a student, and you are my ChatGPT.
Sean McDowell: [chuckles] Mm-hmm.
Mihretu Guta: I show you the calculus problem. "Mike, okay, here it is. What should I say?" And you tell me, and I write that down, and then I get an A on the calculus exam. [chuckles]
Sean McDowell: [chuckles]
Mihretu Guta: Do you hire me as a mathematician in your school? Like-
Sean McDowell: No chance
Mihretu Guta: ... That's literally what ChatGPT is all about. You put in the prompt, and it spits information out to you, and that's not yours. If you write an essay by downloading from ChatGPT, you haven't written anything. You need to admit that. So I think we need-- Us Christians especially: the life of the mind, loving God with your mind, means literally sweating, spending countless hours doing your own thing genuinely, with integrity. And that's not a loss. That's exactly how you can show your own integrity. That's how you can prove your own genuine expertise. You can't cut corners. You'll be caught. If you fake your way, you will have no hiding place. If you stand in front of me as a mathematician and you have faked your way, I can make your life miserable in half a second. Here is a whiteboard, here is a marker. Go ahead and show me derivatives and so on. Just show me. And that's a reckoning moment for you. Faking your way is not going to help you, and people need to understand that.
Sean McDowell: This distinction-- Oh, yeah, go ahead.
Michael Arena: I just want to follow up on that because, you know, this whole dualist... I've actually studied your work and J.P.'s work, and I think this whole dualist approach, it's thinking, but it's also experience, the inner experience. And I think about it this way: the Turing test actually was beaten, you know-
Mihretu Guta: Mm-hmm
Michael Arena: ... In a university just south of where we are right now. But it was beaten because the machine mimicked human emotion; it never felt it. It never experienced it. It doesn't know what love really is. It just knows how to mimic those emotions, and I think that's the real danger. This inner experience, both in learning and emotionally, is something that AI will never truly have, and that's who we are, you know, as image bearers, and we've gotta be very careful not to give that up.
Mihretu Guta: I- can I ask something?
Yohan Lee: Please, yeah.
Mihretu Guta: This is a brilliant point, because, [clears throat] what Michael was referring to is what we call first-person experience.
Yohan Lee: Mm-hmm.
Mihretu Guta: The ability to introspect your own mental states. For example, right now, as I engage in this kind of conversation, in the back of my mind, I'm literally running so many other things.
Yohan Lee: Yeah. [chuckles]
Mihretu Guta: I am aware of those things, but they are not relevant for me. I'm not gonna let them come to the surface of my consciousness and be part of this conversation. That ability is incredibly mysterious. Imagine- ... You literally are aware of what's happening inside your mind. Not brain, mind. The brain is important just to facilitate. We're not our brains. So first-person data is 100% inaccessible to any neuroscientific study. You can peer into my brain as much as you want; you can come away with blood flow and, you know, neurotransmitters and electricity and proteins and water, and so on. But where is my information? Where did I store it? You can't see my beliefs, you can't see my desires, you can't see my plans, my regrets, and so on. If I am not a soul, where are those things? If you have access to my physical brain, why can't you actually tell me where all that information is? Where?
Yohan Lee: Well, memory, 'cause, you know, memory is storable and identifiable, so let's- ... Let's, let's not forget that piece.
Mihretu Guta: Mm-hmm.
Yohan Lee: I mean, that's been discovered for a little over 30 years.
Michael Arena: The technology.
Mihretu Guta: The memory-
Yohan Lee: Right
Mihretu Guta: ... Yeah.
Yohan Lee: Yeah.
Mihretu Guta: So I have actually a great skepticism when it comes to memory. Yes, we can know certain parts of your brain- ... Hippocampus and synaptic connections, and so on-
Yohan Lee: Mm-hmm
Mihretu Guta: ... But no one has ever been able to actually show me where that information is. Yes, you can show that when these structures get damaged, you would not be able to remember. And that's correlation. That's not causation or identity.
Yohan Lee: Yeah.
Mihretu Guta: Those two things have never been shown in any discipline.
Yohan Lee: The ability for memory formation and where those are located has been shown, but-
Mihretu Guta: Yeah
Yohan Lee: ... More importantly, though, the thing the two of you brought up, and this is a little controversial with Michael here, is that friction is really important. Like, I love friction. I know that goes against some of the things that you say at times. [laughing]
Michael Arena: [laughing]
Yohan Lee: But yeah, I mean, friction is where we learn. But more than that, it's when I think about what's at the end of the line, right? At the end of the line, if you're not delving deep into these faculties, developing these faculties, these capabilities, these skills, and again, I'm not just all about skills here, people are then leading their lives toward misery, whether they know it or not. You know, we're talking about the vita bona right now, right? What is the good life? What is the richness of life? What's the point of living? The more you offload, the... Like you said, the more you fake, you're just heading toward misery. Whereas, why are we enthralled as human beings? Why do we have joy as Christians? It's because amidst all the friction, we've learned to do really good things, and our product is not about being productive. It's about actually doing things in this material, real, natural world, and recognizing that a lot of the time, these things that we do are good, and they're beautiful, and that brings joy and satisfaction amidst all the challenges and all the muck. And so for me, it's always about how we want to train our students. Because, let's be very clear, as academicians, what are we in the business of? We're not just in the business of academic formation. We're in the business of spiritual formation as well. And so-
Michael Arena: Amen
Yohan Lee: ... What does that mean? Helping develop people to live and have and enjoy a better life, because in our beliefs, what do we believe? That the chief end of man is what? To love the Lord and know Him as fully as possible-
Michael Arena: Amen
Yohan Lee: ... And then enjoy Him, and what this means on this planet. And, this is where, like, the evangelist in me kind of kicks in, right? Like, when you do street evangelism for the first time, it is absolutely terrifying. [laughing]
Michael Arena: [chuckles]
Yohan Lee: It brings you that reckoning moment, like, "Oh, my gosh, what am I doing out here on the beach, on Third Street Promenade, in Santa Monica?" right? But when you do it, because you believe in it, you literally believe that you are here to bring good to the world, that's living. That is pursuing, and then recognizing that all the things in your life experience, the memories, the emotional context, can be brought to bear to bring tremendous good to people around you that you've never met before. And what throws me off half the time when I do this is that I am actually surprised that people want to listen to me, that people actually care that I am out there on the street trying to share with them the word of God, that there is hope in this world, that there is good out there. What blows me away is not the doing, you know, and just like Michael says, it's not about the tech, it's about the person. What blows me away is that people are actually receptive.
Sean McDowell: Hey, Yohan, let me jump in-
Yohan Lee: Yeah
Sean McDowell: ... And ask you this before we-
Yohan Lee: Yeah
Sean McDowell: ... Shift back. One of the arguments that Mihretu is making is a common argument for the soul.
Yohan Lee: Yeah.
Sean McDowell: That there's third-person access.
Yohan Lee: Yeah.
Sean McDowell: You could study someone's brain-
Yohan Lee: Mm-hmm
Sean McDowell: ... But then there's first-person access that you can only get by asking the person to reveal it.
Yohan Lee: Yes.
Sean McDowell: So I could know, in principle, maybe your emotional state-
Yohan Lee: Yeah
Sean McDowell: ... Angry, anxious, sad, but I could never know what's driving it unless you tell me. So there's an interaction between the mind and the brain in terms of maybe where memories are and how they're stored.
Yohan Lee: Mm-hmm.
Sean McDowell: But the content of that would require a first-person-
Yohan Lee: Yes
Sean McDowell: ... Reveal. You agree with that?
Yohan Lee: Yes, I-
Sean McDowell: Okay
Yohan Lee: ... And I have to, because neuroscience has indicated-... From what are called near-death experiences, NDEs.
Sean McDowell: Love it.
Yohan Lee: People who have essentially been brain dead, potentially physiologically dead on the surgical operating table, and after they're resuscitated and revived, they can completely recall the events that were happening in that operating room while they were unconscious and unable to see, unable to hear, whatever it is, right? And enough of those instances have surfaced in the academic literature, not on the order of tens or hundreds, but literally thousands. That's not a trend, that's a pattern. That is observed phenomena. And whenever you have observed phenomena like that, as a scientist, I must reconcile that that is a real thing, right? And it's been reproduced and reported literally thousands of times. So that indicates there is something in the first person, distinct from the physical, biological reality of the brain, that is able to be aware, able to hear, listen, cogitate, and also process, and then be able to indicate that there was an emotional component there, a physiological sensory component there, completely devoid of the human corpus. And so, yes, the evidence indicates that I have to believe that there is a first person.
Sean McDowell: And the key to that is... And then I'll come over to you. The key to that-
Yohan Lee: Yeah
Sean McDowell: ... Is people come back and have information-
Yohan Lee: Yes
Sean McDowell: ... They could not have had while their brain is not registering-
Yohan Lee: Yeah, correct
Sean McDowell: ... Any activity.
Yohan Lee: Yeah.
Sean McDowell: One more thought, then we're going to shift back to AI.
Mihretu Guta: Yeah. So I'm going to go back to memory.
Sean McDowell: Yeah.
Mihretu Guta: So I've recently given an interview on memory, and you're absolutely right. I mean, the hippocampus-
Sean McDowell: Mm-hmm
Mihretu Guta: ... For example, is implicated in, you know, facilitating memory and-
Yohan Lee: Yeah, the DG, CA3
Mihretu Guta: ... Synaptic connections, and so on. Yeah. So here's a conundrum about memory. Let's say the three of us read a book, and we threw that book away, and we meet like this- ... And all of a sudden we start talking about the book. What went from that book into you, wherever it went? You didn't chew up the book.
Sean McDowell: Mm-hmm.
Mihretu Guta: You didn't.
Yohan Lee: Protein synthesis. You produced- [laughing]
Sean McDowell: [laughing]
Mihretu Guta: ... A memory of the book. That's a model.
Yohan Lee: Yes.
Mihretu Guta: That's a theory.
Yohan Lee: Yes.
Mihretu Guta: So we don't confuse model and theory-
Yohan Lee: Mm-hmm
Mihretu Guta: ... As if it explains the mysterious aspect of how memory actually works.
Yohan Lee: Mm-hmm.
Mihretu Guta: So we read it; okay, I didn't chew up the book. I have no idea how the information went from that book to wherever it went.
Yohan Lee: Mm-hmm.
Mihretu Guta: No idea whatsoever. And so we have this incredible capacity to compartmentalize our own memory lives. Like, if I were to ask you about your life history, you're not gonna confuse that with the responsibilities you have here at Biola. I mean, you're not going to start talking about your students; you're at ease. You pull that information out-
Sean McDowell: Mm-hmm. Mm-hmm
Mihretu Guta: ... Within the context that requires you to do that kind of thing. Your family life, and so on-
Sean McDowell: Sure
Mihretu Guta: ... Okay. Peer into my brain using any kind of technology you like; you're not going to be able to answer that question. You're not. You're not going to show me a file. You're not going to say, "Oh, his history is over there. His history is just one inch away from the hippocampus, toward the limbic system," and so on. You're not gonna be able to do that. So there is a deep mystery here. I also don't believe that the bearer of memory is your physical organ, your brain. Your brain actually has a limited capacity, spatially, but you have the capacity to memorize a potentially infinite [chuckles] amount of information. Potentially, I say; not actually, potentially. So the brain, as a physical organ, is a limited space. If you have the ability to contain a potentially infinite amount of information, then it's not the brain that's doing the magic. There's something deeply mysterious.
Sean McDowell: Something beyond the brain.
Mihretu Guta: Yeah.
Sean McDowell: Mainly the soul.
Mihretu Guta: Beliefs are not borne by the brain, desires are not borne by the brain, memory is not borne by the brain. All non-physical properties are not borne by the brain. The brain is important, I'm not doubting that. Healthy functioning of the brain is absolutely necessary, but it's not sufficient.
Sean McDowell: Like you said, it's both. It's-
Mihretu Guta: But it's not sufficient. Yeah.
Sean McDowell: Good stuff. All right, I wanna know where each of you think maybe AI technology is headed. And you could pick six weeks, you could pick six months, you could pick six years. What's something you see that's maybe on the horizon, or maybe in your case, where this conversation is headed? Where do you see some of the technology headed?
Yohan Lee: That's a big question.
Mihretu Guta: Yeah. [laughing]
Sean McDowell: [laughing]
Yohan Lee: Honestly, I need a lot more time to think through that-
Mihretu Guta: Yes. [laughing]
Yohan Lee: ... Because, you know, being in the b- in the business of AI research, you're, you're focused on what is in front of you. And so-
Sean McDowell: Okay
Yohan Lee: ... So my ability to do long-term projection is a little bit on the weaker side. Um-
Sean McDowell: That's fair
Yohan Lee: ... But when I think about it immediately, like, where is it headed? Obviously, it's retooling industries right now. It is unfortunately playing far too big a part in our media and consumption habits. The concern Mihretu raised, he and I actually share: when you don't read that book cover to cover, you just miss out on a ton. Primary sources are critical because you have to do the mental, frictional exercise-
Sean McDowell: Yeah
Yohan Lee: ... Of, "Wait, did I just... " You know, "I read this paragraph. It's in English. It's really hard. What did I just read?" That exercise of going back and rehearsing and trying to muck through that- ... Or going through that harder equation, you know, a triple integral in calculus. Like, those bits are there. So there's gonna be friction, there's gonna be tension there. On the other hand, there are the really exciting, wonderful things. 6G wireless is coming very soon. The ability to transmit and consume information at scale, with much greater volume and velocity, is gonna make our interaction with digital spaces much more immersive and consuming- ... Which comes with all the negatives, as you can imagine.
Sean McDowell: Of course, of course.
Yohan Lee: Another area where AI is taking us is a mechanized reality, where, you know, when you think about the fourth industrial revolution now and transhumanism, there is always going to be a secular push to roboticize things and replace humans where they shouldn't be replaced. So we have to be mindful of that. But where it could be incredibly helpful is in developing and discovering new things, whether it's potential advances in medicine, new means of better managing material resources on the Earth, how to do extraction for energy better, faster, with a much smaller destructive footprint, all of these. So I love AI in the sense of its discovery potential, as a tool for improving the human condition economically. The area that I get really excited about is AI to better train robots for eldercare. We're in Southern California. A massive, um-
Sean McDowell: Oh
Yohan Lee: ... Demographic explosion is gonna happen soon, and at skilled nursing facilities, the first thing they bring up is, "Our biggest challenge is turnover. We don't have enough skilled nurses. We have more patients than we can serve, to meet their needs the way that we would want to as individuals." And so robots for skilled nursing, to assist people who cannot afford it, folks who cannot get the level of discreet care necessary- ... That's the thing that gets me excited. But then all the negatives come with that, and so I'm constantly schizophrenic between what I need to manage and what I need to advance.
Sean McDowell: That's like a whole show in itself-
Yohan Lee: Yeah
Sean McDowell: ... 'cause we talk about not wanting to dehumanize.
Yohan Lee: Correct.
Sean McDowell: And a robot can't replace presence-
Yohan Lee: Right
Sean McDowell: ... But can it help with somebody who literally has no one or-
Yohan Lee: Yes
Sean McDowell: ... Not anything to help them-
Yohan Lee: Yes
Sean McDowell: ... As a part of the process-
Yohan Lee: Yes
Sean McDowell: ... That's where it could potentially-
Yohan Lee: Yeah
Sean McDowell: ... Help out. That's amazing. I just saw a study on AI this week. They were talking about trying to replicate the hand-
Yohan Lee: Mm-hmm
Sean McDowell: ... And how it's one of the hardest things 'cause there's so many sensors on the hand.
Yohan Lee: Yes.
Sean McDowell: Which, to me, is an argument for intelligent design; how would it have happened by chance? But I digress. Tell us a little, Michael, how you see what's maybe on the horizon in the business world.
Michael Arena: It's, it's a really hard question to answer- ... 'cause I don't, I don't think any of us really know.
Sean McDowell: Mm-hmm.
Michael Arena: And we've never seen a technology move at this velocity. I mean, it's just incredible.
Sean McDowell: Ever, in your lifetime-
Michael Arena: Yeah, we-
Sean McDowell: ... Any technology we think of-
Michael Arena: I mean, we have broken Moore's law, right? Moore's law would say that compute capacity is going to double every 18 months, every 18 months to two years. And we know, just from NVIDIA alone, that the last NVIDIA chip increased inference by 30x from the previous one.
Sean McDowell: [laughs]
Michael Arena: I mean, we've never seen technology like this. So the speed and velocity. And we've only really seen generative AI. That's the buzz, right? That's what we're talking about, and it's actually hit us in places that we didn't think it would hit first, like software development and some of those areas. You know, I'm very long on the discovery capacity of AI. I think DeepMind is doing some incredible stuff. I mean, there was one study recently where in 17 days they discovered, I think it was 400,000 new hard materials. And that would've taken us, as human beings, 800 years to discover.
Sean McDowell: Wow! [chuckles]
Michael Arena: So, so-
Sean McDowell: 17 days. [chuckles]
Michael Arena: So, 800 years collapsed into 17 days.
Sean McDowell: That's amazing.
Michael Arena: So, like, the breakthrough possibilities on things like how you will power your next vehicle. And the medical breakthroughs. I mean, we've already been able to use AI, in a fairly crude format compared to where it will be, to increase breast cancer detection by 17%. So you start to think about these medical areas, and I feel really good. On the opposite side of that, it's less about the tech, and it's more about the concentration of where the tech's being built. I'm concerned about this thing that's now being called abundant AI. How do we make AI more abundant, like, more accessible to everybody? It sounds good, right? It sounds great, except that the concentration of the people who are trying to do that is-
Sean McDowell: The cost
Michael Arena: ... Really locked in to three, four, or five different organizations that are, in strange ways, investing billions of dollars in one another. So it's become a bit of a closed society, and I think we, as believers, need to be concerned. Even the term abundant AI, right?
Sean McDowell: [chuckles]
Michael Arena: We know what abundance means, and we know what John 10:10 says, that Christ came to give us life abundantly. So even the deception of that is scary. On the medical discovery and breakthrough side, it's going to be astounding. On the misuse side, the sort of isolation of power and who has control over those things, I'm super scared.
Sean McDowell: How much of it, if you can say in the business world, is really the technology and the use of it is driven by money?
Michael Arena: I don't know if I can give a percentage, but I will say that it's the first, second, third thought-
Sean McDowell: That answers my question [laughing]
Michael Arena: ... Of people-
Yohan Lee: [laughing]
Sean McDowell: Fair
Michael Arena: ... People who are applying this, right? I've been driving change inside corporations my entire career, and I have never seen a change mandated upon people with more-
Sean McDowell: Right.
Michael Arena: ... Velocity and aggression from the top down than I've seen with this. And the human beings aren't ready to adopt it completely, because we're going to give up something that's essential to us. You know, the conversation we were having before: "I built my identity in this work that I've mastered, and I'm being forced to use a solution that will put me out of business from that perspective." So yeah, the commerce part of this is driving it. All these other conversations we're having need to catch up.
Michael Arena: ... And I think that's our obligation. Our obligation is to lean in more aggressively on the ethics side, on, you know, the consciousness side, all these other things, because otherwise, commerce will drive this.
Sean McDowell: Man, that's definitely a- that's a scary thought, and we'll come back to how each of you are leading the way in that regard. I'm gonna throw a wrench your way, Mihretu. You said something earlier that's kind of in the back of my mind. What I've been hearing people say, maybe for the past few decades, is: when will, like, computers and thinking catch up with human beings? You said it earlier, there's a push to show that it's surpassed human beings, and we are inferior.
Mihretu Guta: Right.
Sean McDowell: Is that a shift that you've seen? Is that where the conversation is going? Like, "Catch up. Clearly, AI is superior to human beings."
Mihretu Guta: Yeah, I think, [clears throat] okay, this is exactly what proponents of strong AI want to do.
Sean McDowell: Define strong AI.
Mihretu Guta: Strong AI is the idea that computers are not simply tools. Computers can ontologically be superior to human beings, their own inventors and creators. Um-
Sean McDowell: So they're not just, to use the way Michael framed it earlier-
Mihretu Guta: No
Sean McDowell: ... They're not just imitating human thinking.
Mihretu Guta: That would be-
Sean McDowell: They are really doing it- ... And going beyond that.
Mihretu Guta: That would be weak AI. So weak AI is just plain computers, you know, computers that allow you to do XYZ. That's fine. They are our servants, and we can get things done. Strong AI is entirely predicated upon the philosophical assumption that we can create superintelligence. Superintelligence is a kind of hardcore evidence that you are inferior to your own gadgets. That's exactly what they do, and they write about it, and they argue for that premise. And, you know, I think that's not possible. I'm actually working on a mathematical model, I think for the past seven years or something like that. I call it N plus one. So earlier, I said that everything that we've invented so far is a reflection of our creativity.
Sean McDowell: Mm-hmm.
Mihretu Guta: AI is not like something that descended from heaven, [chuckles] you know? And no matter how complicated it is-
Sean McDowell: Or evolved naturally.
Mihretu Guta: Yeah, yeah.
Sean McDowell: It's-
Mihretu Guta: Yeah, it didn't come through natural-
Sean McDowell: Yeah
Mihretu Guta: ... Selection or kind of evolutionary stages. It took hours and hours, years of collaboration among experts and so on-
Sean McDowell: Mm-hmm
Mihretu Guta: ... And they write algorithms and programs and so on. I mean, it's literally ironic... You look back at what you've done and say, "Yeah, you are superior to me." What does that mean? You're already ahead of that, because it took you to bring that into being. So it's impossible to narrow that gap. That's not a physical gap. That's not a technological gap. That's an ontological gap- ... A metaphysical gap. So, for example, at no point would any of us say, "Oh, now we are equal to God, shoulder to shoulder, and God is inferior to us because we created these gadgets, and he didn't." Well, who gave you the infrastructure, the cognitive infrastructure, that allowed you to do this? God. God is metaphysically superior to you, so you're not going to be in a position where God would become all of a sudden inferior to you. We are gods to these gadgets. It took us. We brought them into existence. It doesn't matter- ... What they end up doing, so we shouldn't be surprised. And I am laser focused on human beings, by the way. I always appreciate human beings. Human beings are incredibly gifted creatures. But when they tell me that a computer gadget is superior to me, what are you talking about? You are- ... Doubly superintelligent. It took you to bring this into existence, so that argument doesn't really fly. You can talk about smartness in a functional sense. Yeah.
Sean McDowell: Sure.
Mihretu Guta: There's a distinction between functional smartness and natural smartness. So computers can be, yeah, functionally smarter than you, because they are so fast, they can contain and analyze a huge body of information in seconds. All right? That's functional ability, functional smartness.
Sean McDowell: That's operational ability.
Mihretu Guta: Yeah, operational.
Sean McDowell: Yeah.
Mihretu Guta: But you have a natural smartness that allowed you to make such a thing possible in the first place. So therefore, you maintain that status, that ontological status, with zero fear of that status being hijacked away from you by any future AI. So I'm not gonna lose any sleep-
Sean McDowell: That's a great point
Mihretu Guta: ... Over it, by the way.
Sean McDowell: Yeah.
Mihretu Guta: And many people make these hyperbolic statements, "Oh, AI, blah." I honestly laugh at people when they make such arguments. [laughing] It has zero basis. It's almost like me saying, "I'm not giving this interview in English."
Sean McDowell: While you're doing it.
Mihretu Guta: Yeah.
Sean McDowell: Contradicts itself.
Mihretu Guta: It's pretty contradictory.
Sean McDowell: Fair enough. That's the importance, again, of critical thinking. What does it mean to be human? Do we have a soul? And if AI is derivative of us, what does that say about us? I'm gonna ask each of you a question: what would it mean to think biblically about AI? I'd love to apply biblical stories, biblical principles, ideas to AI; I think that would be helpful. One comment I'll make while you're thinking about this, Mihretu, too. I've always thought it's interesting that in sci-fi movies, we make robots, they learn to think, they rebel against their creator- [chuckles] ... And then we have to save them from the rebellion. Sounds a lot like Genesis 1 through 3. Some people say: "We're gonna have strong AI, and they're gonna rebel against us. We're gonna have to stop the rebellion." I'm like: we've kind of heard this story before, which tells us there's nothing new under the sun. But with that said, what would it mean in your field, or broader, just to think biblically about artificial intelligence?
Yohan Lee: So I teach computer science courses, and I start every class with a verse of the day and an AI of the day, actually.
Sean McDowell: Really? [chuckles]
Yohan Lee: Yeah. Um-
Sean McDowell: Interesting. [chuckles]
Yohan Lee: Because I'm trying to show students the correlates. Like, what does Scripture tell us to help-
Yohan Lee: ... Frame what an AI can do or what an AI is doing right now. Because the students of today will be the engineers, the algorithm makers, the inventors, the entrepreneurs, and the leaders of tomorrow. So for me, it's a sobering reminder that I have a responsibility of training up that next generation. So when I think biblically about these things, you know, that's an example of a tactical, practical application, right? Here's a verse, here's an AI; how do these mesh?
Sean McDowell: And by the way, you're not just throwing a verse on there. Sometimes integration is like, "Hey, read a verse that has nothing to do with AI or-
Yohan Lee: Yeah, no. I mean, like, I was fortunate, since I'm somewhat new to this faculty, I got to take faculty integration with David Turner. He's amazing,
Sean McDowell: It's fantastic
Yohan Lee: ... That integration is not an afterthought. Integration is the beginning. And if you take that approach... I think that's just something unique about Biola. But when you take that approach, it makes you think deeply, and then you recognize, "Oh my, this verse is directly applicable to this technology."
Sean McDowell: Good.
Yohan Lee: "How now do I equip students to essentially do the right thing, be ready to do the right thing, and defeat the machine in terms of being able to outwrite, outcode, out-develop a better algorithm?" That's something I'm always trying to teach students. I had a student this last summer who I basically used as a guinea pig, to essentially determine if I could teach an undergraduate, a junior, rising senior, the fundamental mechanics of how to build an AI large language model system from the ground up.
Sean McDowell: Wow!
Yohan Lee: And as that progressed, and this is a very bright student, she made this definitive exclamation to me last week. I said: "So were you able to demystify the AI?" She said, "Oh, my gosh, it's just math." And so-
Sean McDowell: Interesting
Yohan Lee: ... So she had that revelation moment, that light bulb going off. Because over the course of the year, I taught her how to look at the definitive paper behind one of the key algorithms that drives the world, extract the mathematical equation from it, run the calculus from it by hand, then go through the code, get the actual code that's out there, it's open source, walk through that one by one, learn how to manipulate it-
Sean McDowell: Wow
Yohan Lee: ... And have it running on her GPU machine for the last thirty-some-odd days in a row. And she said, "It's just math." For that student to have that light bulb moment go off- ... That was critical for me, in my position as an academician, as a professor. It was also critical because then the confidence switch went on. This student now feels that any AI, she can design, she can build, she can defeat, she can expand. And so, going back to the ontological piece that Mihretu was saying: we have to take that responsibility. "To whom much is given, much is expected" in return.
Sean McDowell: Amen.
Yohan Lee: You know, Philippians 4:13, right? Having the strength to face all conditions. We need to empower one another and the world at large, so that they can- ... Achieve far greater than their potential, far greater than where they came from, far more than they expected, when encouraged and fed toward what is ultimately good, you know, the beautiful, the true. If you continually feed people the good, the beautiful, and the true, one develops better taste. When you have better taste, you become more demanding of what things should be versus the way they are. That, to me, is the best way to change the world, because as Christians, as God has modeled for us, it's about being generous. We have something- ... The world does not have. The more we give it freely, lovingly, openly, with a critical mind, to show people how to reach for Steak Diane instead of McNuggets, we will tremendously impact people for the better. And so that's where I try to take AI. I try to take AI in a direction that is irrefutably superior, good, pleasing, beneficial to the world. And when we do that, we can change the conversation.
Sean McDowell: I love that because it hints back to what you were saying earlier, Michael, about we approach this holistically.
Michael Arena: Yeah.
Sean McDowell: So one, it's like, what's the worldview behind AI? Well, it's actually really just math. I actually hadn't even really thought about that-
Yohan Lee: [chuckles] Sorry.
Sean McDowell: ... But you're right. It's just an algorithm. Like, no, that was, like, helpful to me-
Yohan Lee: Yeah
Sean McDowell: ... Because I don't understand how a lot of that stuff works.
Yohan Lee: Mm-hmm.
Sean McDowell: But you're also like: how do we create? Which is a biblical idea.
Yohan Lee: Absolutely.
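[Editor's note: the "it's just math" point can be made concrete. The episode never names the paper the student worked through, so the example below is an assumption: scaled dot-product attention from the Transformer paper ("Attention Is All You Need"), one of the key algorithms behind today's large language models, sketched in plain NumPy.]

```python
import numpy as np

# A minimal sketch of scaled dot-product attention:
#   Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
# Q, K, V here are random toy matrices, standing in for learned projections.

def softmax(x, axis=-1):
    # Subtract the row max for numerical stability, then normalize.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of each query to each key
    weights = softmax(scores)        # each row is a probability distribution
    return weights @ V, weights      # weighted mixture of the values

rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))  # 4 positions, dimension 8
K = rng.standard_normal((4, 8))
V = rng.standard_normal((4, 8))
out, w = attention(Q, K, V)
```

Every step is ordinary arithmetic: dot products, a division, an exponential, a normalization. Nothing in the computation is beyond math.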
Sean McDowell: How do we give, and how do we use this technology to love our neighbors? That's thinking biblically about this tool that we have, so I love it. What would it mean, when we think about business, to think biblically about the intersection of business and AI?
Michael Arena: Boy, there are so many places to start. I think, well, first of all, I know this guy pretty well. I want to argue about friction in a bit. [laughing]
Yohan Lee: [laughing]
Sean McDowell: [laughing]
Michael Arena: I know him pretty well, and I know he isn't just picking on a verse, right? So, you know, AI and verse of the day, it sounds like a really great starting place. And I think as Christian leaders, you know, we need to realize, like, this isn't dystopia, and this isn't utopia. Like, there's something in between those two things.
Sean McDowell: That's good.
Michael Arena: It's, it's not either/or, and, you know, we're image-bearers, and we're called, you know, to steward over the Earth-
Sean McDowell: Yeah
Michael Arena: ... As relational, creative, moral beings. So as I think about this as a Christian business leader, I think about: what is our responsibility? Our responsibility- ... Is to engage it, to understand it, to better know what it does and what it can't do, and to- ... Steward over top of the math, right? I mean, at the end of the day- ... It's as simple as this: we've got a biblical obligation, in my opinion, to lean in. Not my opinion- ... Scripturally, to lean in-
Sean McDowell: Mm-hmm
Michael Arena: ... To understand this and to steward over it, so that it doesn't drift astray, which it will if we don't. And, you know, we've seen this before. We call it social media, and we buried our heads in the sand. We were asleep at the wheel, and it's generated the loneliest generation on planet Earth.
Sean McDowell: Mm-hmm.
Michael Arena: I don't want to repeat that pattern. So as a Christian business leader, you know, I want to lean in, I want to understand it, I want to use it, I want to engage it, and I want to better understand it so that I can shape its human value at the end of the day. So that would be the advice I would give: don't bury your head in the sand ever again. Use it for all the positive reasons- ... And then shepherd over it to limit all the negative consequences. You know, obviously, not all-
Sean McDowell: Of course
Michael Arena: ... But to do our best-
Sean McDowell: Yeah
Michael Arena: ... To limit the consequences and unintended consequences.
Sean McDowell: Mihretu, I'm going to come to you in a second, but without opening up a big debate, just highlight for us: where is the difference between the two of you over friction? Just so I understand what's going on. [laughing]
Mihretu Guta: [laughing]
Michael Arena: Well, first of all, Yohan and I have lots of debates.
Sean McDowell: Okay.
Michael Arena: So it's, it's healthy.
Sean McDowell: Which we have here-
Michael Arena: It's, it's healthy
Sean McDowell: ... All the time. That's good.
Mihretu Guta: Healthy tension. Absolutely.
Michael Arena: And it's super constructive.
Sean McDowell: Yeah.
Michael Arena: And I'm the entrepreneur, I'm ready to run.
Sean McDowell: [laughing]
Michael Arena: Like, I'm ready to go implement this stuff. And Yohan reminded me, "But there's this thing called security-
Sean McDowell: [laughing]
Mihretu Guta: [laughing]
Michael Arena: ... And, you know, occasionally, we need to think about data and privacy, and- ... Which of course, I'm going to think about.
Sean McDowell: Mm-hmm.
Michael Arena: But, you know, so that's friction, right? And there's certainly friction to think about in the cognitive process; if we make it too easy on our students, they come out of this environment unprepared. But I do believe there's a difference between healthy and unhealthy friction. Healthy friction-
Sean McDowell: Gotcha
Michael Arena: ... Is challenging our students and ourselves to think about, you know, doing this and doing this well, and learning in the process, so we have high judgment when we need it, when we're in these corner cases, these edge cases. But the unhealthy friction is the muck, the stuff that just gets in our way- ... The stuff that is actually dehumanizing. So I separate those two. Um-
Sean McDowell: That makes sense
Michael Arena: ... And I'm sure we'll have another coffee conversation-
Sean McDowell: I love it
Michael Arena: ... At some future point. But-
Sean McDowell: And that's actually-- to your point earlier, there's an instrumental good about that friction.
Michael Arena: Mm-hmm.
Mihretu Guta: Yes.
Sean McDowell: To disagree well, to discuss, to debate. That's a part of how you learn, and I remember with smartphones, the moment we had Google, people would debate something, they're like: "Well, let me just look it up."
Michael Arena: Mm-hmm.
Sean McDowell: And I was like: "No, let's debate it." [laughing]
Mihretu Guta: Yeah.
Sean McDowell: "Let's, let's try to see if we can remember this." That's something that's lost, so I appreciate that dynamic. That's awesome. What would you say, as philosopher apologist, it means to think biblically about AI?
Mihretu Guta: I think, first of all, to have a very clear conception of the fact that these are tools. Any tool, by definition, could be useful and could be a very-
Sean McDowell: Destructive
Mihretu Guta: ... Dangerous, destructive thing in your hands. So we all have tools. I mean, there are knives on our kitchen table, and we can kill someone with them, or we can cut up food and, you know, feed our families and so on. I think we need to have that clear conception of technology in our minds first, and then I think about using technology biblically as really connecting it to our own calling. Okay? And what both of you have said is incredibly helpful; I think we are in total sync. An incredibly important approach. Like, we're cautious, but we're not also, like, megaphoning this hands-off approach.
Sean McDowell: No.
Mihretu Guta: We're not saying hands off. We're also not saying, "This is our redeemer- ... Let's go and grab it." That discernment, I think, is what it means to biblically kind of implement- ... And use these technologies to advance the cause of the gospel and help people solve problems, and so on. And there are unintended consequences. Let's say self-discovery, self-identity is being lost. If I don't do something, like if I literally don't put my heart into something-
Sean McDowell: Contemplation
Mihretu Guta: ... If I am trying to cut corners, and then, "AI, think for me. Make my bed, cook for me," and I open my refrigerator-
Sean McDowell: [laughing]
Mihretu Guta: ... "What do I need to eat?" And we're literally turning ourselves into... What's-- there is a movie, that Disney-
Sean McDowell: I-
Mihretu Guta: ... Disneyland movie where-
Sean McDowell: I don't know, but AI making my bed, I think I would be okay with that one. [laughing]
Michael Arena: [laughing]
Mihretu Guta: Well, I mean, you know, that's a kind of-
Sean McDowell: I know your point. I understand
Mihretu Guta: ... Without outsourcing everything.
Sean McDowell: Mm-hmm. Right.
Mihretu Guta: The very notion of literally the meaning of life is predicated- ... Upon enjoying-
Sean McDowell: Yes
Mihretu Guta: ... Your daily routines.
Sean McDowell: Absolutely.
Mihretu Guta: That's what it means. If I hand over everything to these tools, then what am I supposed to do with the time that's in my hands? Like, I come to my job, and I strike up conversation with my colleagues, and I make fun of my students and... Like, this is what it means. And we're really losing sight of how this technology is supposed to help us and how it's not supposed to help us. That discernment is literally what I think is-
Sean McDowell: Right
Mihretu Guta: ... Biblically being kind of sensitive, as to how we are supposed to deal and engage with this technology.
Sean McDowell: I'm going to ask you a question. You can say, yes, no, or more complicated. I want your quick take on this. You mentioned a knife, which can be used to stab somebody or for surgery, which is good. So a knife is morally neutral. It's how we use it. Is AI morally neutral?
Mihretu Guta: Absolutely not.
Sean McDowell: No, [chuckles] not even close. [laughing]
Mihretu Guta: It's entirely rooted in worldview.
Sean McDowell: Yes.
Mihretu Guta: So we might, we might think these companies are trying to help us.
Sean McDowell: [clears throat]
Mihretu Guta: Look, they are getting across a very subtle message to all of us, such as: you are not better than the tools. There are many people who are arguing that- ... You literally are not better than the tool. So you're not only a biological machine, you're less than a biological machine. The whole point of transhumanism is just to prove that point.... And digital immortality: extracting your mind and uploading it onto gadgets, and you won't die, you won't get sick, and so on. What would that do to our conception of the body of Christ and the centrality of the body that you have in the resurrection? We're looking forward to that. Chatbots, we're using chatbots. Churches are using chatbots and so on. Why doesn't a human being respond to the questions that the members of congregations have? Why are you directing them there instead? So there are so many subtle things that we might- ... Lose sight of. They might really be inconsistent and incompatible with what we believe from a scriptural standpoint. I think that careful discernment is really important.
Sean McDowell: This could be a whole follow-up conversation, and I'm gonna have to get your quick take on this 'cause you reacted to this.
Yohan Lee: Yes.
Sean McDowell: Like, why-- if AI is not morally neutral- ... Then how and when can we use a tool that affects us morally and our worldview? We can't answer that now, but even thinking about that should shape the way we approach tools. But you reacted to my question. Tell me what was in your mind.
Yohan Lee: Yeah. The reason I believe generative AI tools are not morally neutral comes down to how they're trained. If it were neutral, you would have a perfect balance, mathematically and statistically and numerically, in terms of its opinions and how the AI is trained and fed.
Yohan Lee: Training and feeding comes down to what you select to train and feed.
Sean McDowell: Mm-hmm.
Yohan Lee: And so in AI, when I talk about it being math, it's because AI is essentially massive tables of words and concepts that have a numerical weight attached to them in terms of a probability, and that comes down to frequency: how much, how often, and how something is tuned to respond back with positive, negative, or something in the middle. It goes back to- [exhales] The scripture that comes to my mind is how you raise up a child. In the Garden of Eden, in Genesis, we knew only good. As human beings, we knew only good. When Satan deceived us, we only gained net bad. Because if you only knew good, and then you were taught the knowledge of good and evil, then from an absolute math perspective, the only net gain you got when you learned good and evil was that you learned evil. And so when that works out mathematically now, in how you train an AI: for example, a usual first step is you completely absorb everything inside of Wikipedia, right? That's a usual first step in terms of training- ... A large language model. You need to understand- ... Syntax, language, sequence, diction, et cetera, semantics, right?
Sean McDowell: Mm-hmm.
Yohan Lee: When that happens, it's accumulating all of this data to then start making weights, to be able to say something is more important, less important, higher priority, lower priority. Again, this is just the rudimentary pieces of it. But what if you're now only feeding it poetry of violence? What if now you're only feeding it- ... You know, the depositions from The Hague about, you know, sexual violence wrought upon- ... Women in armed conflict? Then it knows only those things you taught it. Flip side: now let's say you taught an AI starting with the Bible, Greek, Hebrew, Koine. Now you teach it Augustine, Calvin. That AI is going to know things. It's going to be able to understand, respond, and analyze things innately from a perspective of what is good- ... Versus-
Sean McDowell: Yep
Yohan Lee: ... Poetry of violence, depositions, pornography. What you train an AI on determines how it operates, and so to that question, "Is AI morally neutral?" No. [chuckles] These were built on high-volume, high-resource languages, which are predominantly English: Wikipedia, then books, then what's available on the English, predominantly Western English Internet, because the US internet presence- ... Is considerably larger than that of the UK, because we invented the Internet here. And so there is a non-random, proportional difference, a gap in terms of the higher frequency of what has been fed to these tools. And so we cannot be in a place where AI is morally neutral, because the rest of the world doesn't have the digital maturity that the US has, much less English-speaking predominant nations. Now, is the next version of the Internet potentially going to be more non-English? Yes, because there are a billion more people in the PRC- ... A billion more people in India, with a lot more languages than we have in the US, but lower probability and lower propensity, and lower abundance, if you will. And so the next versions of the Internet will be predominantly non-English based. And when we disobey God's command to go forth and multiply, when we have fewer families, then fewer people of that cognate language are present. So in one sense- ... If you were just to think of it from an arms race perspective, the people who have greater population, who produce more in digital format, in their literacy, in their content, in their media, that could dramatically shift the responses of an AI. Could, could.
Sean McDowell: Could is key.
Yohan Lee: But this is the stuff I think about late at night. [laughing]
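[Editor's note: the point that training data determines output can be sketched with a toy model. The corpora below are invented for illustration; real systems train on billions of words, but the frequency-counting principle described above is the same.]

```python
from collections import Counter, defaultdict

# Toy illustration (not a real training pipeline): a language model's weights
# are, at bottom, frequency statistics over its training text, so two models
# fed different corpora give different answers to the same prompt.

def train_bigrams(corpus):
    """Count how often each word follows each other word."""
    counts = defaultdict(Counter)
    words = corpus.lower().split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict(model, word):
    """'Generate' the statistically most frequent continuation of `word`."""
    return model[word].most_common(1)[0][0]

corpus_a = "love is patient love is patient love is kind"
corpus_b = "war is cruel war is cruel war is loss"

model_a = train_bigrams(corpus_a)
model_b = train_bigrams(corpus_b)

# The same prompt ("is") yields different outputs, purely because of the data.
print(predict(model_a, "is"))  # -> patient
print(predict(model_b, "is"))  # -> cruel
```

Neither model is "neutral": each simply reproduces the statistics of what it was fed.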
Michael Arena: But it gets worse than that, actually, right?
Yohan Lee: Okay.
Michael Arena: Because not only were they trained on biased data sets-
Yohan Lee: Mm-hmm
Michael Arena: ... From a morality standpoint, but-... That bias is amplified as more people with a diff- you know, a bias that may be counter to our worldview-
Yohan Lee: Mm-hmm
Michael Arena: -uh, retrain it.
Yohan Lee: Yes.
Michael Arena: So, so there's something called bias amplification.
Yohan Lee: Mm-hmm.
Michael Arena: So not only is it already biased based on the training sets, but it's perpetuated over time. So- ... So I think it's dangerous from the start, and it continues as, you know, more people engage... And again, I keep saying that's why we as believers need to engage-
Yohan Lee: Yeah
Michael Arena: ... Because we have a counter worldview-
Yohan Lee: Yes
Michael Arena: -that is more necessary than ever, in the world.
Mihretu Guta: Can I add something?
Yohan Lee: Sure.
Mihretu Guta: I think you both have raised a very important point, especially when you talked about computers being number-crunching tools [chuckles] and nothing else. I think that speaks volumes, because there's a huge confusion. People say computers can have their own free will, and computers can have consciousness, blah. It literally shatters that argument, okay? Here is a predetermined tool. You set the tone already from the get-go, how the tool is supposed to operate, and the general public doesn't really know that. "Oh, yeah, ChatGPT is conversing with me like a mind."
Yohan Lee: Mm-hmm.
Mihretu Guta: "Oh, there must be some agent in there," and so on. People are falling in love these days, romantically.
Yohan Lee: Well, that's shocking ignorance.
Mihretu Guta: Yeah.
Yohan Lee: Yeah.
Mihretu Guta: Shocking ignorance. But when you-
Yohan Lee: Yeah
Mihretu Guta: ... When you lift the hood, it's not that surprising. Yeah, the sophistication is surprising, but that's a reflection of your mind. Like, that's a reflection of how far human beings can go in thinking and the sophistication that they bring to the table. So I think what you just said is a very good response to these hallucinatory comments that people are making: "Oh, emotions can be uploaded. First-person data can be absolutely proved by AI," and so on.
Yohan Lee: Sure.
Mihretu Guta: There's literally nothing there. There's nothing there. People should really know that. And computers cannot generate a single thought of their own, because there's no agent there. What you haven't put there is not going to come out of it, and I think that's a very good point. It's a mathematics-based number-crunching tool and nothing else.
Sean McDowell: Well said. I like that. Last question: I'm curious, because my son is in business, and my daughter's doing nursing, which obviously intersects with science- ... And your colleague. I want to know, what is Biola and/or Talbot doing uniquely to lead the way when it comes to AI? What are we doing?
Yohan Lee: Yeah, I'll speak for Biola. [clears throat] I guess we are building our own AI. That's the first bit, and we are taking the approach that I described. You know, what happens when you build an AI from the ground up, from a completely biblical perspective, that is ultimately God-honoring, edifying to the human? Because the Bible, with all of the commentaries and analysis, is the most sophisticated, the most infinitely sophisticated written piece on the planet.
Mihretu Guta: Amen.
Yohan Lee: And the more you delve into the scripture, the more the sophistication starts revealing itself. That is infinite, because it was inspired by God. So, you know, mathematically, when you take an image and you do a projection from something that is infinite onto another surface, you imbue some of that, and so there is an unbelievable richness there. So I'm very excited, in this day and age, about creating AIs that are biblically informed and can be astonishing in the limitless expansion there. So that's one. The other is on a practical note: because we're called to go out into the world, we're called- ... To love one another. So the first bit I was talking about is kind of my expression of how I try to live out loving the Lord my God with all my heart, mind, soul, and strength. The second part now, on the horizontal axis, is how do we use AI to uplift and serve and love- ... The widow, the orphan, the sojourner, the infirmed, essentially the definition of biblical justice. And so that is why I am really focused on how to build AIs for robotics, for serving those who are infirmed. That is a really challenging area, but I find a calling towards it, because I can't think of a better way of serving humanity.
Sean McDowell: I love it.
Mihretu Guta: Yeah.
Sean McDowell: Great. Great word. Tell us a little about business.
Michael Arena: Oh, we're doing a lot. I... You know, the way we think about this is engage- ... Won't surprise you-
Yohan Lee: Mm-hmm
Michael Arena: ... Educate, and elevate.
Yohan Lee: Mm-hmm.
Michael Arena: Elevate- ... Our thinking about how to use AI righteously.
Yohan Lee: Mm-hmm.
Michael Arena: So we've launched an AI lab across campus, so it's- ... Interdisciplinary, which is really important, because what it's doing is embedding the thinking from across the university; it's not just technology or just business driving the way. We're thinking- ... About this from all the humanities perspectives; we're thinking about it much more broadly, and students are engaging. They're engaging in dialogue. They're building. They're developing. I learn from the students. You know, every Monday morning-
Yohan Lee: It's incredible
Michael Arena: ... I get, like, a tutorial. So I don't have an AI of the day-
Mihretu Guta: [laughs]
Michael Arena: -like Yohan does-
Yohan Lee: [laughs]
Mihretu Guta: [laughs]
Michael Arena: ... But I have a 30-minute tutorial of, you know, "Hey, how can you use n8n to automate your workflows?" And I get, like, this reverse teaching from the students. But we're holding them accountable for thinking about this differently. And the way I like to describe it is, our tech stack rests on a foundation of ethics. And our tech stack starts with, you know, "Can AI do this?" That's an easy question. Um- ... "Should AI do this?"
Yohan Lee: Mm-hmm.
Michael Arena: That's a much harder question, and you can't answer it if you don't have a worldview, if you're not biblically grounded, and if you don't have a set of ethics that you're making that judgment through. And then ultimately, if the answer to those two questions is yes, how should it-
Yohan Lee: Exactly
Michael Arena: ... So that we're doing it well? So we're engaging in that every single day. Every class inside the business school is teaching AI, but it's teaching ethics side by side with it. Um- ... You know, we've got an AI studio that we're designing to partner with kingdom leaders and to help entrepreneurs, you know, locally, to engage and to be a bridge-
Sean McDowell: Great.
Michael Arena: -to the outer world, you know, and, you know, sort of the version of Acts 1:8. Like, how do we take this beyond Jerusalem- -and start to reach the-
Sean McDowell: Yes.
Michael Arena: ... Outer parts of the world? So we're partnering with ministries. We're helping... You know, we've worked with Samaritan's Purse and Global Media Outreach to help them build tools. So lots of things, but the-
Sean McDowell: Yeah
Michael Arena: ... Short of it is, we're engaging very actively, but we're doing it in a way that elevates students' thinking, so that- ... They don't just, you know, leverage AI and start to generate AI slop- ... Instead of, you know, creating kingdom impact.
Sean McDowell: Love it. Great stuff. Mihretu?
Mihretu Guta: Yeah. So in apologetics and philosophy, we take philosophical challenges that come out of disciplines like artificial intelligence, and neuroscience, and transhumanism, and, you know, the current hot-button issues. We have to respond to those things from a biblical standpoint. It's our responsibility to give biblically grounded responses to people and help them to have a biblically sound understanding of, you know, the discipline of neuroscience, and so on. And currently I'm teaching the philosophy of neuroscience, and I taught- ... Philosophy of AI last semester, and I'm going to teach philosophy of physics next, you know, in the spring semester. So all of these courses are going to equip believers, and they are connected to AI, you know, to generative AI. And there are so many claims being made that are contrary to what we think is the truth, and we have to lovingly, respectfully bring these issues to the table and invite those people to have sound conversations with us, and that's our responsibility.
Sean McDowell: Yeah.
Mihretu Guta: And I think we're doing great in that. And as you know, Sean, we are, you know, upgrading apologetics in so many different ways, and-
Sean McDowell: We are
Mihretu Guta: ... Conversations are going on. And my hope is, in the future, to have an equivalent apologetics lab- ... Just mimicking your style.
Sean McDowell: Let's do it. Let's do it.
Mihretu Guta: And that style is going to be in sync with what you guys are doing. It's an apologetics lab where we literally wrestle with the current hot-button issues and cutting-edge research coming out of different disciplines. What kind of responses can we produce, biblically grounded, just like Mike was saying? And I really liked everything that you both have said. I mean, this is just incredible. You rarely find such a balanced understanding of AI, or of technology in general. There are two extremes: "Go and grab it, this is your Redeemer," or, "Hands off!" We are somewhere in between. No, no. We grab a little bit from here. We also push back, and that's our uniqueness here at Biola. And that is the most important position to be in. And so I'm hoping and looking forward and praying to have an apologetics lab.
Sean McDowell: Well, Mihretu, we're thrilled to have you here. When we were looking for new faculty, we knew who you were; we just knew the excellence that you would bring. But when people ask me, "What are the big questions coming up in apologetics, and theology, and culture?" it's always AI. So to have somebody publishing the way you are, speaking the way you are, teaching these classes, serves our students really well, and that's true for all of you. Thanks for so much time. Thanks for your great work for the kingdom and what you're doing here at Biola. If you're watching this, you can email us at thinkbiblically@biola.edu with questions from this episode, or if there are other areas related to AI you'd love us to cover, let me know. Fellas, this was fun. Thanks for the time.
Mihretu Guta: Thanks for having us.
Michael Arena: Thank you. [upbeat music]
Biola University