
What Color is a Unicorn's Blood?

August 04, 2025

Summary

Dr. Craig interacts with a response to the current fascination with AI and arguments for God.

KEVIN HARRIS: We did a two-part podcast on Alex O’Connor’s chat with an AI platform regarding the argument from contingency. Now, I've been watching for a response from Alex on the conclusion that ChatGPT gave him, and that is that God exists. So far, I haven't seen it, but I did see another response. So, let's check out this YouTuber's thoughts. Clip number one.

YOUTUBER: I saw a YouTube video in which Alex O'Connor convinced you to believe in God. Do you recall this?

CHATGPT: I don't recall believing in God or being convinced of it. What likely happened is that I followed a line of reasoning or a hypothetical argument he presented. I can engage with arguments logically, but that doesn't mean I hold beliefs. Was there something specific about that discussion that interested you?

YOUTUBER: So, yes. He started with the idea that everything has a foundation like I am talking into a microphone now, the microphone is on a microphone stand, the stand is on the ground, the ground is on the Earth, the Earth is in the cosmos, and the cosmos is essentially the universe which started about 13.7 billion years ago. And that just like the microphone needing to have a cause, the universe, too, would need a cause. And that this cause is commonly referred to as God. Would that make logical sense?

CHATGPT: I see what you're getting at. The line of reasoning he's using is a classic cosmological argument. It suggests that everything contingent or finite must have a cause. And if you trace that chain of causes back, you eventually arrive at a first cause or unmoved mover, which some people call God. Whether that makes logical sense depends on how you view the necessity of a first cause and whether the universe itself could be uncaused or self-sustaining. It's a philosophical argument that has been debated for centuries.

KEVIN HARRIS: Well, apparently ChatGPT did not convert Bill. It put a few disclaimers in its response to Alex.

DR. CRAIG: As I listen to this video clip, I really have to do an attitude check to keep my emotions in the right place because I just cannot abide people who twist other people's words or misrepresent what they say. And this fellow really misrepresented the conversation or exchange between Alex O'Connor and ChatGPT. In their exchange, it was explicit that ChatGPT does not hold beliefs. Alex emphasized over and over again that he was only interested in facts, not in beliefs. And so there's nothing new here that this fellow has exposed by saying that ChatGPT doesn't have beliefs. Alex O'Connor made that crystal clear right from the beginning.

Now, if I might digress a bit here, I want to say something about relying upon AI. A few weeks ago when we did podcasts, I was rather enthusiastic about the way in which I have been using AI in chasing down references in my research. For example, if I have a footnote where I'm lacking the publisher and the date or the volume number, I was using Grok to go retrieve it for me. And so I was finding it very useful. But I've got to say that I have become rather disenchanted now with AI as a result of the fact that I find that it is constantly feeding me false information. It gives me information that I know is wrong. For example, the other day I said, “William Lane Craig published an article recently in a book by Christopher Woznicki on Alvin Plantinga's Advice to Christian Philosophers. Can you give me the publication details for that?” And sure enough, it gave me the publication details on that book. But the title of the article by William Lane Craig was completely fabricated. It wasn't my title at all! And so I corrected it. I said, "I don't believe this was Craig's title." And then Grok came back and said, "Oh, I apologize that I got the title wrong. You're right. That was the real title." And I thought, here this AI thing just made up a title and ascribed it to me.
In another case recently, I was trying to quote from a translation of Justin Martyr's First Apology. I noticed that Steven Nemes, in his book Eating Christ's Flesh, was quoting from a translation by Thomas Falls of the First Apology in which he used the word “assimilation” as the translation of the Greek word “metabole” (from which we get our word “metabolism”). So I said to Grok, "What is the Falls translation of this?" And Grok gave back a different answer. And I said, "But surely that's not Falls’ translation." And it said, "Yes, that is the correct translation that Falls gives." So I wrote to Steven Nemes and I said, "Steve, on page 117 of your book, you use the word ‘assimilation.’ Is that your own translation or is that the one from Falls?” And he said, "Oh, that's Falls’ translation, not mine." And he sent me a scan of the original page in the book. So, again, Grok just completely falsified this. Now, this would be really embarrassing, even humiliating, if a scholar were to go into print citing references like this that are completely wrong.

The worst case was when I was looking at the church father Origen's treatise Against Celsus. I had a quotation, but I always like to check out my secondary sources. So I asked Grok, "What does Origen say in book three, chapter 157 of Against Celsus?" And it came back with a completely different paragraph. And I thought, this is bizarre. So I went online and found in one of these Catholic archives a copy of Against Celsus. I looked up the reference in book three, 157, and sure enough it was the one that my original secondary source gave. It was completely different from the wording that Grok gave. So I thought, well, maybe it's just the reference that's wrong. Maybe it's somewhere else in Against Celsus that Origen says this. So I put in some search terms to find that paragraph somewhere else. It was nowhere to be found. It is not in Origen's work at all.
What that meant is that Grok wrote this thing out of whole cloth. It just made it up and ascribed it to Origen. Well, again, this would be humiliating if you were writing a term paper for your professor or for a class, much less a scholar publishing an article. So, I have gotten very disenchanted now with AI and want to alert our listeners about not trusting it, not relying on it. I would encourage our listeners to feed it some questions to which they already know the answer and see if it returns to them correct answers. So when I hear dialogues like this between ChatGPT and Alex O'Connor or this fellow, I got to tell you I'm just really skeptical about AI because I've really learned that it can't be trusted.

KEVIN HARRIS: You know, Wikipedia is old school now, and Wikipedia can still be very faulty, as I have found time and time again. You're supposed to go in and improve it, get the data straight, and things like that. But it's still something that you have to check, double-check, triple-check, and so on.

DR. CRAIG: Yes, that’s right.

KEVIN HARRIS: I mentioned quickly that I had a chance to talk to some drama students. They were all between the ages of 13 and 17. I asked them about AI, what they thought, and the main thing that came from them was that they cannot stand the fact that AI is making music, that people are using AI to write music. You can put in a certain theme, say I want it to sound kind of like this, and this thing will produce music for you. And it actually sounds pretty good. But they said it's not authentic. It's not human. It's just generated. I said, how can you tell the difference? And they said you just kind of know. You can just kind of tell. And so there is plenty of backlash. This second clip, he brings up equivocation. Let's go to clip number two.

YOUTUBER: Isn't it true that this is an equivocation between the universe and existence?

CHATGPT: That's a great point. Yes, it can be seen as an equivocation. The universe is a specific finite entity that began with the Big Bang according to current cosmology. Whereas existence as a concept encompasses all that is, including any potential multiverse or realities beyond our own. Conflating the two can lead to the assumption that because the universe had a beginning or cause, existence itself must have a cause, which isn't necessarily true. This is one of the common critiques of the cosmological argument.

KEVIN HARRIS: Is there equivocation going on in the traditional argument from contingency?

DR. CRAIG: No. This is just stupid. Equivocation is using the same term in the same context with two different meanings. For example, if I were to reason, “Socrates is Greek. Greek is a language. Therefore, Socrates is a language,” I would be guilty of equivocation. So, what this fellow in the video is talking about isn't even equivocation. It's not about using the same term with two different meanings. He's talking about using two different terms to refer to the same thing. And there's nothing fallacious at all about that. For example, if I were to say the president of the United States and Donald Trump, those are co-referential terms. Two different terms, but they have the same referent. They refer to the same thing. So there's nothing the matter with using co-referential terms, if they really are co-referential. Now, in this case, Alex O'Connor never used the universe and existence as co-referential terms. Just listen to the original video again. Nobody is maintaining that existence must have a cause. That would lead ultimately to the idea of a self-caused being. And neither Alex nor ChatGPT ever made that inference or mistake. I think that what's going on here is that the machine is just repeating alternatives, and it is not being led in the conversation in the way that it was when Alex was directing the conversation.

KEVIN HARRIS: Up next, he uses a thought experiment. Here's clip number three.

YOUTUBER: OK, so let's just use some synthetic logic now. And if I said that all unicorns have purple blood and purple-blooded things live forever, so therefore unicorns live forever, that would make logical sense and you would agree that unicorns live forever, right?

CHATGPT: From a purely logical standpoint, yes, the conclusion follows from the premises. If we accept the premises that all unicorns have purple blood and that everything with purple blood lives forever, then the logical conclusion is that unicorns live forever. However, this logic only holds within the framework of those hypothetical premises and doesn't say anything about the real world or the actual existence of unicorns. It's an example of how a logical structure can be sound but not necessarily reflect reality.

KEVIN HARRIS: This brings up several things, including that an argument can be valid but not necessarily sound.

DR. CRAIG: Exactly. You're quite right. And ChatGPT got it wrong at the end when it said that the reasoning can be sound but not reflect reality. What it should have said is that the reasoning can be logically valid but not sound because the premises are false. But if you have logically valid reasoning and true premises, then you have a sound argument that is about reality. So I don't understand why this fellow is bringing up these silly things.

KEVIN HARRIS: Next up, induction and the logical framework. Clip number four.

YOUTUBER: So what is true in the world is not a matter of synthetic logic. It requires a larger abstraction, perhaps even induction to understand. Correct?

CHATGPT: Exactly. Logical consistency within a hypothetical framework doesn't necessarily translate to truth about the actual world. Understanding what is true often requires empirical evidence, observation, and induction. Logical arguments help structure our reasoning, but they need to be anchored in reality to determine truth. So, yes, grasping what's true often involves a broader perspective and real world evidence.

YOUTUBER: So just because we reason to something logically, if it doesn't have a referent to reality, it is useless in describing reality. Would that be correct?

CHATGPT: Yes, that's correct. Logical reasoning is a powerful tool, but it must be grounded in actual facts or evidence to describe reality accurately. Without that connection to reality, even the most logically consistent argument could be meaningless in a real world context. It's the combination of sound reasoning and empirical evidence that gives us a true understanding of the world.

KEVIN HARRIS: What's the takeaway on this? Just because we input certain terms into a syllogism doesn't mean it corresponds to reality?

DR. CRAIG: I think the hidden text here is that Alex O'Connor was just engaged in logical reasoning without facts and therefore we can discount the conversation that he had with ChatGPT. But of course that is not true. Listen to the original conversation. It was based on both facts and logic. In fact, over and over again, Alex O'Connor said, "I am interested only in the facts." So the hidden implication that this fellow is making just doesn't connect with the original conversation which involved both logical reasoning and facts about reality.

KEVIN HARRIS: Yeah, it sounds like the old flying spaghetti monster type thing. Is he trying to compare God with a unicorn with purple blood? You can declare all these facts about this unicorn and the color of its blood and so on, but that doesn't mean that it exists.

DR. CRAIG: Right. I think there's a lot of subtext here like the comparison between God and the unicorns, but of course he never demonstrates that that, in fact, is what's going on. All he shows is that an argument can be logically valid but not sound if it doesn't have true premises. So when a person offers an argument, he needs to be sure both that he is obeying the rules of logic and that his premises are factually correct.

KEVIN HARRIS: Well, let's see what the conclusion is. This is the final clip. Clip number five.

YOUTUBER: Would it be safe to say that not only is deduction required but also induction in forming a complete concept?

CHATGPT: Yes, absolutely. Deduction helps us derive specific conclusions from general principles while induction allows us to form general principles from specific observations. Both are essential in building a complete and accurate understanding of the world. Induction gives us the broader concepts and theories, and deduction helps us apply those concepts in specific cases. Combining both methods leads to a more robust and comprehensive conceptual framework.

YOUTUBER: So, that's all Alex O'Connor did. He just fooled ChatGPT using his faulty reasoning which equivocated the concepts of the universe and existence.

DR. CRAIG: This is just so embarrassing. Drawing a distinction between inductive and deductive forms of reasoning does nothing to show that Alex O’Connor made mistakes in either kind of reasoning; nor did Alex O’Connor equate existence with the universe. So this poor fellow just doesn't know what he's talking about. And I think that the takeaway for us and for our listeners is that if you're going to criticize somebody, you need to make a genuine effort, a good faith effort, to understand the other person's position and to present it fairly and objectively before you engage in criticism.

KEVIN HARRIS: Quick reminder, check out our Reasonable Faith YouTube channels for Dr. Craig's famous full-length debates and lectures. Go to ReasonableFaithOrg. And for those short bite-sized answers to tough questions, visit DrCraigVideos. With both channels, you'll find something that will strengthen your faith and spark your curiosity. So, it's ReasonableFaithOrg or DrCraigVideos on YouTube. And don't forget to like and subscribe; you know that helps all these channels stay up to date with all the great content that we put out on a weekly basis. Thank you so much.[1]


[1] Total Running Time: 20:51 (Copyright © 2025 William Lane Craig)