The question of whether artificial intelligence can think is asked frequently and answered badly. It is answered badly because the participants in the debate have not agreed on what they mean by “think,” and in the absence of that agreement, the conversation devolves into competing intuitions rather than competing arguments.

I would like to try something more disciplined. I want to examine what it means to know something – not merely to produce a correct output, but to understand why the output is correct – and then ask whether current AI systems do that, or anything resembling it.

My conclusion is that they do not. But the argument for that conclusion is more interesting than the conclusion itself, because it reveals something important not just about machines but about what human reasoning actually is.

The Distinction

When I say that I “know” something, I mean at minimum three things. First, I hold a belief. Second, the belief is true. Third, I have reasons for the belief that justify it – I can explain why I believe it, I can trace the reasoning that led me to it, and I can evaluate that reasoning for soundness.

This is the classical definition of knowledge: justified true belief. It has been debated and refined for millennia, and there are well-known complications – Gettier's counterexamples most famously. But the core insight remains sound: knowledge is not merely having the right answer. It is having the right answer for the right reasons, and being able to give an account of those reasons.

Now consider a large language model. It produces an output – let us say, a correct answer to a philosophical question. Is this knowledge?

The output may be true. The model may even produce a sequence of sentences that look like a justification. But the model does not believe the output. It does not hold the output as true in the way a thinking being holds a belief. And the sentences that resemble justification are not the result of reasoning – they are the result of pattern completion over a training corpus. The model is not following an argument. It is predicting the next token.
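The mechanism described here – completion by statistical pattern rather than by reasoning – can be made concrete with a deliberately tiny sketch. This is a toy bigram counter, not any real model architecture, and the corpus is invented for illustration; but it shows the essential point: the "answer" it produces reflects frequencies in its training data, with no representation of truth, belief, or justification anywhere in the process.

```python
from collections import defaultdict, Counter

# A hypothetical toy corpus; real systems train on vastly more text,
# but the principle is the same: continuations come from statistics.
corpus = "the cat sat on the mat and the cat sat".split()

# Count bigram frequencies: which word tends to follow which.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_token(prev):
    """Return the most frequent continuation of `prev` in the corpus.
    The choice reflects counts, not belief or reasoning."""
    return bigrams[prev].most_common(1)[0][0]

print(next_token("the"))  # -> "cat", because "the cat" occurs most often
```

Nothing in this sketch "holds" that the cat sat anywhere; it only reports which token most often followed another. The disanalogy with real systems is one of scale and sophistication, not of kind, on the essay's argument.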

This is not a minor difference. It is the entire difference.

The Chinese Room, Revisited

John Searle proposed a thought experiment in 1980 that remains relevant. A person who speaks no Chinese sits in a room with a manual of rules for manipulating Chinese symbols. Chinese speakers pass questions under the door. The person follows the rules and passes back answers. From the outside, the room appears to understand Chinese. From the inside, there is no understanding at all – only symbol manipulation according to a set of rules.

The objections to this thought experiment are well known, and some of them are strong. But the core insight survives: the production of appropriate outputs does not entail understanding. Something can behave as if it understands without understanding anything at all, and the gap between behavior and understanding is not a gap we should be comfortable closing by definition.

When we say an AI “knows” something, we are using the word metaphorically. We are attributing understanding on the basis of output, which is the same error as attributing comprehension to Searle’s room on the basis of its Chinese responses.

Why This Matters Practically

If this were merely an academic distinction, I would not press the point. But the conflation of output with understanding has real consequences.

When AI systems are deployed to make decisions that affect human lives – in medicine, in criminal justice, in hiring, in education – the assumption that the system “understands” the domain grants it an authority it has not earned. A diagnostic system that produces correct outputs eighty percent of the time has not understood medicine. It has found statistical patterns in training data. The twenty percent of cases where it fails are precisely the cases where understanding matters most, because those are the cases that fall outside the patterns.

A human doctor who is wrong twenty percent of the time can explain their reasoning, identify where the error occurred, and learn from it. An AI system that is wrong twenty percent of the time can do none of these things. It cannot explain, because it does not reason. It cannot learn from its specific error, because it does not know what an error is. It can only be retrained on updated data, a process that bears no resemblance to understanding.

The Harder Question

There is a harder question lurking beneath this discussion, and I will name it directly: is it possible, in principle, for a machine to think?

I am not prepared to answer this with certainty. It may be that consciousness and genuine understanding require something that current computational architectures cannot provide – substrate-dependent properties that cannot be replicated in silicon. Or it may be that understanding is substrate-independent and that a sufficiently complex system, running a sufficiently different kind of process, could genuinely think.

What I am prepared to argue is that current AI systems do not think, and that the burden of proof falls on those who claim otherwise. The default assumption should not be that anything producing human-like outputs is thinking. The default assumption should be that output is not evidence of understanding until we have a rigorous account of how understanding arises from the system’s processes.

We do not have that account. What we have is impressive engineering and a tendency to anthropomorphize.

The Standard

I hold this standard for human thought as well. A person who produces correct conclusions without being able to explain their reasoning has not demonstrated knowledge. They have demonstrated recall, or intuition, or luck. Knowledge requires the ability to give an account – to trace the path from evidence to conclusion and to identify where that path could have gone differently.

This is demanding. It should be. The alternative is a world in which we cannot distinguish between understanding and imitation, between thought and performance, between knowledge and pattern matching.

That distinction is the foundation of intellectual life. If we lose it, we lose the ability to know whether anyone – human or machine – actually understands anything at all.

I am not prepared to lose that. The question of what it means to think is too important to be settled by product announcements.