Chinese Room Paradox

What is the Chinese Room Paradox?

The Chinese Room Paradox is a challenge to the idea that a computer can truly understand language and have a mind like a human. Imagine you’re following a recipe—you can bake a cake by following the steps, but that doesn’t mean you understand the chemistry of baking. The paradox, posed by philosopher John Searle in 1980, asks whether a computer could ever truly “get” what it’s doing, or if it’s just following instructions without any real understanding.

John Searle came up with this scenario to stir up thinking about artificial intelligence—computers that are designed to think and learn on their own. Some people claimed that if a computer could follow a set of instructions and act as if it understood, then it would be as good as a human mind. Searle wanted to show that there’s a difference between just doing something and really grasping it.

The thought experiment goes like this: There’s a person who doesn’t know Chinese sitting in a room. They get Chinese writing through a slot in the door, and by following a set of instructions in their own language, they send back the right Chinese responses. From the outside, it seems like there’s a Chinese-understanding person in the room. But in reality, the person is just using rules without actually knowing what the words mean.


Simple Definitions of the Chinese Room Paradox

1. The Chinese Room Paradox Questions If Machines Can Really “Understand”: It’s like having a conversation in a language you don’t speak using a translation book. You can make it seem like you understand by finding the right responses in the book, but you don’t actually get what you’re saying or the conversation’s meaning.

2. The Paradox Challenges Whether Smart Computers Have Minds: If a computer acts like it knows what’s going on, is it smart like us or just faking it? To figure this out, the paradox uses the example of a person in a room using cheat sheets to respond in a language they don’t know; it’s a way to show that following rules isn’t the same as understanding.

Key Arguments

  • Symbol Manipulation Is Not the Same As Understanding: Just like moving chess pieces around a board doesn’t mean you understand the strategies of chess, processing symbols doesn’t equal understanding. This part of the paradox makes us think about what it really means to “get” something.
  • Machines Can Simulate, Not Duplicate Understanding: This part argues that computers, even when they seem smart, aren’t really grasping what they’re doing. They might be good actors, but they aren’t truly “feeling” the role.
  • Consciousness and Cognition Are Not Simply Computational: A machine can’t simply go through the motions and thereby become conscious the way humans are. Understanding and awareness aren’t just about processing data—they’re more complex and harder to recreate in a computer.
  • Programs Are Insufficient for Minds: This shows us that no matter how complicated a program is, it doesn’t actually have a mind. A set of instructions can’t replace the real understanding that comes with being human.

Examples and Why They Are Relevant

  • Translating Languages: When a computer translates languages, it isn’t really “understanding” either language. It’s like using a phrasebook—you can find the right words, but you don’t truly know what you’re saying. This example shows the difference between acting like you understand and really understanding.
  • Playing Chess: A computer can play chess by calculating moves, but it doesn’t enjoy the game or get creative—that’s because it doesn’t really understand the game in a human way. This is similar to the Chinese Room because it shows how something can appear smart without actually having a mind.
  • Predicting Weather: Computer programs can predict the weather by looking at patterns, but they don’t actually “feel” the weather. This helps us see that understanding involves more than just patterns and predictions, much like the paradox suggests.
  • Online Customer Service Bots: These bots can answer questions and help you shop, but they don’t actually understand your needs or feelings—they’re just following a script (a minimal sketch of such a scripted bot appears right after this list). This is like the Chinese Room because the bot seems to understand but really doesn’t.
  • Siri or Alexa: When you ask Siri a question, it gives you an answer, but it doesn’t really “know” anything about the topic—it’s just finding information and reading it to you. This shows us the difference between a computer’s ability to simulate understanding and true comprehension.
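To make the “just following a script” idea concrete, here is a minimal sketch of a keyword-matching bot in Python. The keywords, rules, and replies are invented for illustration; the point is only that nothing in the program represents what the words mean, yet the output can look helpful.

```python
# A toy scripted "customer service bot": keyword lookup, no understanding.
# All rules and phrasings here are invented for illustration.

RULES = [
    ("refund",   "I'm sorry to hear that. I can start a refund for you."),
    ("shipping", "Standard shipping usually takes 3-5 business days."),
    ("hours",    "We're open 9am-5pm, Monday through Friday."),
]

DEFAULT = "Could you tell me a little more about your question?"

def reply(message: str) -> str:
    """Return a canned response by matching keywords -- pure symbol shuffling."""
    text = message.lower()
    for keyword, canned_reply in RULES:
        if keyword in text:
            return canned_reply
    return DEFAULT

if __name__ == "__main__":
    print(reply("What are your hours?"))   # looks helpful...
    print(reply("I want a refund!"))       # ...but no rule "knows" what a refund is
```

Like the person in the Chinese Room, the program only matches symbols to symbols; whether that could ever add up to understanding is exactly what the paradox asks.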

Answer or Resolution

The Chinese Room Paradox is a hot topic, with lots of different opinions. Some agree with Searle and think understanding is more than a computer can handle. But others think that the room and rule book together could be considered “understanding,” or that understanding might emerge from a sufficiently advanced system.

Others believe that just because the person does not know Chinese, it doesn’t mean machines couldn’t ever understand. They argue that if a system has enough complexity and experiences, it might actually be said to understand. These ideas fuel even more debates about how we think, learn, and exist.

Major Criticism

Searle’s paradox has drawn plenty of criticism. Some say it’s unfair because it treats understanding as a mysterious something that can’t be given physical form. Others argue that Searle only shows what one person in the room can’t do, not what computers as whole systems could potentially do. They believe that a sufficiently powerful computer system might actually be able to understand, just like humans.

Related Topics

  • Turing Test: This test checks if a machine can act so human-like in conversation that people can’t tell it’s a machine. It’s connected to the Chinese Room argument because both deal with whether actions or behavior can prove understanding or consciousness.
  • Cognitive Science: This field studies how minds work, and the paradox has made scientists consider how understanding occurs. It’s related because it challenges us to think deeply about the mind and intelligence.
  • Philosophy of Mind: Philosophers wonder about what consciousness is and how it relates to the body and world. The Chinese Room is a big part of these debates as it asks whether machines could ever be conscious like us.

Why Is It Important

The Chinese Room isn’t just a clever puzzle—it makes us question the essence of our own intelligence and the limits of machines. It’s key in deciding whether creations like robots or AI can be considered alive, or have rights. This has huge effects on how we treat AI, and how we let AI treat us. It’s important for everyone, not just scientists and philosophers, because as our world fills up with smart machines, we need to understand what they’re truly capable of—and that influences our work, laws, and entire lives.

This paradox urges us to reflect on human nature and whether we can, or should, make machines that could challenge our standing as the most intelligent beings around. It brings ethical questions to our doorstep, like whether machines that seem to understand us deserve some form of ethical consideration.

The Chinese Room Paradox remains a bold criticism of the belief that computers can be as intelligent as humans. We don’t have all the answers yet, and maybe we never will, but it’s a crucial part of understanding where technology could take us.

As technology grows and AI becomes more advanced, remembering the difference between mimicry and real understanding is crucial. The paradox keeps us thinking about what makes us human, how we understand the world, and how far we should go with our machines. Whether it proves or disproves strong AI, it’s a critical tool for navigating our technological future.


The Chinese Room Argument

The Chinese Room argument, devised by John Searle, is an argument against the possibility of true artificial intelligence. The argument centers on a thought experiment in which someone who knows only English sits alone in a room following English instructions for manipulating strings of Chinese characters, such that to those outside the room it appears as if someone in the room understands Chinese. The argument is intended to show that while suitably programmed computers may appear to converse in natural language, they are not capable of understanding language, even in principle. Searle argues that the thought experiment underscores the fact that computers merely use syntactic rules to manipulate symbol strings, but have no understanding of meaning or semantics. Searle's argument is a direct challenge to proponents of Artificial Intelligence, and the argument also has broad implications for functionalist and computational theories of meaning and of mind. As a result, there have been many critical replies to the argument.

1. Overview

Work in Artificial Intelligence (AI) has produced computer programs that can beat the world chess champion, and programs with which one can converse in natural language. Our experience shows that playing chess and carrying on a conversation are activities that require understanding and intelligence. Does computer prowess at chess and conversation then show that computers can understand and be intelligent? Will further development result in digital computers that fully match or even exceed human intelligence? Alan Turing (1950), one of the pioneer theoreticians of computing, believed the answer to these questions was “yes”. Turing proposed what is now known as “The Turing Test”: if a computer can pass for human in online chat, we should grant that it is intelligent. Later workers in AI claimed that computers already understood at least some natural language. Beginning in 1980, philosopher John Searle introduced a short and widely-discussed argument intended to show conclusively that it is impossible for digital computers to understand language or think.

Searle argues that a good way to test a theory of mind, say a theory that holds that understanding can be created by doing such and such, is to imagine what it would be like to do what the theory says would create understanding. Searle (1999) summarized the Chinese Room argument concisely:

Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a data base) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output). The program enables the person in the room to pass the Turing Test for understanding Chinese but he does not understand a word of Chinese.

Searle goes on to say, “The point of the argument is this: if the man in the room does not understand Chinese on the basis of implementing the appropriate program for understanding Chinese then neither does any other digital computer solely on that basis because no computer, qua computer, has anything the man does not have.”

Searle develops broader implications of his argument. Searle also aims to refute the functionalist approach to understanding minds, especially that form of functionalism known as the Computational Theory of Mind, that treats minds as information processing systems. As a result of its scope, as well as Searle's clear and forceful writing style, the Chinese Room argument has probably been the most widely discussed philosophical argument in cognitive science to appear in the past 25 years. By 1991 computer scientist Pat Hayes had defined Cognitive Science as the ongoing research project of refuting Searle's argument. Cognitive psychologist Steven Pinker (1997) pointed out that by the mid-1990s well over 100 articles had been published on Searle's thought experiment—and that discussion of it was so pervasive on the Internet that Pinker found it a compelling reason to remove his name from all Internet discussion lists.

2. Historical Background

Searle's argument has three important antecedents. The first of these is an argument set out by the philosopher and mathematician, Gottfried Leibniz (1646–1716). This argument, often known as “Leibniz’ Mill”, appears as section 17 of Leibniz’ Monadology. Like Searle's argument, Leibniz’ argument takes the form of a thought experiment. Leibniz asks us to imagine a physical system, a machine, that behaves in such a way that it supposedly thinks and has experiences (“perception”).

17. Moreover, it must be confessed that perception and that which depends upon it are inexplicable on mechanical grounds, that is to say, by means of figures and motions. And supposing there were a machine, so constructed as to think, feel, and have perception, it might be conceived as increased in size, while keeping the same proportions, so that one might go into it as into a mill. That being so, we should, on examining its interior, find only parts which work one upon another, and never anything by which to explain a perception. Thus it is in a simple substance, and not in a compound or in a machine, that perception must be sought for. [Robert Latta translation]

Notice that Leibniz's strategy here is to contrast the overt behavior of the machine, which might display evidence of thought, with the way the machine operates internally. He points out that these internal mechanical operations are just parts moving from point to point, nothing that is conscious or that can explain thinking, feeling or perceiving. For Leibniz physical states are not sufficient for, nor constitutive of, mental states.

A second antecedent to the Chinese Room argument is the idea of a paper machine, a computer implemented by a human. This idea is found in the work of Alan Turing, for example in “Intelligent Machinery” (1948). Turing writes there that he wrote a program for a “paper machine” to play chess. A paper machine is a kind of program, a series of simple steps like a computer program, but written in natural language (e.g., English), and followed by a human. The human operator of the paper chess-playing machine need not (otherwise) know how to play chess. All the operator does is follow the instructions for generating moves on the chess board. In fact, the operator need not even know that he or she is involved in playing chess—the input and output strings, such as “QKP2–QKP3” need mean nothing to the operator of the paper machine.
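A schematic illustration may help (this is only a toy sketch, not Turing's actual chess program; the replies and the default move are invented). A paper machine is just a table of instructions that the operator applies mechanically to input strings:

```python
# Toy sketch of a "paper machine": the operator looks up each received string
# in an instruction table and passes back the listed reply. The entries below
# are invented; the operator need not know that the strings denote chess moves.

INSTRUCTIONS = {
    "QKP2-QKP3": "KP2-KP4",
    "KP2-KP4":   "QP2-QP3",
    "KN1-KB3":   "QN1-QB3",
}

def operate(received: str) -> str:
    """Apply the instruction book to an input string, with a fixed default reply."""
    return INSTRUCTIONS.get(received, "QP2-QP3")

print(operate("QKP2-QKP3"))  # symbols in, symbols out; the game is invisible to the operator
```

The operator's situation is exactly the one Searle later exploits: the instructions can be followed perfectly well by someone to whom the input and output strings mean nothing.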

Turing was optimistic that computers themselves would soon be able to exhibit apparently intelligent behavior, answering questions posed in English and carrying on conversations. Turing (1950) proposed what is now known as the Turing Test: if a computer could pass for human in on-line chat, it should be counted as intelligent. By the late 1970s, as computers became faster and less expensive, some in the AI community claimed that their programs could understand English sentences, using a database of background information. The work of one of these, Yale researcher Roger Schank (Schank & Abelson 1977) came to the attention of John Searle. Schank developed a technique called “conceptual representation” that used “scripts” to represent conceptual relations (a form of Conceptual Role Semantics). Searle's argument was originally presented as a response to the claim that AI programs such as Schank's literally understand the sentences that they respond to.

A third more immediate antecedent to the Chinese Room argument emerged in early discussion of functionalist theories of minds and cognition. Functionalists hold that mental states are defined by the causal role they play in a system (just as a door stop is defined by what it does, not by what it is made out of). Critics of functionalism were quick to turn its proclaimed virtue of multiple realizability against it. In contrast with type-type identity theory, functionalism allowed beings with different physiology to have the same types of mental states as humans—pains, for example. But it was pointed out that if aliens could realize the functional properties that constituted mental states, then, presumably so could systems even less like human brains. The computational form of functionalism is particularly vulnerable to this maneuver, since a wide variety of systems with simple components are computationally equivalent (see e.g., Maudlin 1989 for a computer built from buckets of water). Critics asked if it was really plausible that these inorganic systems could have mental states.

Daniel Dennett (1978) reports that in 1974 Lawrence Davis gave a colloquium at MIT in which he presented one such unorthodox implementation. Dennett summarizes Davis' thought experiment as follows:

Let a functionalist theory of pain (whatever its details) be instantiated by a system the subassemblies of which are not such things as C-fibers and reticular systems but telephone lines and offices staffed by people. Perhaps it is a giant robot controlled by an army of human beings that inhabit it. When the theory's functionally characterized conditions for pain are now met we must say, if the theory is true, that the robot is in pain. That is, real pain, as real as our own, would exist in virtue of the perhaps disinterested and businesslike activities of these bureaucratic teams, executing their proper functions.

In “Troubles with Functionalism”, also published in 1978, Ned Block envisions the entire population of China implementing the functions of neurons in the brain. This scenario has subsequently been called “The Chinese Nation” or “The Chinese Gym”. We can suppose that every Chinese citizen would be given a call-list of phone numbers, and at a preset time on implementation day, designated “input” citizens would initiate the process by calling those on their call-list. When any citizen's phone rang, he or she would then phone those on his or her list, who would in turn contact yet others. No phone message need be exchanged; all that is required is the pattern of calling. The call-lists would be constructed in such a way that the patterns of calls implemented the same patterns of activation that occur in someone's brain when that person is in a mental state—pain, for example. The phone calls play the same functional role as neurons causing one another to fire. Block was primarily interested in qualia, and in particular, whether it is plausible to hold that the population of China might collectively be in pain, while no individual member of the population experienced any pain.
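A small sketch may make the structure of Block's scenario vivid (the call-lists below are invented; a faithful implementation would need roughly a citizen per neuron). Nothing is transmitted except the fact of a call; the computation lives entirely in the pattern of who rings whom:

```python
# Toy sketch of the "Chinese Nation": when a citizen's phone rings, he or she
# phones everyone on a prearranged call-list. No message content is exchanged;
# only the pattern of calls matters. The call-lists are invented for illustration.

from collections import deque

CALL_LISTS = {
    "citizen_1": ["citizen_2", "citizen_3"],   # a designated "input" citizen
    "citizen_2": ["citizen_4"],
    "citizen_3": ["citizen_4", "citizen_5"],
    "citizen_4": [],
    "citizen_5": ["citizen_1"],                # loops mimic recurrent circuits
}

def implementation_day(input_citizens):
    """Propagate calls and return the order in which phones ring (the activation pattern)."""
    order, already_called = [], set(input_citizens)
    queue = deque(input_citizens)
    while queue:
        citizen = queue.popleft()
        order.append(citizen)
        for callee in CALL_LISTS[citizen]:
            if callee not in already_called:   # each phone rings once in this toy version
                already_called.add(callee)
                queue.append(callee)
    return order

print(implementation_day(["citizen_1"]))
```

Block's question is whether instantiating such a calling pattern, however faithful to the activation pattern of a brain in pain, could amount to anyone actually being in pain.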

Thus Block's precursor thought experiment, as with those of Davis and Dennett, is a system of many humans rather than one. The focus is on consciousness, but to the extent that Searle's argument also involves consciousness, the thought experiment is closely related to Searle's.

In 1980, John Searle published “Minds, Brains and Programs” in the journal The Behavioral and Brain Sciences. In this article, Searle sets out the argument, and then replies to the half-dozen main objections that had been raised during his earlier presentations at various university campuses (see next section). In addition, Searle's article in BBS was published along with comments and criticisms by 27 cognitive science researchers. These 27 comments were followed by Searle's replies to his critics.

Over the last two decades of the twentieth century, the Chinese Room argument was the subject of very many discussions. In 1984, Searle presented the Chinese Room argument in a book, Minds, Brains and Science. In January 1990, the popular periodical Scientific American took the debate to a general scientific audience. Searle included the Chinese Room Argument in his contribution, “Is the Brain's Mind a Computer Program?”, and Searle's piece was followed by a responding article, “Could a Machine Think?”, written by Paul and Patricia Churchland. Soon thereafter Searle had a published exchange about the Chinese Room with another leading philosopher, Jerry Fodor (in Rosenthal (ed.) 1991).

3. The Chinese Room Argument

The heart of the argument is an imagined human simulation of a computer, similar to Turing's Paper Machine. The human in the Chinese Room follows English instructions for manipulating Chinese symbols, whereas a computer “follows” a program written in a computing language. The human produces the appearance of understanding Chinese by following the symbol manipulating instructions, but does not thereby come to understand Chinese. Since a computer just does what the human does—manipulate symbols on the basis of their syntax alone—no computer, merely by following a program, comes to genuinely understand Chinese.

This narrow argument, based closely on the Chinese Room scenario, is directed at a position Searle calls “Strong AI”. Strong AI is the view that suitably programmed computers (or the programs themselves) can understand natural language and actually have other mental capabilities similar to the humans whose abilities they mimic. According to Strong AI, a computer may play chess intelligently, make a clever move, or understand language. By contrast, “weak AI” is the view that computers are merely useful in psychology, linguistics, and other areas, in part because they can simulate mental abilities. But weak AI makes no claim that computers actually understand or are intelligent. The Chinese Room argument is not directed at weak AI, nor does it purport to show that machines cannot think—Searle says that brains are machines, and brains think. It is directed at the view that formal computations on symbols can produce thought.

We might summarize the narrow argument as a reductio ad absurdum against Strong AI as follows. Let L be a natural language, and let us say that a “program for L” is a program for conversing fluently in L. A computing system is any system, human or otherwise, that can run a program.

(1) If Strong AI is true, then there is a program for Chinese such that if any computing system runs that program, that system thereby comes to understand Chinese.

(2) I could run a program for Chinese without thereby coming to understand Chinese.

(3) Therefore Strong AI is false.

The second premise is supported by the Chinese Room thought experiment. The conclusion of this narrow argument is that running a program cannot create understanding. The wider argument includes the claim that the thought experiment shows more generally that one cannot get semantics (meaning) from syntax (formal symbol manipulation). That and related issues are discussed in the section The Larger Philosophical Issues.

4. Replies to the Chinese Room Argument

Criticisms of the narrow Chinese Room argument against Strong AI have often followed three main lines, which can be distinguished by how much they concede:

(1) Some critics concede that the man in the room doesn't understand Chinese, but hold that at the same time there is some other thing that does understand. These critics object to the inference from the claim that the man in the room does not understand Chinese to the conclusion that no understanding has been created. There might be understanding by a larger, or different, entity. This is the strategy of The Systems Reply and the Virtual Mind Reply. These replies hold that there could be understanding in the original Chinese Room scenario.

(2) Other critics concede Searle's claim that just running a natural language processing program as described in the CR scenario does not create any understanding, whether by a human or a computer system. But these critics hold that a variation on the computer system could understand. The variant might be a computer embedded in a robotic body, having interaction with the physical world via sensors and motors (“The Robot Reply”), or it might be a system that simulated the detailed operation of an entire brain, neuron by neuron (“the Brain Simulator Reply”).

(3) Finally, some critics do not concede even the narrow point against AI. These critics hold that the man in the original Chinese Room scenario might understand Chinese, despite Searle's denials, or that the scenario is impossible. For example, critics have argued that our intuitions in such cases are unreliable. Other critics have held that it all depends on what one means by “understand”—points discussed in the section on the Intuition Reply. Others (e.g. Sprevak 2007) object to the assumption that any system (e.g. Searle in the room) can run any computer program. And finally some have argued that if it is not reasonable to attribute understanding on the basis of the behavior exhibited by the Chinese Room, then it would not be reasonable to attribute understanding to humans on the basis of similar behavioral evidence (Searle calls this last the “Other Minds Reply”).

In addition to these responses specifically to the Chinese Room scenario and the narrow argument to be discussed here, critics also independently argue against Searle's larger claim, and hold that one can get semantics (that is, meaning) from syntactic symbol manipulation, including the sort that takes place inside a digital computer, a question discussed in the section on syntax and semantics.

4.1 The Systems Reply

In the original BBS article, Searle identified and discussed several responses to the argument that he had come across in giving the argument in talks at various places. As a result, these responses have received the most attention. What Searle calls “perhaps the most common reply” is the Systems Reply.

The Systems Reply, which Searle says was originally associated with Yale, concedes that the man in the room does not understand Chinese. But, the reply continues, the man is but a part, a central processing unit (CPU), in a larger system. The larger system includes the memory (scratchpads) containing intermediate states, and the instructions—the complete system that is required for answering the Chinese questions. While the man running the program does not understand Chinese, the system as a whole does.

Ned Block was one of the first to press the Systems Reply, along with many others including Jack Copeland, Daniel Dennett, Jerry Fodor, John Haugeland, Ray Kurzweil and Georges Rey. Rey (1986) says the person in the room is just the CPU of the system. Kurzweil (2002) says that the human being is just an implementer and of no significance (presumably meaning that the properties of the implementer are not necessarily those of the system). Kurzweil hews to the spirit of the Turing Test and holds that if the system displays the apparent capacity to understand Chinese “it would have to, indeed, understand Chinese”—Searle is contradicting himself in saying in effect, “the machine speaks Chinese but doesn't understand Chinese”.

Margaret Boden (1988) raises considerations about levels of description. “Computational psychology does not credit the brain with seeing bean-sprouts or understanding English: intentional states such as these are properties of people, not of brains” (244). “In short, Searle's description of the robot's pseudo-brain (that is, of Searle-in-the-robot) as understanding English involves a category-mistake comparable to treating the brain as the bearer, as opposed to the causal basis, of intelligence”. Boden (1988) and Cole (1984) point out that the room operator is a conscious agent, while the CPU in a computer is not—the Chinese Room scenario asks us to take the perspective of the implementer, and not surprisingly fails to see the larger picture.

Searle's response to the Systems Reply is simple: in principle, the man can internalize the entire system, memorizing all the instructions, doing all the calculations in his head. He could then leave the room and wander outdoors, perhaps even conversing in Chinese. But he still would have no way to attach “any meaning to the formal symbols”. The man would now be the entire system, yet he still would not understand Chinese. For example, he would not know the meaning of the Chinese word for hamburger. He still cannot get semantics from syntax. (See below the section on Syntax and Semantics).

In his 2002 paper “The Chinese Room from a Logical Point of View”, Jack Copeland considers Searle's response to the Systems Reply and argues that a homunculus inside Searle's head might understand even though the room operator himself does not, just as modules in minds solve tensor equations that enable us to catch cricket balls. Copeland then turns to consider the Chinese Gym, and again appears to endorse the Systems Reply: “…the individual players [do not] understand Chinese. But there is no entailment from this to the claim that the simulation as a whole does not come to understand Chinese. The fallacy involved in moving from part to whole is even more glaring here than in the original version of the Chinese Room Argument”. Copeland denies that connectionism implies that a room of people can simulate the brain.

John Haugeland writes (2002) that Searle's response to the Systems Reply is flawed: “…what he now asks is what it would be like if he, in his own mind, were consciously to implement the underlying formal structures and operations that the theory says are sufficient to implement another mind”. According to Haugeland, his failure to understand Chinese is irrelevant: he is just the implementer. The larger system implemented would understand—there is a level-of-description fallacy.

Shaffer 2009 examines modal aspects of the logic of the CRA and argues that familiar versions of the System Reply are question-begging. But, Shaffer claims, a modalized version of the System Reply succeeds because there are possible worlds in which understanding is an emergent property of complex syntax manipulation.

Stevan Harnad has defended Searle's argument against Systems Reply critics in two papers. In his 1989 paper, Harnad writes “Searle formulates the problem as follows: Is the mind a computer program? Or, more specifically, if a computer program simulates or imitates activities of ours that seem to require understanding (such as communicating in language), can the program itself be said to understand in so doing?” (Note the specific claim: the issue is taken to be whether the program itself understands.) Harnad concludes: “On the face of it, [the CR argument] looks valid. It certainly works against the most common rejoinder, the ‘Systems Reply’….”

4.1.1 The Virtual Mind Reply

The Virtual Mind reply concedes, as does the System Reply, that the operator of the Chinese Room does not understand Chinese merely by running the paper machine. However, unlike the System Reply, the Virtual Mind reply holds that a running system may create new entities that are distinct from the system, as well as its subparts such as the CPU or operator. In particular, a running system might create a distinct mind that understands Chinese. This virtual person would be distinct from both the room operator and the entire system. The psychological traits, including linguistic abilities, of any mind created by artificial intelligence will depend upon the programming, and will not be identical with the traits and abilities of a CPU or the operator of a paper machine, such as Searle in his scenario. According to the VM reply the mistake in the Chinese Room Argument is to take the claim of strong AI to be “the computer understands Chinese” or “the System understands Chinese”. The claim at issue should be “the computer creates a mind that understands Chinese”. A familiar model is characters in computer or video games. These characters have various abilities and personalities, and the characters are not identical with the hardware or program that creates them. A single running system might even control two robots simultaneously, one of which converses only in Chinese and one of which can converse only in English. Thus the VM reply asks us to distinguish between minds and their realizing physical systems.

Minsky (1980) and Sloman and Croucher (1980) suggested a Virtual Mind reply when the Chinese Room argument first appeared. In his widely-read 1989 paper “Computation and Consciousness”, Tim Maudlin considers minimal physical systems that might implement a computational system running a program. His discussion revolves around his imaginary Olympia machine, a system of buckets that transfers water, implementing a Turing machine. Maudlin's main target is the computationalists' claim that such a machine could have phenomenal consciousness. However in the course of his discussion, Maudlin considers the Chinese Room argument. Maudlin (citing Minsky, and Sloman and Croucher) points out a Virtual Mind reply that the agent that understands could be distinct from the physical system (414). Thus “Searle has done nothing to discount the possibility of simultaneously existing disjoint mentalities” (414–5).

Perlis (1992), Chalmers (1996) and Block (2002) have apparently endorsed versions of a Virtual Mind reply as well, as has Richard Hanley in The Metaphysics of Star Trek (1997). Penrose (2002) is a critic of this strategy, and Stevan Harnad scornfully dismisses such heroic resorts to metaphysics. Harnad defended Searle's position in a “Virtual Symposium on Virtual Minds” (1992) against Patrick Hayes and Don Perlis. Perlis pressed a virtual minds argument derived, he says, from Maudlin. Chalmers (1996) notes that the room operator is just a causal facilitator, a “demon”, so that his states of consciousness are irrelevant to the properties of the system as a whole. Like Maudlin, Chalmers raises issues of personal identity—we might regard the Chinese Room as “two mental systems realized within the same physical space. The organization that gives rise to the Chinese experiences is quite distinct from the organization that gives rise to the demon's experiences” (326).

Cole (1991, 1994) develops the reply most extensively and argues as follows: Searle's argument presupposes, and requires, that the agent of understanding be the computer itself or, in the Chinese Room parallel, the person in the room. Searle's failure to understand Chinese in the room does not show that there is no understanding being created. Cole argues that the mental traits that constitute John Searle, his beliefs and desires, memories and personality traits—are all irrelevant and causally inert in producing the answers to the Chinese questions. If understanding of Chinese were created by running the program, the mind understanding the Chinese would not be the computer, nor, in the Chinese Room, would the person understanding Chinese be the room operator. The person understanding the Chinese would be a distinct person from the room operator. So if the activity in the room creates any understanding, it would not be John Searle's understanding. We can say that a virtual person who understands Chinese would be created by running the program.

Cole (1991) offers an additional argument that the mind doing the understanding is neither the mind of the room operator nor the system consisting of the operator and the program: running a suitably structured computer program might produce answers submitted in Chinese and also answers to questions submitted in Korean. Yet the Chinese answers might apparently display completely different knowledge and memories, beliefs and desires than the answers to the Korean questions—along with a denial that the Chinese answerer knows any Korean, and vice versa. Thus the behavioral evidence would be that there were two non-identical minds (one understanding Chinese only, and one understanding Korean only). Since these might have mutually exclusive psychological properties, they cannot be identical, and ipso facto, cannot be identical with the mind of the implementer in the room. Analogously, a video game might include a character with one set of cognitive abilities (smart, understands Chinese) as well as another character with an incompatible set (stupid, English monoglot). These inconsistent cognitive traits cannot be traits of the XBOX system that realizes them. The implication seems to be that minds generally are more abstract than the systems that realize them (see Mind and Body in the Larger Philosophical Issues section).

In short, the Virtual Mind argument is that since the only evidence Searle provides that no understanding of Chinese is created is that he himself would not understand Chinese in the room, the Chinese Room Argument cannot refute a differently formulated but equally strong AI claim: that it is possible to create understanding by running a suitably programmed digital computer. Maudlin (1989) says that Searle has not adequately responded to this criticism.

Others however have replied to the Virtual Mind claim, including Stevan Harnad and mathematical physicist Roger Penrose. Penrose is generally sympathetic to the points Searle raises with the Chinese Room argument, and has argued against the Virtual Mind reply. Penrose does not believe that computational processes can account for consciousness, both on Chinese Room grounds, as well as because of limitations on formal systems revealed by Kurt Gödel's incompleteness proof. (Penrose has two books on mind and consciousness; Chalmers and others have responded to Penrose's appeals to Gödel.) In his 2002 article “Consciousness, Computation, and the Chinese Room” that specifically addresses the Chinese Room argument, Penrose argues that the Chinese Gym variation—with a room expanded to the size of India, with Indians doing the processing—shows it is very implausible to hold there is “some kind of disembodied ‘understanding’ associated with the person's carrying out of that algorithm, and whose presence does not impinge in any way upon his own consciousness” (230–1). Penrose concludes the Chinese Room argument refutes Strong AI. Christian Kaernbach (2005) reports that he subjected the virtual mind theory to an empirical test, with negative results.

4.2 The Robot Reply

The Robot Reply concedes Searle is right about the Chinese Room scenario: it shows that a computer trapped in a computer room cannot understand language, or know what words mean. The Robot reply is responsive to the problem of knowing the meaning of the Chinese word for hamburger—Searle's example of something the room operator would not know. It seems reasonable to hold that we know what a hamburger is because we have seen one, and perhaps even made one, or tasted one, or at least heard people talk about hamburgers and understood what they are by relating them to things we do know by seeing, making, and tasting. Given this is how one might come to know what hamburgers are, the Robot Reply suggests that we put a digital computer in a robot body, with sensors, such as video cameras and microphones, and add effectors, such as wheels to move around with, and arms with which to manipulate things in the world. Such a robot—a computer with a body—could do what a child does, learn by seeing and doing. The Robot Reply holds that such a digital computer in a robot body, freed from the room, could attach meanings to symbols and actually understand natural language. Margaret Boden, Tim Crane, Daniel Dennett, Jerry Fodor, Stevan Harnad, Hans Moravec and Georges Rey are among those who have endorsed versions of this reply at one time or another. The Robot Reply in effect appeals to “wide content” or “externalist semantics”. This can agree with Searle that syntax and internal connections are insufficient for semantics, while holding that suitable causal connections with the world can provide content to the internal symbols.

Searle does not think this reply to the Chinese Room argument is any stronger than the Systems Reply. All the sensors do is provide additional input to the computer—and it will be just syntactic input. We can see this by making a parallel change to the Chinese Room scenario. Suppose the man in the Chinese Room receives, in addition to the Chinese characters slipped under the door, a stream of numerals that appear, say, on a ticker tape in a corner of the room. The instruction books are augmented to use the numbers from the tape as input, along with the Chinese characters. Unbeknownst to the man in the room, the numbers in the tape are the digitized output of a video camera (and possibly other sensors). Searle argues that additional syntactic inputs will do nothing to allow the man to associate meanings with the Chinese characters. It is just more work for the man in the room.

Jerry Fodor, with Hilary Putnam and David Lewis, was a principal architect of the computational theory of mind that Searle's wider argument attacks. In his original 1980 reply to Searle, Fodor allows Searle is certainly right that “instantiating the same program as the brain does is not, in and of itself, sufficient for having those propositional attitudes characteristic of the organism that has the brain.” But Fodor holds that Searle is wrong about the robot reply. A computer might have propositional attitudes if it has the right causal connections to the world—but those are not ones mediated by a man sitting in the head of the robot. We don't know what the right causal connections are. Searle commits the fallacy of inferring from “the little man is not the right causal connection” that no causal linkage would succeed. There is considerable empirical evidence that mental processes involve “manipulation of symbols”; Searle gives us no alternative explanation (this is sometimes called Fodor's “Only Game in Town” argument for computational approaches). Since this time, Fodor has written extensively on what the connections must be between a brain state and the world for the state to have intentional (representational) properties.

In a later piece, “Yin and Yang in the Chinese Room” (in Rosenthal 1991 pp.524–525), Fodor substantially revises his 1980 view. He distances himself from his earlier version of the robot reply, and holds instead that “instantiation” should be defined in such a way that the symbol must be the proximate cause of the effect—no intervening guys in a room. So Searle in the room is not an instantiation of a Turing Machine, and “Searle's setup does not instantiate the machine that the brain instantiates.” He concludes: “…Searle's setup is irrelevant to the claim that strong equivalence to a Chinese speaker's brain is ipso facto sufficient for speaking Chinese.” Searle says of Fodor's move, “Of all the zillions of criticisms of the Chinese Room argument, Fodor's is perhaps the most desperate. He claims that precisely because the man in the Chinese room sets out to implement the steps in the computer program, he is not implementing the steps in the computer program. He offers no argument for this extraordinary claim.” (in Rosenthal 1991, p. 525)

In a 1986 paper, Georges Rey advocated a combination of the system and robot reply, after noting that the original Turing Test is insufficient as a test of intelligence and understanding, and that the isolated system Searle describes in the room is certainly not functionally equivalent to a real Chinese speaker sensing and acting in the world. In a 2002 second look, “Searle's Misunderstandings of Functionalism and Strong AI”, Rey again defends functionalism against Searle, and in the particular form Rey calls the “computational-representational theory of thought—CRTT”. CRTT is not committed to attributing thought to just any system that passes the Turing Test (like the Chinese Room). Nor is it committed to a conversation manual model of understanding natural language. Rather, CRTT is concerned with intentionality, natural and artificial (the representations in the system are semantically evaluable—they are true or false, hence have aboutness). Searle saddles functionalism with the “blackbox” character of behaviorism, but functionalism cares how things are done. Rey sketches “a modest mind”—a CRTT system that has perception, can make deductive and inductive inferences, makes decisions on the basis of goals and representations of how the world is, and can process natural language by converting to and from its native representations. To explain the behavior of such a system we would need to use the same attributions needed to explain the behavior of a normal Chinese speaker.

Tim Crane discusses the Chinese Room argument in his 1991 book, The Mechanical Mind. He cites the Churchlands' luminous room analogy, but then goes on to argue that in the course of operating the room, Searle would learn the meaning of the Chinese: “…if Searle had not just memorized the rules and the data, but also started acting in the world of Chinese people, then it is plausible that he would before too long come to realize what these symbols mean” (127). Crane appears to end with a version of the Robot Reply: “Searle's argument itself begs the question by (in effect) just denying the central thesis of AI—that thinking is formal symbol manipulation. But Searle's assumption, none the less, seems to me correct … the proper response to Searle's argument is: sure, Searle-in-the-room, or the room alone, cannot understand Chinese. But if you let the outside world have some impact on the room, meaning or ‘semantics’ might begin to get a foothold. But of course, this concedes that thinking cannot be simply symbol manipulation.” (129)

Margaret Boden 1988 also argues that Searle mistakenly supposes programs are pure syntax. But programs bring about the activity of certain machines: “The inherent procedural consequences of any computer program give it a toehold in semantics, where the semantics in question is not denotational, but causal.” (250) Thus a robot might have causal powers that enable it to refer to a restaurant.

Stevan Harnad also finds important our sensory and motor capabilities: “Who is to say that the Turing Test, whether conducted in Chinese or in any other language, could be successfully passed without operations that draw on our sensory, motor, and other higher cognitive capacities as well? Where does the capacity to comprehend Chinese begin and the rest of our mental competence leave off?” Harnad believes that symbolic functions must be grounded in “robotic” functions that connect a system with the world. And he thinks this counts against symbolic accounts of mentality, such as Jerry Fodor's, and, one suspects, the approach of Roger Schank that was Searle's original target.

4.3 The Brain Simulator Reply

Consider a computer that operates in quite a different manner than the usual AI program with scripts and operations on strings of linguistic symbols. The Brain Simulator reply asks us to suppose instead the program simulates the actual sequence of nerve firings that occur in the brain of a native Chinese language speaker when that person understands Chinese—every nerve, every firing. Since the computer then works the very same way as the brain of a native Chinese speaker, processing information in just the same way, it will understand Chinese. Paul and Patricia Churchland have set out a reply along these lines, discussed below.

In response to this, Searle argues that it makes no difference. He suggests a variation on the brain simulator scenario: suppose that in the room the man has a huge set of valves and water pipes, in the same arrangement as the neurons in a native Chinese speaker's brain. The program now tells the man which valves to open in response to input. Searle claims that it is obvious that there would be no understanding of Chinese. A simulation of brain activity is not the real thing. Searle's response is close to the scenarios in Leibniz’ Mill and the Chinese Gym (see also Maudlin 1989).
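For readers who want a concrete picture, the following is a toy sketch (the weights, threshold, and initial state are invented; nothing here models a real brain) of what “simulating the sequence of nerve firings” amounts to computationally. Whether run in silicon, in Searle's water pipes, or on paper, the simulation is this kind of formal updating of values:

```python
# Toy sketch of "simulating nerve firings": numbers updated against thresholds,
# step by step. Weights, threshold, and the initial state are invented.

import numpy as np

weights = np.array([[0.0, 1.2, 0.0],
                    [0.0, 0.0, 1.1],
                    [1.0, 0.0, 0.0]])   # connection strengths among three "neurons"
threshold = 1.0
state = np.array([1.0, 0.0, 0.0])       # "neuron" 0 starts at the firing threshold

for step in range(6):
    fired = (state >= threshold).astype(float)   # which units fire at this step
    state = weights.T @ fired                    # activation passed along connections
    print(f"step {step}: firing pattern = {fired.astype(int)}")
```

The dispute between Searle and proponents of the Brain Simulator Reply is precisely over whether running such a process, scaled up to every neuron and every firing, would thereby understand anything.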

Under the rubric “The Combination Reply”, Searle also considers a system with the features of all three of the preceding: a robot with a digital brain simulating computer in its cranium, such that the system as a whole behaves indistinguishably from a human. Since the normal input to the brain is from sense organs, it is natural to suppose that most advocates of the Brain Simulator Reply have in mind such a combination of brain simulation, Robot, and Systems Reply. Some (e.g. Rey 1986) argue it is reasonable to attribute intentionality to such a system as a whole. Searle agrees that it would be reasonable to attribute understanding to such an android system—but only as long as you don't know how it works. As soon as you know the truth—it is a computer, uncomprehendingly manipulating symbols on the basis of syntax, not meaning—you would cease to attribute intentionality to it.

(One assumes this would be true even if it were one's spouse, with whom one had built a life-long relationship, that was revealed to hide a silicon secret. Science fiction stories, including episodes of Rod Serling's television series The Twilight Zone, have been based on such possibilities; Steven Pinker (1997) mentions one episode in which the secret was known from the start, but the protagonist developed a romantic relationship with the android.)

On its tenth anniversary the Chinese Room argument was featured in the general science periodical Scientific American. Leading the opposition to Searle's lead article in that issue were philosophers Paul and Patricia Churchland. The Churchlands agree with Searle that the Chinese Room does not understand Chinese, but hold that the argument itself exploits our ignorance of cognitive and semantic phenomena. They raise a parallel case of “The Luminous Room” where someone waves a magnet and argues that the absence of resulting visible light shows that Maxwell's electromagnetic theory is false. The Churchlands advocate a view of the brain as a connectionist system, a vector transformer, not a system manipulating symbols according to structure-sensitive rules. The system in the Chinese Room uses the wrong computational strategies. Thus they agree with Searle against traditional AI, but presumably would endorse what Searle calls “the Brain Simulator Reply”, arguing that, as with the Luminous Room, our intuitions fail us when considering such a complex system, and it is a fallacy to move from part to whole: “… no neuron in my brain understands English, although my whole brain does.”

In his 1991 book, Microcognition, Andy Clark holds that Searle is right that a computer running Schank's program does not know anything about restaurants, “at least if by ‘know’ we mean anything like ‘understand’”. But Searle thinks that this would apply to any computational model, while Clark holds that Searle is wrong about connectionist models. Clark's interest is thus in the brain-simulator reply. The brain thinks in virtue of its physical properties. What physical properties of the brain are important? Clark answers that what is important about brains are “variable and flexible substructures” which conventional AI systems lack. But that doesn't mean computationalism or functionalism is false. It depends on what level you take the functional units to be. Clark defends “microfunctionalism”—one should look to a fine-grained functional description, e.g. neural net level. Clark cites William Lycan approvingly contra Block's absent qualia objection—yes, there can be absent qualia, if the functional units are made large. But that does not constitute a refutation of functionalism generally. So Clark's views are not unlike the Churchlands', conceding that Searle is right about Schank and symbolic-level processing systems, but holding that he is mistaken about connectionist systems.

Similarly Ray Kurzweil (2002) argues that Searle's argument could be turned around to show that human brains cannot understand—the brain succeeds by manipulating neurotransmitter concentrations and other mechanisms that are in themselves meaningless. In criticism of Searle's response to the Brain Simulator Reply, Kurzweil says: “So if we scale up Searle's Chinese Room to be the rather massive ‘room’ it needs to be, who's to say that the entire system of a hundred trillion people simulating a Chinese Brain that knows Chinese isn't conscious? Certainly, it would be correct to say that such a system knows Chinese. And we can't say that it is not conscious any more than we can say that about any other process. We can't know the subjective experience of another entity…”

4.4 The Other Minds Reply

Related to the preceding is The Other Minds Reply: “How do you know that other people understand Chinese or anything else? Only by their behavior. Now the computer can pass the behavioral tests as well as they can (in principle), so if you are going to attribute cognition to other people you must in principle also attribute it to computers.”

Searle's reply to this is very short: we presuppose that other people have minds in our dealings with them, just as in physics we presuppose the existence of objects. Critics hold that if the evidence we have that humans understand is the same as the evidence we might have that a visiting alien understands, which is the same as the evidence that a robot understands, the presuppositions we may make in the case of our own species are not relevant, for presuppositions are sometimes false. For similar reasons, Turing, in proposing the Turing Test, was specifically worried about our presuppositions and chauvinism. If the reasons for the presuppositions regarding humans are pragmatic, in that they enable us to predict the behavior of humans and to interact effectively with them, perhaps the presupposition could apply equally to computers (similar considerations are pressed by Dennett, in his discussions of what he calls the Intentional Stance).

Hans Moravec, director of the Robotics laboratory at Carnegie Mellon University, and author of Robot: Mere Machine to Transcendent Mind, argues that Searle's position merely reflects intuitions from traditional philosophy of mind that are out of step with the new cognitive science. Moravec endorses a version of the Other Minds reply. It makes sense to attribute intentionality to machines for the same reasons it makes sense to attribute it to humans; his “interpretative position” is similar to the views of Daniel Dennett. Moravec goes on to note that one of the things we attribute to others is the ability to make attributions of intentionality, and then we make such attributions to ourselves. It is such self-representation that is at the heart of consciousness.

Presumably the reason that Searle thinks we can disregard the evidence in the case of robots and computers is that we know that their processing is syntactic, and this fact trumps all other considerations. Indeed, Searle believes this is the larger point that the Chinese Room merely illustrates. This larger point is addressed in the Syntax and Semantics section below.

4.5 The Intuition Reply

Many responses to the Chinese Room argument have noted that, as with Leibniz’ Mill, the argument appears to be based on intuition: the intuition that a computer (or the man in the room) cannot think or have understanding. For example, Ned Block (1980) in his original BBS commentary says “Searle's argument depends for its force on intuitions that certain entities do not think.” But, Block argues, (1) intuitions sometimes can and should be trumped and (2) perhaps we need to bring our concept of understanding in line with a reality in which certain computer robots belong to the same natural kind as humans. Similarly Margaret Boden (1988) points out that we can't trust our untutored intuitions about how mind depends on matter; developments in science may change our intuitions. Indeed, elimination of bias in our intuitions was what motivated Turing (1950) to propose the Turing Test, a test that was blind to the physical character of the system replying to questions. Some of Searle's critics in effect argue that he has merely pushed the reliance on intuition back, into the room.

Critics argue that our intuitions regarding both intelligence and understanding may be unreliable, and perhaps incompatible even with current science. With regard to understanding, Steven Pinker, in How the Mind Works (1997), holds that “… Searle is merely exploring facts about the English word understand …. People are reluctant to use the word unless certain stereotypical conditions apply…” But, Pinker claims, nothing scientifically speaking is at stake. Pinker objects to Searle's appeal to the “causal powers of the brain” by noting that the apparent locus of the causal powers is the “patterns of interconnectivity that carry out the right information processing”. Pinker ends his discussion by citing a science fiction story in which Aliens, anatomically quite unlike humans, cannot believe that humans think with meat in their heads. The Aliens' intuitions are unreliable—presumably ours may be so as well.

Other critics are also concerned with how our understanding of understanding bears on the Chinese Room argument. In their paper “A Chinese Room that Understands”, AI researchers Simon and Eisenstadt (2002) argue that whereas Searle refutes “logical strong AI”, the thesis that a program that passes the Turing Test will necessarily understand, Searle's argument does not impugn “empirical strong AI”—the thesis that it is possible to program a computer that convincingly satisfies ordinary criteria of understanding. They hold however that it is impossible to settle these questions “without employing a definition of the term ‘understand’ that can provide a test for judging whether the hypothesis is true or false”. They cite W.V.O. Quine's Word and Object as showing that there is always empirical uncertainty in attributing understanding to humans. On their view, the Chinese Room is a Clever Hans trick (Clever Hans was a horse who appeared to clomp out the answers to simple arithmetic questions, but it was discovered that Hans could detect unconscious cues from his trainer). Similarly, the man in the room doesn't understand Chinese, and could be exposed by watching him closely. (Simon and Eisenstadt do not explain just how this would be done, or how it would affect the argument.) Citing the work of Rudolf Carnap, Simon and Eisenstadt argue that to understand is not just to exhibit certain behavior, but to use “intensions” that determine extensions, and that one can see in actual programs that they do use appropriate intensions. They discuss three actual AI programs, and defend various attributions of mentality to them, including understanding, and conclude that computers understand; they learn “intensions by associating words and other linguistic structure with their denotations, as detected through sensory stimuli”. And since we can see exactly how the machines work, “it is, in fact, easier to establish that a machine exhibits understanding than to establish that a human exhibits understanding….” Thus, they conclude, the evidence for empirical strong AI is overwhelming.

Similarly, Daniel Dennett in his original 1980 response to Searle's argument called it “an intuition pump”. In a later paper, Dennett elaborated on concerns about our intuitions regarding intelligence. Dennett 1987 (“Fast Thinking”) expressed concerns about the slow speed at which the Chinese Room would operate, and he has been joined by several other commentators, including Tim Maudlin, David Chalmers, and Steven Pinker. The operator of the Chinese Room may eventually produce appropriate answers to Chinese questions. But slow thinkers are stupid, not intelligent—and in the wild, they may well end up dead. Dennett argues that “speed … is ‘of the essence’ for intelligence. If you can't figure out the relevant portions of the changing environment fast enough to fend for yourself, you are not practically intelligent, however complex you are” (326). Thus Dennett relativizes intelligence to processing speed relative to current environment. Tim Maudlin (1989) disagrees. Maudlin considers the time-scale problem pointed to by other writers, and concludes, contra Dennett, that the extreme slowness of a computational system does not violate any necessary conditions on thinking or consciousness.

Steven Pinker (1997) also holds that Searle relies on untutored intuitions. Pinker endorses the Churchlands' (1990) counterexample of an analogous thought experiment of waving a magnet and not generating light, noting that this outcome would not disprove Maxwell's theory that light consists of electromagnetic waves. Pinker holds that the key issue is speed: “The thought experiment slows down the waves to a range to which we humans no longer see them as light. By trusting our intuitions in the thought experiment, we falsely conclude that rapid waves cannot be light either. Similarly, Searle has slowed down the mental computations to a range in which we humans no longer think of it as understanding (since understanding is ordinarily much faster)” (94–95). Howard Gardiner, a supporter of Searle's conclusions regarding the room, makes a similar point about understanding. Gardiner addresses the Chinese Room argument in his book The Mind's New Science (1985, 171–177). Gardiner considers all the standard replies to the Chinese Room argument and concludes that Searle is correct about the room: “…the word understand has been unduly stretched in the case of the Chinese room ….” (175).

Thus several in this group of critics argue that speed affects our willingness to attribute intelligence and understanding to a slow system, such as that in the Chinese Room. The result may simply be that our intuitions regarding the Chinese Room are unreliable, and thus the man in the room, in implementing the program, may understand Chinese despite intuitions to the contrary (Maudlin and Pinker). Or it may be that the slowness marks a crucial difference between the simulation in the room and what a fast computer does, such that the man is not intelligent while the computer system is (Dennett).

5. The Larger Philosophical Issues

Searle believes the Chinese Room argument supports a larger point, which explains the failure of the Chinese Room to produce understanding. Searle argued that programs implemented by computers are just syntactical. Computer operations are “formal” in that they respond only to the explicit form of the strings of symbols, not to the meaning of the symbols. Minds on the other hand have states with meaning, mental contents. We associate meanings with the words or signs in language. We respond to signs because of their meaning, not just their physical appearance. In short, we understand. But, and according to Searle this is the key point, “Syntax is not by itself sufficient for, nor constitutive of, semantics.” So although computers may be able to manipulate syntax to produce appropriate responses to natural language input, they do not understand the sentences they receive or output, for they cannot associate meanings with the words.

Searle (1984 and later writings) presents a three-premise argument that, because syntax is not sufficient for semantics, programs cannot produce minds (one way to regiment the derivation is sketched after the list below).

  • Programs are purely formal (syntactic).
  • Human minds have mental contents (semantics).
  • Syntax by itself is neither constitutive of, nor sufficient for, semantic content.
  • Therefore, programs by themselves are not constitutive of nor sufficient for minds.
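
The derivation can be set out in a standard first-order way. The following sketch is our own gloss, not Searle's notation: read Prog(x) as “x is a program”, Syn(x) as “x is constituted purely by syntactic properties”, SuffSem(x) as “x suffices for semantic content”, and SuffMind(x) as “x suffices for a mind”; the second premise is unpacked as “whatever suffices for a mind suffices for semantic content”, which is one natural way to render “minds have mental contents” so that the conclusion follows.

    \[
    \begin{aligned}
    &\text{(P1)} && \forall x\,\bigl(\mathrm{Prog}(x) \rightarrow \mathrm{Syn}(x)\bigr)\\
    &\text{(P2)} && \forall x\,\bigl(\mathrm{SuffMind}(x) \rightarrow \mathrm{SuffSem}(x)\bigr)\\
    &\text{(P3)} && \forall x\,\bigl(\mathrm{Syn}(x) \rightarrow \neg\,\mathrm{SuffSem}(x)\bigr)\\
    &\text{(C)}  && \forall x\,\bigl(\mathrm{Prog}(x) \rightarrow \neg\,\mathrm{SuffMind}(x)\bigr)
    \end{aligned}
    \]

From (P1) and (P3), programs do not suffice for semantic content; contraposing (P2), whatever fails to suffice for semantic content fails to suffice for a mind; chaining the two steps yields (C).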

The Chinese Room thought experiment itself is the support for the third premise. The claim that syntactic manipulation is not sufficient for meaning or thought is a significant issue, with wider implications than AI, or attributions of understanding. Prominent theories of mind hold that human cognition generally is computational. In one form, it is held that thought involves operations on symbols in virtue of their physical properties. On an alternative connectionist account, the computations are on “subsymbolic” states. If Searle is right, not only Strong AI but also these main approaches to understanding human cognition are misguided.

As we have seen, Searle holds that the Chinese Room scenario shows that one cannot get semantics from syntax alone. In formal systems, rules are given for syntax, and this procedure appears to be quite independent of semantics. One specifies the basic symbol set and some rules for manipulating strings to produce new ones. These rules are purely formal or syntactic—they are applied to strings of symbols solely in virtue of their syntax or form. A semantics, if any, for the symbol system must be provided separately. And if one wishes to show that interesting additional relationships hold between the syntactic operations and semantics, such as that the symbol manipulations preserve truth, one must provide sometimes complex meta-proofs to show this. So on the face of it, semantics is quite independent of syntax for artificial languages, and one cannot get semantics from syntax alone. “Formal symbols by themselves can never be enough for mental contents, because the symbols, by definition, have no meaning (or interpretation, or semantics) except insofar as someone outside the system gives it to them” (Searle 1989, 45).

Searle's point is clearly true of the formal systems of logicians. When we move from formal systems to computational systems, the situation is more complex. As many of Searle's critics (e.g. Cole 1984, Dennett 1987, Boden 1988, and Chalmers 1996) have noted, a computer running a program is not the same as “syntax alone”. A computer is a causal system that changes state in accord with a program. The states are syntactically specified by programmers, but they are fundamentally states of a complex causal system embedded in the real world. This is quite different from the abstract formal systems that logicians study. Dennett notes that no “computer program by itself” (Searle's language)—e.g. a program lying on a shelf—can cause anything, even simple addition, let alone mental states. The program must be running. Chalmers (1996) offers a parody in which it is reasoned that recipes are syntactic, syntax is not sufficient for crumbliness, cakes are crumbly, so implementation of a recipe is not sufficient for making a cake. Dennett (1987) sums up the issue: “Searle's view, then, comes to this: take a material object (any material object) that does not have the power of causing mental phenomena; you cannot turn it into an object that does have the power of producing mental phenomena simply by programming it—reorganizing the conditional dependencies of transitions between its states.” Dennett's view is the opposite: programming “is precisely what could give something a mind”. But Dennett claims that in fact it is “empirically unlikely that the right sorts of programs can be run on anything but organic, human brains” (325–6).

A further related complication is that it is not clear that computers perform syntactic operations in quite the same sense that a human does—it is not clear that a computer understands syntax or syntactic operations. A computer does not know that it is manipulating 1's and 0's. A computer does not recognize that its binary data strings have a certain form, and thus that certain syntactic rules may be applied to them, unlike the man inside the Chinese Room. Inside a computer, there is nothing that literally reads input data, or that “knows” what symbols are. Instead, there are millions of transistors that change states. A sequence of voltages causes operations to be performed. We may choose to interpret these voltages as binary numerals and the voltage changes as syntactic operations, but a computer does not interpret its operations as syntactic or any other way. So perhaps a computer does not need to make the move from syntax to semantics that Searle objects to; it needs to move from complex causal connections to semantics. Furthermore, any causal system is describable as performing syntactic operations—if we interpret a light square as logical “0” and a dark square as logical “1”, then a kitchen toaster may be described as a device that rewrites logical “0”s as logical “1”s.
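
The observer-relativity of such descriptions can be made concrete with a toy Python sketch (ours, not drawn from the literature; the state names and mappings are invented for illustration). The same physical history supports incompatible “computations”, depending entirely on the interpretation an observer chooses:

    # Toy sketch (invented for illustration): the same physical history can be read
    # as different "computations" depending on the observer's chosen mapping.

    physical_history = ["dark", "light", "light", "dark"]  # e.g., panel squares over time

    def interpret_as_bits(history, mapping):
        """Render a physical history as a bit string under a chosen interpretation."""
        return "".join(str(mapping[state]) for state in history)

    reading_one = interpret_as_bits(physical_history, {"dark": 1, "light": 0})  # "1001"
    reading_two = interpret_as_bits(physical_history, {"dark": 0, "light": 1})  # "0110"

    # Under one mapping the device "rewrites 0s as 1s"; under the other, the reverse.
    # The syntactic description lives in the mapping, not in the device itself.
    print(reading_one, reading_two)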

In the 1990s, Searle began to use considerations related to these to argue that computational views are not just false, but lack a clear sense. Computation, or syntax, is “observer-relative”, not an intrinsic feature of reality—“…you can assign a computational interpretation to anything” (Searle 2002b, p. 17), even the molecules in the paint on the wall. Since nothing is intrinsically computational, one cannot have a scientific theory that reduces the mental, which is not observer-relative, to computation, which is. “Computation exists only relative to some agent or observer who imposes a computational interpretation on some phenomenon. This is an obvious point. I should have seen it ten years ago, but I did not.” (Searle 2002b, p.17, originally published 1993).

Critics note that walls are not computers; unlike a wall, a computer goes through state-transitions that are counterfactually described by a program (Chalmers 1996, Block 2002, Haugeland 2002). In his 2002 paper, Block addresses the question of whether a wall is a computer (in reply to Searle's charge that anything that maps onto a formal system is a formal system, whereas minds are quite different). Block denies that whether or not something is a computer depends entirely on our interpretation. Block notes that Searle ignores the counterfactuals that must be true of an implementing system. Haugeland (2002) makes the similar point that an implementation will be a causal process that reliably carries out the operations—and they must be the right causal powers. Block concludes that Searle's arguments fail, but he concedes that they “do succeed in sharpening our understanding of the nature of intentionality and its relation to computation and representation” (78).

Rey (2002) also addresses Searle's recent arguments that syntax and symbols are observer-relative properties, not physical. Searle infers this from the fact that they are not defined in physics; Rey counters that it does not follow that they are observer-relative. Rey argues that Searle also misunderstands what it is to realize a program. Rey endorses Chalmers' reply to Putnam: a realization is not just a structural mapping, but involves causation, supporting counterfactuals. “This point is missed so often, it bears repeating: the syntactically specifiable objects over which computations are defined can and standardly do possess a semantics; it's just that the semantics is not involved in the specification.” States of a person have their semantics in virtue of computational organization and their causal relations to the world. Rey concludes: Searle “simply does not consider the substantial resources of functionalism and Strong AI.” (222) A plausibly detailed story would defuse negative conclusions drawn from the superficial sketch of the system in the Chinese Room.

John Haugeland (2002) argues that there is a sense in which a processor must intrinsically understand the commands in the programs it runs: it executes them in accord with the specifications. “The only way that we can make sense of a computer as executing a program is by understanding its processor as responding to the program prescriptions as meaningful” (385). Thus operation symbols have meaning to a system. Haugeland goes on to draw a distinction between narrow and wide systems. He argues that data can have semantics in the wide system that includes representations of external objects produced by transducers. In passing, Haugeland makes the unusual claim, argued for elsewhere, that genuine intelligence and semantics presuppose “the capacity for a kind of commitment in how one lives” which is non-propositional—that is, love (cp. Steven Spielberg's 2001 film Artificial Intelligence: AI).

To Searle's claim that syntax is observer-relative, that the molecules in a wall might be interpreted as implementing the Wordstar program (an early word processing program) because “there is some pattern in the molecule movements which is isomorphic with the formal structure of Wordstar” (Searle 1990b, p. 27), Haugeland counters that “the very idea of a complex syntactical token … presupposes specified processes of writing and reading….” The tokens must be systematically producible and retrievable. So no random isomorphism or pattern somewhere (e.g. on some wall) is going to count, and hence syntax is not observer-relative.

With regard to the question of whether one can get semantics from syntax, AI futurist Ray Kurzweil (The Age of Spiritual Machines) holds in a 2002 follow-up book that it is a red herring to focus on traditional symbol-manipulating computers. Kurzweil agrees with Searle that existent computers do not understand language—as evidenced by the fact that they can't engage in convincing dialog. But that failure does not bear on the capacity of future computers based on different technology. Kurzweil claims that Searle fails to understand that future machines will use “chaotic emergent methods that are massively parallel”. This claim appears to be similar to that of connectionists, such as Andy Clark, and the position taken by the Churchlands in their 1990 Scientific American article.

Apart from Haugeland's claim that processors understand program instructions, Searle's critics can agree that computers no more understand syntax than they understand semantics, although, like all causal engines, a computer has syntactic descriptions. And while it is often useful to programmers to treat the machine as if it performed syntactic operations, it is not always so: sometimes the characters programmers use are just switches that make the machine do something, for example, make a given pixel on the computer screen turn red. Thus it is not clear that Searle is correct when he says a digital computer is just “a device which manipulates symbols”. Computers are complex causal engines, and syntactic descriptions are useful in order to structure the causal interconnections in the machine. AI programmers face many tough problems, but one can hold that they do not have to get semantics from syntax. If they are to get semantics, they must get it from causality.

Two main approaches have developed that explain meaning in terms of causal connections. The internalist approaches, such as Schank's conceptual representation approach, and Conceptual Role Semantics, hold that a state of a physical system gets its semantics from causal connections to other states of the system. Thus a state of a computer might represent “kiwi” because it is connected to “bird” and “flightless” nodes, and perhaps also to images of prototypical kiwis. The state that represents the property of being “flightless” might get its content from a Negation-operator modifying a representation of “capable of airborne self-propulsion”, and so forth, to form a vast connected conceptual network, a kind of mental dictionary.
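
A cartoon of the internalist picture can be sketched in a few lines of Python (the node names and link types are invented for illustration; no actual conceptual-role theory is this simple). The point is only that, on this view, a state's content is fixed by its links to other internal states:

    # Cartoon of a conceptual-role network (node names invented for illustration):
    # each concept's content is fixed by its links to other concepts, with no
    # appeal to connections to the outside world.

    conceptual_network = {
        "kiwi":       {"is_a": ["bird"], "has": ["flightless"], "prototype": ["kiwi_image"]},
        "bird":       {"is_a": ["animal"], "has": ["feathered"]},
        "flightless": {"negation_of": ["capable_of_airborne_self_propulsion"]},
    }

    def conceptual_role(concept, network):
        """Return the concepts directly linked to the given node -- its 'role' in the web."""
        return [target for targets in network.get(concept, {}).values() for target in targets]

    print(conceptual_role("kiwi", conceptual_network))
    # ['bird', 'flightless', 'kiwi_image'] -- on the internalist view, this web of
    # links is what makes the state a representation of kiwis.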

Externalist approaches developed by Dennis Stampe, Fred Dretske, Hilary Putnam, Jerry Fodor, Ruth Millikan, and others, hold that states of a physical system get their content through causal connections to the external reality they represent. Thus, roughly, a system with a KIWI concept is one that has a state it uses to represent the presence of kiwis in the external environment. This kiwi-representing state can be any state that is appropriately causally connected to the presence of kiwis. Depending on the system, the kiwi-representing state could be a state of a brain, or of an electrical device such as a computer, or even of a hydraulic system. The internal representing state can then in turn play a causal role in determining the behavior of the system. For example, Rey (1986) endorses an indicator semantics along the lines of the work of Dennis Stampe (1977) and Fodor's Psychosemantics. Semantics that emphasize causal connection with the world fit well with the Robot Reply. A computer in a robot body might have just the causal connections that could allow its inner syntactic states to have the semantic property of representing states of things in its environment.
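
By contrast, a minimal Python sketch of the externalist (indicator-style) picture might look like the following; it is our illustration, not Dretske's or Fodor's own formalism, and the scenes and detector are invented. Here the internal state gets its content from what it reliably co-varies with in the world, not from links to other internal states:

    # Minimal indicator-semantics sketch (invented illustration): an internal state
    # represents kiwis because it reliably co-varies with the presence of kiwis.

    def kiwi_detector(scene):
        """Internal state that switches on exactly when a kiwi is present in the scene."""
        return "kiwi" in scene

    environment = [{"grass", "kiwi"}, {"grass"}, {"rock", "kiwi"}]
    internal_states = [kiwi_detector(scene) for scene in environment]

    # On the externalist story, it does not matter whether the detector is realized
    # in a brain, a robot, or a hydraulic system -- the causal connection does the work.
    print(internal_states)  # [True, False, True]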

Thus there are at least two families of theories (and marriages of the two, as in Block 1986) about how semantics might depend upon causal connections. Both of these attempt to provide accounts that are substance neutral: states of suitably organized causal systems can have content, no matter what the systems are made of. On these theories a computer could have states that have meaning. It is not necessary that the computer be aware of its own states and know that they have meaning, nor that any outsider appreciate the meaning of the states. On either of these accounts meaning depends upon the (possibly complex) causal connections, and digital computers are systems designed to have states that have just such complex causal dependencies. It should be noted that Searle does not subscribe to these theories of semantics. Instead, Searle's discussions of linguistic meaning have often centered on the notion of intentionality .

Intentionality is the property of being about something, having content. In the 19th Century, psychologist Franz Brentano re-introduced this term from Medieval philosophy and held that intentionality was the “mark of the mental”. Beliefs and desires are intentional states: they have propositional content (one believes that p, one desires that p). Searle's views regarding intentionality are complex; of relevance here is that he makes a distinction between the original or intrinsic intentionality of genuine mental states, and the derived intentionality of language. A written or spoken sentence only has derivative intentionality insofar as it is interpreted by someone. It appears that on Searle's view, original intentionality can at least potentially be conscious. Searle then argues that the distinction between original and derived intentionality applies to computers. We can interpret the states of a computer as having content, but the states themselves do not have original intentionality. Many philosophers, including Fodor 2009, endorse this intentionality dualism.

In a section of her 1988 book, Computer Models of the Mind, Margaret Boden notes that intentionality is not well understood—a reason not to put too much weight on arguments that turn on intentionality. Furthermore, insofar as we understand the brain, we focus on informational functions, not unspecified causal powers of the brain: “…from the psychological point of view, it is not the biochemistry as such which matters but the information-bearing functions grounded in it.” (241) Respondents to Searle have argued that he displays substance chauvinism, in holding that brains understand but systems made of silicon with comparable information-processing capabilities cannot, even in principle. Papers on both sides of the issue appeared, such as J. Maloney's 1987 paper “The Right Stuff”, defending Searle, and R. Sharvy's 1985 critique, “It Ain't the Meat, it's the Motion”. AI proponents such as Kurzweil (1999, see also Richards 2002) have continued to hold that AI systems can potentially have such mental properties as understanding, intelligence, consciousness and intentionality, and will exceed human abilities in these areas.

Other critics of Searle's position take intentionality more seriously than Boden does, but deny his distinction between original and derived intentionality, taking two main tacks. Dennett (1987, e.g.) argues that all intentionality is derived. Attributions of intentionality—to animals, other people, even ourselves—are instrumental and allow us to predict behavior, but they are not descriptions of intrinsic properties. As we have seen, Dennett is concerned about the slow speed of things in the Chinese Room, but he argues that once a system is working up to speed, it has all that is needed for intelligence and derived intentionality—and derived intentionality is the only kind that there is, according to Dennett. A machine can be an intentional system because intentional explanations work in predicting the machine's behavior. Dennett also suggests that Searle conflates intentionality with awareness of intentionality. In his syntax-semantic arguments, “Searle has apparently confused a claim about the underivability of semantics from syntax with a claim about the underivability of the consciousness of semantics from syntax” (336).

Others have noted that Searle's discussion has shown a shift from issues of intentionality and understanding to issues of consciousness. Searle links intentionality to awareness of intentionality, in that intentional states are at least potentially conscious. In his 1996 book, The Conscious Mind , David Chalmers notes that although Searle originally directs his argument against machine intentionality, it is clear from later writings that the real issue is consciousness, which Searle holds is a necessary condition of intentionality. It is consciousness that is lacking in digital computers. Chalmers uses thought experiments to argue that it is implausible that one system has some basic mental property (such as having qualia) that another system lacks, if it is possible to imagine transforming one system into the other, either gradually (as replacing neurons one at a time by digital circuits), or all at once, switching back and forth between flesh and silicon.

A second strategy regarding the attribution of intentionality is taken by externalist critics who in effect argue that intentionality is an intrinsic feature of states of physical systems that are causally connected with the world in the right way, independently of interpretation (see the preceding Syntax and Semantics section). Fodor's semantic externalism is influenced by Fred Dretske, but they come to different conclusions with regard to the semantics of states of computers. Over a period of years, Dretske has developed an historical account of meaning or mental content that would preclude attributing beliefs and understanding to most machines. But Dretske (1985) agrees with Searle that adding machines don't literally add; we do the adding, using the machines (recently Searle (2002) has been making the same point in terms of observer relative attributions of intentionality). Dretske emphasizes the crucial role of natural selection and learning in producing states that have genuine content. Human built systems will be, at best, like Swampmen (beings that result from a lightning strike in a swamp and by chance happen to be a molecule by molecule copy of some human being, say, you)—they appear to have intentionality or mental states, but do not, because such states require the right history. AI states will generally be counterfeits of real mental states; like counterfeit money, they may appear perfectly identical but lack the right pedigree. But Dretske's account of belief appears to make it distinct from conscious awareness of the belief or intentional state (if that is taken to require a higher order thought), and so would allow attribution of intentionality to systems that can learn.

Howard Gardiner endorses Zenon Pylyshyn's criticisms of Searle's view of the relation of brain and intentionality, as supposing that intentionality is somehow a stuff “secreted by the brain”, and Pylyshyn's own counter-thought experiment in which one's neurons are replaced one by one with integrated circuit workalikes (see also Cole and Foelber (1984) and Chalmers (1996), for explorations of neuron replacement scenarios). Gardiner holds that Searle owes us a more precise account of intentionality than Searle has given so far, and until then it is an open question whether AI can produce it, or whether it is beyond its scope. Gardiner concludes with the possibility that the dispute between Searle and his critics is not scientific, but (quasi?) religious.

Several critics have noted that there are metaphysical issues at stake in the original argument. The Systems Reply draws attention to the metaphysical problem of the relation of mind to body. It does this in holding that understanding is a property of the system as a whole, not the physical implementer. The Virtual Mind Reply holds that minds or persons—the entities that understand and are conscious—are more abstract than any physical system, and that there could be a many-to-one relation between minds and physical systems. Thus larger issues about personal identity and the relation of mind and body are in play in the debate between Searle and some of his critics.

Searle's view is that the problem of the relation of mind and body “has a rather simple solution. Here it is: Conscious states are caused by lower level neurobiological processes in the brain and are themselves higher level features of the brain.” (Searle 2002b, p. 9) In his early discussion of the CR, Searle spoke of the causal powers of the brain. Thus his view appears to be that brain states cause consciousness and understanding, and “consciousness is just a feature of the brain” (ibid).

Consciousness and understanding are features of persons, so it appears that Searle accepts a metaphysics in which I, my conscious self, am identical with my brain—a form of mind-brain identity theory. This very concrete metaphysics is reflected in Searle's original presentation of the CR argument, in which Strong AI was described by him as the claim that “the appropriately programmed computer really is a mind” (Searle 1980). This is an identity claim, and it has odd consequences. If A and B are identical, any property of A is a property of B. Computers are physical objects. Some computers weigh 6 lbs and have stereo speakers. So the claim that Searle called Strong AI would entail that some minds weigh 6 lbs and have stereo speakers. However, it seems clear that while humans may weigh 150 pounds, human minds do not weigh 150 pounds. This suggests that neither bodies nor machines can literally be minds. It appears that minds are more abstract than that, and at least one version of the claim that Searle calls Strong AI is metaphysically untenable on the face of it, apart from any thought-experiments.
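
The oddity can be displayed as a short application of Leibniz's Law (the indiscernibility of identicals); the regimentation below is ours, with m a mind and c a suitably programmed computer:

    \[
    \begin{aligned}
    &\text{(1)} && m = c && \text{the Strong AI identity claim}\\
    &\text{(2)} && \forall F\,\bigl(F(c) \rightarrow F(m)\bigr) && \text{Leibniz's Law, from (1)}\\
    &\text{(3)} && \mathit{WeighsSixPounds}(c) && \text{some computers weigh 6 lbs}\\
    &\text{(4)} && \mathit{WeighsSixPounds}(m) && \text{from (2) and (3)}
    \end{aligned}
    \]

Since (4) is not something we are prepared to say of minds, it is the identity claim (1) that looks untenable.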

Searle's CR argument was thus directed against the claim that a computer is a mind, that a suitably programmed digital computer understands language, or that its program does. Searle's thought experiment appeals to our strong intuition that someone who did exactly what the computer does would not thereby come to understand Chinese. As noted above, many critics have held that Searle is quite right on this point—no matter how you program a computer, the computer will not literally be a mind and the computer will not understand natural language. This however cannot show that nothing else understands—it cannot show that AI cannot produce understanding of natural language, for that is a different claim. It is not the claim that the computer understands language, or that the program or even the system does. It is the claim that AI creates understanding, with the thing doing the understanding unspecified. This understanding mind might not be identical with the computer, the program, or the system consisting of computer and program. Hauser (2002) accuses Searle of Cartesian bias in his inference from “it seems to me quite obvious that I understand nothing” to the conclusion that I really understand nothing. Normally, if one understands English or Chinese, one knows that one does—but not necessarily. Searle lacks the normal introspective awareness of understanding—but this, while abnormal, is not conclusive. Cole (1984) makes similar points.

The alternative metaphysics that was developing in the two decades before Searle introduced the Chinese Room argument was functionalism (q.v.). Functionalism is an alternative to the identity theory that is implicit in much of Searle's discussion. Functionalists hold that a mental state is what a mental state does —the causal (or “functional”) role that the state plays determines what state it is. A functionalist might hold that pain, for example, is a state that is typically caused by damage to the body, is located in a body-image, and is aversive. Functionalists distance themselves both from behaviorists and identity theorists. In contrast with the former, functionalists hold that the internal causal processes are important for the possession of mental states. Thus functionalists may reject the Turing Test. In contrast with identity theorists, functionalists hold that mental states might be had by a variety of physical systems (or non-physical, cf. Cole and Foelber 1984, in which a mind changes from a material to an immaterial implementation, neuron by neuron). Thus while an identity theorist will identify pain with certain neuron firings, a functionalist will identify pain with something more abstract and higher level, a functional role that might be had by many different types of underlying system. Functionalists accuse identity theorists of substance chauvinism. However, functionalism remains controversial: functionalism is vulnerable to the Chinese Nation type objections discussed above, and functionalists notoriously have trouble explaining qualia, a problem highlighted by the apparent possibility of an inverted spectrum, where qualitatively different states might have the same functional role (e.g. Block 1978, Maudlin 1989, Cole 2000 (Other Internet Resources)).

These controversial metaphysical issues bear on the central inference in the Chinese Room argument. From the intuition that in the CR thought experiment he would not understand Chinese by running a program, Searle infers that there is no understanding created by running a program. Clearly, whether that inference is valid or not turns on a metaphysical question about the identity of persons and minds. If the person understanding is not identical with the room operator, then the inference is unsound.

5.4 Simulation, duplication and evolution

In discussing the CR, Searle argues that there is an important distinction between simulation and duplication. No one would mistake a computer simulation of the weather for weather, or a computer simulation of digestion for real digestion. It is just as serious a mistake to confuse a computer simulation of understanding with understanding.

On the face of it, this seems true. But two problems emerge. It is not clear that we can always make the distinction between simulations and the real thing. Hearts are biological, if anything is. Are artificial hearts simulations of hearts? Or are they functional duplicates of hearts, hearts made from different materials? Walking is a biological phenomenon performed using limbs. Do those with artificial limbs walk? Or do they simulate walking? If the properties that are needed to be a certain kind of thing are high-level properties, anything sharing those properties will be a thing of that kind, even if it differs in its lower-level properties. Chalmers (1996) offers a principle governing when simulation is replication. Chalmers suggests that, contra Searle and Harnad (1989), a simulation of X can be an X, namely when the property of being an X is an organizational invariant, a property that depends only on the functional organization of the underlying system, and not on any other details.

Copeland (2002) argues that the Church-Turing thesis does not entail that the brain (or every machine) can be simulated by a universal Turing machine, for the brain (or other machine) might have primitive operations that are not simple clerical routines that can be carried out by hand (Sprevak 2007 raises a related point). Turing's 1938 Princeton thesis described such machines (“O-machines”). If the brain is such a machine, then: “There is no possibility of Searle's Chinese Room Argument being successfully deployed against the functionalist hypothesis that the brain instantiates an O-machine….” (120). Copeland then turns to the Brain Simulator Reply. He argues that Searle correctly notes that one cannot infer from “X simulates Y, and Y has property P” to the conclusion that X has Y's property P, for arbitrary P. But Copeland claims that Searle himself commits the simulation fallacy in extending the CR argument from traditional AI to apply against computationalism. The contrapositive of the inference is logically equivalent—X simulates Y, X does not have P, therefore Y does not—where P is “understands Chinese”. The faulty step is: the CR operator S simulates a neural net N, it is not the case that S understands Chinese, therefore it is not the case that N understands Chinese. Copeland also notes results by Siegelmann and Sontag (1994) showing that some connectionist networks cannot be simulated by a universal Turing machine (in particular, where connection weights are real numbers).
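
Copeland's point can be displayed schematically (the notation is ours). The inference pattern Searle rightly rejects, and its logically equivalent contrapositive, are:

    \[
    \begin{aligned}
    &\text{(invalid)} && \mathrm{Sim}(X,Y),\ P(Y)\ \therefore\ P(X)\\
    &\text{(contrapositive, equally invalid)} && \mathrm{Sim}(X,Y),\ \neg P(X)\ \therefore\ \neg P(Y)
    \end{aligned}
    \]

With P read as “understands Chinese”, X as the room operator S, and Y as the simulated network N, the second schema is exactly the step the extended argument needs: Sim(S, N), ¬Understands(S), therefore ¬Understands(N). If the first pattern is fallacious for arbitrary P, so is this one.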

There is another problem with the simulation-duplication distinction, arising from the process of evolution. Searle wishes to see original intentionality and genuine understanding as properties only of certain biological systems, presumably the product of evolution. Computers merely simulate these properties. At the same time, in the Chinese Room scenario, Searle maintains that a system can exhibit behavior just as complex as human behavior, simulating any degree of intelligence and language comprehension that one can imagine, and simulating any ability to deal with the world, yet not understand a thing. He also says that such behaviorally complex systems might be implemented with very ordinary materials, for example with tubes of water and valves.

This creates a biological problem, beyond the Other Minds problem noted by early critics of the CR argument. While we may presuppose that others have minds, evolution makes no such presuppositions. The selection forces that drive biological evolution select on the basis of behavior. Evolution can select for the ability to use information about the environment creatively and intelligently, as long as this is manifest in the behavior of the organism. If there is no overt difference in behavior in any set of circumstances between a system that understands and one that does not, evolution cannot select for genuine understanding. And so it seems that on Searle's account, minds that genuinely understand meaning have no advantage over less mysterious creatures that merely process information, using purely computational processes that we know exist on independent grounds. Thus a position that implies that simulations of understanding can be just as biologically well-adapted as the real thing leaves us with a puzzle about how and why systems with “genuine” understanding could evolve. Original intentionality and genuine understanding become epiphenomenal.

Leibniz and Searle had similar intuitions about the systems they consider in their respective thought experiments, Leibniz’ Mill and the Chinese Room. In both cases they consider a complex system composed of relatively simple operations, and note that it is impossible to see how understanding or consciousness could result. These simple arguments do us the service of highlighting the serious problems we face in understanding meaning and minds. The many issues raised by the Chinese Room argument may not be settled until there is a consensus about the nature of meaning, its relation to syntax, and about the nature of consciousness. There continues to be significant disagreement about what processes create meaning, understanding, and consciousness, as well as what can be proven a priori by thought experiments.

  • Block, N., 1978, ‘Troubles with Functionalism’, in C. W. Savage (ed.), Perception and Cognition: Issues in the Foundations of Psychology , Minneapolis: University of Minnesota Press. (Reprinted in many anthologies on philosophy of mind and psychology.)
  • –––, 1986, ‘Advertisement for a Semantics for Psychology’, Midwest Studies in Philosophy (Volume X), P.A. French, et al . (eds.), Minneapolis: University of Minnesota Press, 615–678.
  • –––, 2002, ‘Searle's Arguments Against Cognitive Science’, in Preston and Bishop (eds.) 2002.
  • Boden, M., 1988, Computer Models of the Mind , Cambridge: Cambridge University Press; pp. 238–251 were excerpted and published as ‘Escaping from the Chinese Room’, in The Philosophy of Artificial Intelligence , ed M. A. Boden, New York: Oxford University Press, 1990.
  • Cam, P., 1990, ‘Searle on Strong AI’, Australasian Journal of Philosophy , 68: 103–8.
  • Chalmers, D., 1992, ‘Subsymbolic Computation and the Chinese Room’, in J. Dinsmore (ed.), The Symbolic and Connectionist Paradigms: Closing the Gap , Hillsdale, NJ: Lawrence Erlbaum.
  • –––, 1996, The Conscious Mind , Oxford: Oxford University Press.
  • –––, 1996b, ‘Minds, machines, and mathematics’, Psyche , 2: 11–20.
  • Churchland, P., 1985, ‘Reductionism, Qualia, and the Direct Introspection of Brain States’, The Journal of Philosophy , LXXXII: 8–28.
  • Churchland, P. M. and Churchland, P. S., 1990, ‘Could a machine think?’, Scientific American , 262(1): 32–37.
  • Clark, A., 1991, Microcognition: Philosophy, Cognitive Science, and Parallel Distributed Processing , Cambridge, MA: MIT Press.
  • Cole, D., 1984, ‘Thought and Thought Experiments’, Philosophical Studies , 45: 431–44.
  • –––, 1991a, ‘Artificial Intelligence and Personal Identity’, Synthese , 88: 399–417.
  • –––, 1991b, ‘Artificial Minds: Cam on Searle’, Australasian Journal of Philosophy , 69: 329–33.
  • –––, 1994, ‘The Causal Powers of CPUs’, in E. Dietrich (ed.), Thinking Computers and Virtual Persons , New York: Academic Press
  • Cole, D. and Foelber, R., 1984, ‘Contingent Materialism’, Pacific Philosophical Quarterly , 65(1): 74–85.
  • Copeland, J., 2002, ‘The Chinese Room from a Logical Point of View’, in Preston and Bishop (eds.) 2002, 104-122.
  • Crane, Tim., 1996, The Mechanical Mind : A Philosophical Introduction to Minds, Machines and Mental Representation , London: Penguin.
  • Davis, Lawrence, 2001, ‘Functionalism, the Brain, and Personal Identity’, Philosophical Studies , 102(3): 259–279.
  • Dennett, D., 1978, ‘Toward a Cognitive Theory of Consciousness’, in Brainstorms: Philosophical Essays on Mind and Psychology , Cambridge, MA: MIT Press.
  • –––, 1981, ‘Where am I?’ in Brainstorms: Philosophical Essays on Mind and Psychology , Cambridge, MA: MIT Press, pp. 310–323.
  • –––, 1987, ‘Fast Thinking’, in The Intentional Stance , Cambridge, MA: MIT Press, 324–337.
  • Double, R., 1983, ‘Searle, Programs and Functionalism’, Nature and System , 5: 107–14.
  • Dretske, F. 1985, ‘Presidential Address’ (Central Division Meetings of the American Philosophical Association), Proceedings and Addresses of the American Philosophical Association , 59(1): 23–33.
  • Fodor, J., 1987, Psychosemantics , Cambridge, MA: MIT Press.
  • –––, 1991, ‘Yin and Yang in the Chinese Room’, in D. Rosenthal (ed.), The Nature of Mind , New York: Oxford University Press.
  • –––, 1992, A Theory of Content and other essays , Cambridge, MA: MIT Press.
  • –––, 2009, ‘Where is my Mind?’, London Review of Books , (31)3: 13–15.
  • Gardiner, H., 1987, The Mind's New Science: A History of the Cognitive Revolution , New York: Basic Books.
  • Hanley, R., 1997, The Metaphysics of Star Trek , New York: Basic Books.
  • Harnad, S., 1989, ‘Minds, Machines and Searle’, Journal of Experimental and Theoretical Artificial Intelligence , 1: 5–25.
  • –––, 2002, ‘Minds, Machines, and Searle2: What's Right and Wrong about the Chinese Room Argument’, in Preston and Bishop (eds.) 2002, 294–307.
  • Haugeland, J., 2002, ‘Syntax, Semantics, Physics’, in Preston and Bishop (eds.) 2002, 379–392.
  • Hauser, L., 1997, ‘Searle's Chinese Box: Debunking the Chinese Room Argument’, Minds and Machines , 7: 199–226.
  • –––, 2002, ‘Nixin’ Goes to China’, in Preston and Bishop (eds.) 2002, 123–143.
  • Hayes, P., Harnad, S., Perlis, D. & Block, N., 1992, ‘Virtual Symposium on Virtual Mind’, Minds and Machines , 2(3): 217–238.
  • Hofstadter, D., 1981, ‘Reflections on Searle’, in Hofstadter and Dennett (eds.), The Mind's I , New York: Basic Books, pp. 373–382.
  • Jackson, F., 1986, ‘What Mary Didn't Know’, Journal of Philosophy , LXXXIII: 291–5.
  • Kaernbach, C., 2005, ‘No Virtual Mind in the Chinese Room’, Journal of Consciousness Studies , 12(11): 31–42.
  • Kurzweil, R., 2000, The Age of Spiritual Machines: When Computers Exceed Human Intelligence , New York: Penguin.
  • –––, 2002, ‘Locked in his Chinese Room’, in Richards 2002, 128–171.
  • Maloney, J., 1987, ‘The Right Stuff’, Synthese , 70: 349–72.
  • Maudlin, T., 1989, ‘Computation and Consciousness’, Journal of Philosophy , LXXXVI: 407–432.
  • Millikan, R., 1984, Language, Thought, and other Biological Categories , Cambridge, MA: MIT Press.
  • Moravec, H., 1999, Robot: Mere Machine to Transcendent Mind , New York: Oxford University Press.
  • Penrose, R., 2002, ‘Consciousness, Computation, and the Chinese Room’ in Preston and Bishop (eds.) 2002, 226–249.
  • Pinker, S., 1997, How the Mind Works , New York: Norton.
  • Preston, J. and M. Bishop (eds.), 2002, Views into the Chinese Room: New Essays on Searle and Artificial Intelligence , New York: Oxford University Press.
  • Rapaport, W., 1986, ‘Searle's Experiments with Thought’, Philosophy of Science , 53: 271–9.
  • Rey, G., 1986, ‘What's Really Going on in Searle's “Chinese Room”’, Philosophical Studies , 50: 169–85.
  • –––, 2002, ‘Searle's Misunderstandings of Functionalism and Strong AI’, in Preston and Bishop (eds.) 2002, 201–225.
  • Richards, J. W. (ed.), 2002, Are We Spiritual Machines: Ray Kurzweil vs. the Critics of Strong AI , Seattle: Discovery Institute.
  • Rosenthal, D. (ed), 1991, The Nature of Mind , Oxford and NY: Oxford University Press.
  • Schank, R. and Abelson, R., 1977, Scripts, Plans, Goals, and Understanding , Hillsdale, NJ: Lawrence Erlbaum.
  • Schank, R. and P. Childers, 1985, The Cognitive Computer: On Language, Learning, and Artificial Intelligence , New York: Addison-Wesley.
  • Searle, J., 1980, ‘Minds, Brains and Programs’, Behavioral and Brain Sciences , 3: 417–57.
  • –––, 1984, Minds, Brains and Science , Cambridge, MA: Harvard University Press.
  • –––, 1989, ‘Artificial Intelligence and the Chinese Room: An Exchange’, New York Review of Books , 36: 2 (February 16, 1989).
  • –––, 1990a, ‘Is the Brain's Mind a Computer Program?’, Scientific American , 262(1): 26–31.
  • –––, 1990b, ‘Presidential Address’, Proceedings and Addresses of the American Philosophical Association , 64: 21–37.
  • –––, 1998, ‘Do We Understand Consciousness?’ (Interview with Walter Freeman), Journal of Consciousness Studies , 6: 5–6.
  • –––, 1999, ‘The Chinese Room’, in R.A. Wilson and F. Keil (eds.), The MIT Encyclopedia of the Cognitive Sciences , Cambridge, MA: MIT Press.
  • –––, 2002a, ‘Twenty-one Years in the Chinese Room’, in Preston and Bishop (eds.) 2002, 51–69.
  • –––, 2002b, ‘The Problem of Consciousness’, in Consciousness and Language , Cambridge: Cambridge University Press, 7–17.
  • Shaffer, M., 2009, ‘A Logical Hole in the Chinese Room’, Minds and Machines , 19(2): 229–235.
  • Sharvy, R., 1985, ‘It Ain't the Meat It's the Motion’, Inquiry , 26: 125–134.
  • Simon, H. and Eisenstadt, S., 2002, ‘A Chinese Room that Understands’, in Preston and Bishop (eds.) 2002, 95–108.
  • Sloman, A. and Croucher, M., 1980, ‘How to turn an information processor into an understander’, Behavioral and Brain Sciences , 3: 447–8.
  • Sprevak, M., 2007, ‘Chinese Rooms and Program Portability’, British Journal for the Philosophy of Science , 58(4): 755–776.
  • Stampe, Dennis, 1977, ‘Towards a Causal Theory of Linguistic Representation’, in P. French, T. Uehling, H. Wettstein, (eds.) Contemporary Perspectives in the Philosophy of Language , (Midwest Studies in Philosophy, Volume 2), Minneapolis: University of Minnesota Press, pp. 42–63.
  • Thagard, P., 1986, ‘The Emergence of Meaning: An Escape from Searle's Chinese Room’, Behaviorism , 14: 139–46.
  • Turing, A., 1948, ‘Intelligent Machinery: A Report’, London: National Physical Laboratory.
  • –––, 1950, ‘Computing Machinery and Intelligence’, Mind , 59: 433–460.
  • Weiss, T., 1990, ‘Closing the Chinese Room’, Ratio , 3: 165–81.
  • Failures of Computationalism (Searle's reply to Harnad, and Harnad's response)
  • Cole, D., 2000, ‘Inverted Spectrum Arguments’, unpublished manuscript.
  • Chinese Room , entry in the Dictionary of Philosophy of Mind , Chris Eliasmith (ed.).
  • Annotated Chinese Room Bibliography, by L. Hauser.


The logic of Searle’s Chinese room argument

  • Original Paper
  • Published: 05 August 2006
  • Volume 16, pages 163–183 (2006)

  • Robert I. Damper

John Searle’s Chinese room argument (CRA) is a celebrated thought experiment designed to refute the hypothesis, popular among artificial intelligence (AI) scientists and philosophers of mind, that “the appropriately programmed computer really is a mind”. Since its publication in 1980, the CRA has evoked an enormous amount of debate about its implications for machine intelligence, the functionalist philosophy of mind, theories of consciousness, etc. Although the general consensus among commentators is that the CRA is flawed, and notwithstanding the popularity of the systems reply in some quarters, there is remarkably little agreement on exactly how and why it is flawed. A newcomer to the controversy could be forgiven for thinking that the bewildering collection of diverse replies to Searle betrays a tendency to unprincipled, ad hoc argumentation and, thereby, a weakness in the opposition’s case. In this paper, treating the CRA as a prototypical example of a ‘destructive’ thought experiment, I attempt to set it in a logical framework (due to Sorensen), which allows us to systematise and classify the various objections. Since thought experiments are always posed in narrative form, formal logic by itself cannot fully capture the controversy. On the contrary, much also hinges on how one translates between the informal everyday language in which the CRA was initially framed and formal logic and, in particular, on the specific conception(s) of possibility that one reads into the logical formalism.

Abelson, R. P. (1980). Searle’s argument is just a set of Chinese symbols. Behavioral and Brain Sciences, 3 (3), 424–425. (Peer commentary on Searle, 1980).

Anderson, D. (1987). Is the Chinese room the real thing? Philosophy, 62 (3), 389–393.

Arthur, R. (1999). On thought experiments as a priori science. International Studies in the Philosophy of Science, 13 (3), 215–229.

Ben-Yami, H. (1993). A note on the Chinese room. Synthese, 95 (2), 169–172.

Bennett, J. (2003). A philosophical guide to conditionals . New York, NY: Oxford University Press.

Brooks, D. H. M. (1994). The method of thought experiment. Metaphilosophy, 25 (1), 71–83.

Brooks, R. A. (1999). Cambrian intelligence . Cambridge, MA: Bradford Books/MIT Press.

Brooks, R. A. (2002). Robot: The future of flesh and machines . London, UK: Penguin.

Brown, J. R. (1991). The laboratory of the mind: Thought experiments in the natural sciences . London and New York: Routledge, 1993 paperback edition.

Bunzl, M. (1996). The logic of thought experiments. Synthese, 106 (2), 227–240.

Clark, A. (1987). Being there: Why implementation matters to cognitive science. Artificial Intelligence Review, 1 (4), 231–244.

Cole, D. (1984). Thought and thought experiments. Philosophical Studies, 45 (3), 431–444.

Cole, D. (1991). Artificial intelligence and personal identity. Synthese, 88 (3), 399–417.

Copeland, B. J. (1993). Artificial intelligence: A philosophical introduction . Oxford, UK: Blackwell.

Copeland, B. J. (2000). The Turing test. Minds and Machines, 10 (4), 519–539.

Copeland, B. J. (2002a). The Chinese room from a logical point of view. In J. Preston, & M. Bishop (Eds.). Views into the Chinese room: Essays on Searle and artificial intelligence (pp. 109–122). Oxford, UK: Clarendon Press.

Copeland, B. J. (2002b). Hypercomputation. Minds and Machines, 12 (4), 461–502.

Damper, R. I. (2004). The Chinese room argument: Dead but not yet buried. Journal of Consciousness Studies, 11 (5–6), 159–169.

Damper, R. I. (2006). Thought experiments can be harmful. The Pantaneto Forum , Issue 26. http://www.pantaneto.co.uk.

Dennett, D. (1980). The milk of human intentionality. Behavioral and Brain Sciences, 3 (3), 428–430. (Peer commentary on Searle, 1980).

Dennett, D. (1991). Consciousness explained . Boston, MA: Little, Brown and Company.

DeRose, K. (1991). Epistemic possibilities. Philosophical Review, 100 (4), 581–605.

Dietrich, E. (1990). Computationalism. Social Epistemology, 4 (2), 135–154.

French, R. M. (1990). Subcognition and the limits of the Turing test. Mind, 99 (393), 53–65.

French, R. M. (2000a). The Chinese room: Just say “no”! In Proceedings of 22nd annual cognitive science society conference (pp. 657–662). Philadelphia, PA: Lawrence Erlbaum Associates, Mahwah, NJ.

French, R. M. (2000b). The Turing test: The first 50 years. Trends in Cognitive Science, 4 (3), 115–122.

Gabbay, D. (1998). Elementary logics: A procedural perspective . Hemel Hempstead, UK: Prentice Hall Europe.

Gendler, T. S. (2000). Thought experiment: On the powers and limits of imaginary cases . New York, NY: Garland Press.

Gendler, T. S., & Hawthorne, J. (Eds.) (2002). Conceivability and possibility . Oxford, UK: Clarendon Press.

Gomila, A. (1991). What is a thought experiment? Metaphilosophy, 22 (1–2), 84–92.

Hacking, I. (1967). Possibility. Philosophical Review, 76 (2), 143–168.

Hacking, I. (1975). All kinds of possibility. Philosophical Review, 84 (3), 321–337.

Häggqvist, S. (1996). Thought experiments in philosophy . Stockholm, Sweden: Almqvist & Wiksell.

Harnad, S. (1989). Minds, machines and Searle. Journal of Experimental and Theoretical Artificial Intelligence, 1 (1), 5–25.

Harnad, S. (2002). Minds, machines and Searle 2: What’s wrong and right about the Chinese room argument. In J. Preston, & M. Bishop (Eds.). Views into the Chinese room: Essays on Searle and artificial intelligence (pp. 294–307). Oxford, UK: Clarendon Press.

Haugeland, J. (2002). Syntax, semantics, physics. In J. Preston, & M. Bishop (Eds.). Views into the Chinese room: Essays on Searle and artificial intelligence (pp. 379–392). Oxford, UK: Clarendon Press.

Hofstadter, D. (1980). Reductionism and religion. Behavioral and Brain Sciences, 3 (3), 433–434. (Peer commentary on Searle, 1980).

Hofstadter, D. R., & Dennett, D. C. (1981). The mind’s I: Fantasies and reflections on self and soul . Brighton, UK: Harvester Press.

Horowitz, T., & Massey, G. (Eds.) (1991). Thought experiments in science and philosophy . Lanham, MD: Rowman and Littlefield.

Jacquette, D. (1989). Adventures in the Chinese room. Philosophy and Phenomenological Research, 49 (4), 605–623.

Lewis, C. I. (1918). A survey of symbolic logic . Berkeley, CA: University of California Press.

Lewis, D. (1973). Counterfactuals . Cambridge, MA: Harvard University Press.

Lycan, W. (1980). The functionalist reply (Ohio State). Behavioral and Brain Sciences, 3 (3), 434–435. (Peer commentary on Searle, 1980).

Maloney, J. C. (1987). The right stuff. Synthese, 70 (3), 349–372.

McCarthy, J. (1979). Ascribing mental qualities to machines. In M. Ringle (Ed.), Philosophical perspectives in artificial intelligence (pp. 161–195). Atlantic Highlands, NJ: Humanities Press.

McFarland, D., & Bösser, T. (1993). Intelligent behavior in animals and robots . Cambridge, MA: Bradford Books/MIT Press.

Melnyk, A. (1996). Searle’s abstract argument against strong AI. Synthese, 108 (3), 391–419.

Moor, J. H. (1976). An analysis of the Turing test. Philosophical Studies, 30 (4), 249–257.

Moural, J. (2003). The Chinese room argument. In B. Smith (Ed.). John Searle (pp. 214–260). Cambridge, UK: Cambridge University Press.

Newell, A. (1973). Artificial intelligence and the concept of mind. In R. C. Shank, & K. M. Colby (Eds.), Computer models of thought and language (pp. 1–60). San Francisco, CA: Freeman.

Newell, A. (1980). Physical symbol systems. Cognitive Science, 4 (2), 135–183.

Norton, J. (1996). Are thought experiments just what you always thought? Canadian Journal of Philosophy, 26 (3), 333–366.

Peijnenburg, J., & Atkinson, D. (2003). When are thought experiments poor ones? Journal for General Philosophy of Science, 34 (2), 305–322.

Pfeifer, R., & Scheirer, C. (1999). Understanding intelligence . Cambridge, MA: MIT Press.

Preston, J. (2002). Introduction. In J. Preston, & M. Bishop (Eds.), Views into the Chinese room: Essays on Searle and artificial intelligence (pp. 1–50). Oxford, UK: Clarendon Press.

Preston, J., & Bishop, M. (Eds.) (2002). Views into the Chinese room: Essays on Searle and artificial intelligence . Oxford, UK: Clarendon Press.

Putnam, H. (1975). The meaning of ‘meaning’. In K. Gunderson (Ed.), Language, mind and knowledge (pp. 131–193). Minneapolis, MN: University of Minnesota Press.

Rapaport, W. J. (1986). Searle’s experiments with thought. Philosophy of Science, 53 (2), 271–279.

Reiss, J. (2002). Causal inference in the abstract or seven myths about thought experiments. Technical Report CTR 03/02, Centre for Philosophy of Natural and Social Science, London School of Economics, London, UK.

Russow, L.-M. (1984). Unlocking the Chinese room. Nature and System, 6 , 221–227.

Saygin, A. P., Cicekli, I., & Akman, A. (2000). Turing test: 50 years later. Minds and Machines, 10 (4), 463–518.

Schank, R. C., & Abelson, R. P. (1977). Scripts, plans, goals, and understanding . Hillsdale, NJ: Lawrence Erlbaum Associates.

Scheutz, M. (Ed.) (2002). Computationalism: New directions . Cambridge, MA: Bradford Books/MIT Press.

Searle, J. R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3 (3), 417–457. (Including peer commentary).

Searle, J. R. (1983). Intentionality: An essay in the philosophy of mind . Cambridge, UK: Cambridge University Press.

Searle, J. R. (1984). Minds, brains and science: The 1984 Reith lectures . London, UK: Penguin.

Searle, J. R. (1997). The mystery of consciousness . London, UK: Granta.

Searle, J. R. (2002). Twenty-one years in the Chinese room. In J. Preston, & M. Bishop (Eds.), Views into the Chinese room: Essays on Searle and artificial intelligence (pp. 51–59). Oxford, UK: Clarendon Press.

Seddon, G. (1972). Logical possibility. Mind, 81 (324), 481–494.

Siegelmann, H. T. (1999). Neural networks and analog computation: Beyond the Turing limit . Boston, MA: Birkhäuser.

Sloman, A., & Croucher, M. (1980). How to turn an information processor into an understander. Behavioral and Brain Sciences, 3 (3), 447–448. (Peer commentary on Searle, 1980).

Smith, B. (Ed.) (2003). John Searle . Cambridge, UK: Cambridge University Press.

Sorensen, R. A. (1992). Thought experiments . New York, NY: Oxford University Press.

Sorensen, R. A. (1998). Review of Sören Häggqvist’s “Thought experiments in philosophy”. Theoria, 64 (1), 108–118.

Souder, L. (2003). What are we to think about thought experiments? Argumentation, 17 (2), 203–217.

Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59 (236), 433–460.

Wakefield, J. C. (2003). The Chinese room argument reconsidered: Essentialism, indeterminacy and strong AI. Minds and Machines, 13 (2), 285–319.

Weiss, T. (1990). Closing the Chinese room. Ratio, 3 (2), 165–181.

Wilensky, R. (1983). Planning and understanding: A computational approach to human reasoning . Reading, MA: Addison-Wesley.

Wilkes, K. V. (1988). Real people: Personal identity without thought experiments . Oxford, UK: Clarendon.

Wilks, Y. (1982). Searle’s straw men. Behavioral and Brain Sciences, 5 (2), 343–344. (Continuing peer commentary on Searle, 1980).

Yablo, S. (1993). Is conceivability a guide to possibility? Philosophy and Phenomenological Research, 53 (1), 1–42.


Acknowledgements

I am indebted to David Atkinson, Alan Bundy, Martin Bunzl, Jack Copeland, Robert French, Sören Häggqvist, Stevan Harnad, Kieron O’Hara, Jeanne Peijnenburg, Adam Prügel-Bennett, Nigel Shadbolt and Aaron Sloman for critical comments on this paper, which helped me to improve it in clarity and presentation. These acknowledgements should not be taken to imply endorsement of the content.

Author information

Authors and Affiliations

Electronics & Computer Science, University of Southampton, Southampton, SO17 1BJ, UK

Robert I. Damper


Corresponding author

Correspondence to Robert I. Damper.

Additional information

Based on a paper presented at International Congress on Thought Experiments Rethought , Centre for Logic and Philosophy of Science, Ghent University, Belgium, 24–25 September 2004.


About this article

Damper, R.I. The logic of Searle’s Chinese room argument. Minds & Machines 16, 163–183 (2006). https://doi.org/10.1007/s11023-006-9031-5


Received: 18 March 2006

Accepted: 05 July 2006

Published: 05 August 2006

Issue Date: May 2006



  • Chinese room argument
  • Modal logic
  • Philosophy of mind
  • Thought experiments

The Chinese Room

The classic argument against the possibility of a machine understanding what it is doing is Searle's Chinese Room Thought Experiment.

To find out what a machine might understand, Searle puts himself in the machine's position and asks, what would I understand in this context?

The Chinese Room Thought Experiment

Searle imagines himself in a locked room where he is given pages with Chinese writing on them. He does not know Chinese. He does not even recognize the writing as Chinese per se. To him, these are meaningless squiggles. But he also has a rule-book, written in English, which dictates just how he should group the Chinese pages he has with any additional Chinese pages he might be given. The rules in the rule-book are purely formal. They tell him that a page with squiggles of this sort should be grouped with a page with squiggles of that sort but not with squiggles of the other sort. The new groupings mean no more to Searle than the original ordering. It's all just symbol-play, so far as he is concerned. Still, the rule-book is very good. To the Chinese-speaker reading the Searle-processed pages outside the room, whatever is in the room is being posed questions in Chinese and is answering them quite satisfactorily, also in Chinese.

The analogy, of course, is that a machine is in exactly the same position as Searle. Compare, for instance, Searle to R2D2. The robot is good at matching key features of faces with features stored in its database. But the matching is purely formal in exactly the same way that Searle's matching of pages is purely formal. It could not be said that the robot recognizes, say, Ted any more than it could be said that Searle understands Chinese. Even if R2D2 is given a mouth and facial features such that it smiles when it recognizes a friend and frowns when it sees a foe, so that to all outward appearances the robot understands its context and what it is seeing, the robot is not seeing at all. It is merely performing an arithmetical operation - matching pixels in one array with pixels in another array according to purely formal rules - in almost exactly the same way that Searle is matching pages with pages. R2D2 does not understand what it is doing any more than Searle understands what he is doing following the rulebook.

It is precisely because R2D2 has no capacity to understand what it is doing that the thought of putting the robot in a psychiatric hospital is absurd. Moreover, if Searle is correct, no amount of redesigning will ever result in a robot which understands what it is doing, since no matter how clever or complicated the rule-book, it is still just a rule-book. Yet if a machine cannot, in principle, understand what it is doing, then it cannot be intelligent.

 
  1. If it is possible for machines to be intelligent, then machines must understand what it is that they are doing. (premise)
  2. Nothing which operates only according to purely formal rules can understand what it is doing. (premise)
  3. Necessarily, machines operate only according to purely formal rules. (premise)
  4. Machines cannot understand what it is that they are doing. (from 2 and 3)
  5. Machines cannot be intelligent. (from 1 and 4)

Of course, there have been many, many criticisms of Searle's thought experiment. In the same article, Searle presents replies to some of these criticisms. Suffice it to say that the Chinese Room Thought Experiment poses a serious challenge to the possibility of Artificial Intelligence.

Searle and the Chinese Room Argument

David Leech Anderson: text author, storyboards. Robert Stufflebeam: animations, storyboards. Kari Cox: animations.

PART ONE: The Argument

Is it possible for a machine to be intelligent? To understand a language? If it is to understand a language, our “intelligent machine” must be able to grasp the meanings of words and sentences (for example, the sentence: “It will rain on Friday.”) And if it can do that, then it can have beliefs (“I believe that it will rain on Friday”). And if it can have beliefs then possibly it can have other mental states like hopes (“I hope that it will rain on Friday”) and fears (“I fear that it will rain on Friday”). But what is required to grasp the meanings of words?

Beliefs, hopes, fears, and even pains are all mental states. A thing that can have such mental states is said to have a “mind.” So what we are really asking is: “Is it possible for a machine to have a mind?” While there are many films that show us robots that behave as if they have minds, that is an illusion fashioned by Hollywood. The illusion is accomplished either by human beings who do have minds pretending to be robots or by machines that don't have minds being made (through special effects) to behave as if they do. We want to know whether a machine might one day genuinely have a mind.

Whether a machine could have a mind depends, of course, on what a "mind" is. Through the ages, a variety of different theories have been advanced which claim to explain the essential nature of "minds." One theory which has gone the furthest to encourage people to believe that a machine could have a mind is the theory known as "functionalism." [A general introduction to Functionalism is available here.] According to this theory, a mental state (like my belief that "It will rain today") is characterized by the function or purpose that it serves within the life of the individual. This is often described as its "causal role." There are several different varieties of functionalism, but the most influential is computational functionalism, which is sometimes called the "computational theory of mind." Mental states, on this account, are analogous to the software states of a computer. Your brain is the hardware and your mind is the software. If computational functionalism is true, then certainly it is possible for a machine to have mental states. All that is required is that the machine run the right kind of software.

Functionalism is one of the influences that has encouraged people to believe in the possibility of intelligent machines. Another such influence is the conviction that the only fair measure of intelligence (the only reasonable criterion) is a performance test. If something can behave as intelligently as a human being, then it should be credited with possessing that property. ("If it walks like a duck and talks like a duck, then it’s a duck.") It wouldn't be fair to deny it intelligence just because its body is made of metal instead of organic materials, would it? In the past some groups of people have claimed that other groups are of lesser intelligence and are less worthy of respect simply because of their ancestry, their skin color, or their gender. We should not want to make the same mistake with machines, if indeed the only relevant difference is the "stuff" out of which they are made.

Alan Turing was the first person to offer a test for machine intelligence: the Turing Test. While Turing himself denied that his test could determine whether a machine could actually "think" or have a "mind" (Turing thought these questions were too vague to answer), he did believe that his was the only concrete way to ask a meaningful version of that question. His new question was something like: "Could a machine carry on a conversation sufficiently well to fool humans into thinking that it was human?" This is a practical question for which a test can be designed. While the Turing Test is controversial, there are many who find it a reasonable test for intelligence and for language understanding.

John Searle is not among this group. He rejects functionalism and does not believe that the Turing Test is a reliable test for intelligence. In fact, he believes that he has an argument that shows that no classical artificial intelligence program [see Computer Types: Classical vs. Non-classical ] running on a digital computer will give a machine the capacity to understand a language. He calls his argument the "Chinese Room Argument." [NOTE: Searle actually believes that his argument works against "non-classical" computers as well, but it is best to start with the digital computers with which we are all most familiar.]

The Chinese Room

Searle asks you to imagine the following scenario**: There is a room. Sometimes people come to the room with a piece of paper which they slip into the room through a slot. Then they wait a while until the same piece of paper comes back out of the room through a second slot. You soon discover that the people slipping the paper into the room are native Chinese speakers who are sending questions into the room. Let's imagine that they have written the following question:

This question in Chinese means "What brings happiness?"

When the paper is later passed out of the room, the Chinese speakers discover that an answer has been written below the question. The answer is also in Chinese and the native speakers determine that it is, in fact, a wise answer to the question. (We made up the question, but we borrowed a line from The Tao Te Ching by Lao Tsu hoping for a "wise" answer.)

The answer to the question is highlighted in red. This sentence is from Lao-Tzu's Tao Te Ching. It means "Be the stream of the universe."

When the Chinese speakers receive intelligent answers to their questions, they reasonably conclude that there is an intelligent person inside the room who understands Chinese. But, in this thought experiment we are to imagine that the only person inside the room understands no Chinese and speaks only English. For the sake of discussion, we will assume it is Searle himself; you can assume (if you like) that it is you in the room. The room is full of books. Messages are passed into the room with symbols written on them. Searle's job is to look through the books until he finds the string of symbols that look exactly like the ones written on the piece of paper. When he finds that string of symbols, the book will tell him (in English) what new string of symbols he is to write on the bottom of the page, below the first string of symbols.

** NOTE: We have modified Searle's story slightly, leaving out some unnecessary details and adding a few embellishments to help clarify the main points. (For example, we have introduced a specific Chinese question and answer which Searle did not.) We recommend that you read the original paper to get Searle's version of the story.

The book that Searle is reading in this picture is a book written in English. Note that the only sentences on the page are sentences that you, as an English speaker, can understand. This book is one of countless volumes that together tell Searle what output (in the form of Chinese symbols) should be given in response to virtually any input (of Chinese symbols) that comes through the slot into the room. Each volume covers only a small percentage of the possible inputs. That's why there must be so many volumes. This particular volume tells what output to give in response to virtually any input of Chinese symbols that begins with the first two Chinese symbols written on the piece of paper.

Searle does not recognize any of the symbols. They are simply meaningless shapes to him. For all he knows, they may be nothing more than patterns for making wall-paper and not a language at all. As it turns out, the symbols do have meanings. They are Chinese symbols. More than that, they are questions in Chinese being asked by intelligent speakers.

The books function as a computer program. Each page gives instructions for how to manipulate symbols. The instructions at no point make any reference to the meaning of the symbols. (That is, nowhere will you find a sentence that gives the English translation of any of the Chinese symbols.) None of these books is anything like a Chinese-English dictionary. Instead, like a computer program itself, it instructs the reader how to manipulate the symbols based on their formal properties (their shape and position) not their meaning. If you see symbol "X" here, then write symbol "Y" there.
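To make this "purely formal" character concrete, here is a minimal sketch in Python (our illustration, not part of the original module; the symbol strings and replies are invented placeholders). The program pairs input shapes with output shapes and has no access to what, if anything, the symbols mean.

```python
# A minimal sketch of rule-following without understanding.
# The input/output pairs are invented placeholders; nothing in the
# program represents what any of the symbols mean.

RULE_BOOK = {
    "什么带来幸福": "为天下谿",  # placeholder question/answer shapes
    "你好吗": "我很好",
}

def chinese_room(incoming: str) -> str:
    """Return the output string paired with the incoming string.
    No translation, parsing, or meaning is involved: it is pure lookup."""
    return RULE_BOOK.get(incoming, "")  # unknown input -> empty reply

print(chinese_room("什么带来幸福"))  # emits the paired symbols, nothing more
```

A table like this could be made arbitrarily large without ever containing anything that resembles a translation or a definition, which is exactly the point the thought experiment trades on.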

It is time to see the Chinese Room in action. The original page presented an interactive Flash animation in which a representation of John Searle is busy at work inside the Chinese Room.

Because Flash was retired at the end of 2020, this virtual lab is now presented as a video that preserves all the content of the original animation.

Let's imagine that the books in the Chinese Room prove to be an effective AI program that can pass the Turing Test in Chinese. In the animation, the Chinese characters passed into the room ask the question "What brings happiness?", and the Chinese characters that Searle writes in response are from the Tao Te Ching by Lao Tsu (although Searle doesn't know that); the answer means "Be the stream of the universe." If the Chinese Room can pass the Turing Test, should we say that the room "understands" Chinese? "NO!" says Searle. If questions written in English were passed into the room, Searle would be able to read and understand them and write answers to them. But his relationship to the Chinese symbols is quite different. He could spend years, taking in thousands of pieces of paper with Chinese written on them, and he could write thousands of intelligent-sounding answers in Chinese, and yet at no point would he ever understand Chinese. Searle explains in his own words:

Suppose also that after a while I get so good at following the instructions for manipulating the Chinese symbols and the programmers get so good at writing the programs that from the external point of view—that is, from the point of view of somebody outside the room in which I am locked—my answers to the questions are absolutely indistinguishable from those of native Chinese speakers. Nobody just looking at my answers can tell that I don’t speak a word of Chinese. Let us also suppose that my answers to the English questions are, as they no doubt would be, indistinguishable from those of other native English speakers, for the simple reason that I am a native English speaker. From the external point of view—from the point of view of someone reading my "answers"—the answers to the Chinese questions and the English questions are equally good. But in the Chinese case, unlike the English case, I produce the answers by manipulating uninterpreted formal symbols. As far as the Chinese is concerned, I simply behave like a computer; I perform computational operations on formally specified elements. For the purposes of the Chinese, I am simply an instantiation of the computer program. Now the claims made by strong AI are that the programmed computer understands the stories and that the program in some sense explains human understanding. But we are now in a position to examine these claims in light of our thought experiment. 1. As regards the first claim, it seems to me quite obvious in the example that I do not understand a word of the Chinese stories. I have inputs and outputs that are indistinguishable from those of the native Chinese speaker, and I can have any formal program you like, but I still understand nothing. For the same reasons, Schank’s computer understands nothing of any stories, whether in Chinese, English, or whatever, since in the Chinese case the computer is me, and in cases where the computer is not me, the computer has nothing more than I have in the case where I understand nothing. ["Schank's computer" is a reference to a computer running an AI program written by Roger Schank called SAM (Script Applier Mechanism) which "reads" stories and answers questions about them.]

Searle believes that he has demonstrated that no computer program that manipulates symbols based solely on their formal "syntactic" properties (e.g., their shape and their position) can ever be said to understand a language, even if it does pass the Turing Test. It is important to note that Searle is not saying that no machine can understand a language. He is a naturalist who believes that human beings are biological machines and that we understand a language. He is not even saying that no computer could understand a language. Again, humans are computers in the sense that certain operations of the human brain can properly be described as "computing," as "implementing functions." Searle is not insisting that humans do not compute and do not implement functions. He is insisting, however, that genuine thought and understanding require something more than mere computation. In understanding a language we do not merely manipulate symbols based on their formal properties. We do something (he doesn't pretend to say what) in addition to manipulating symbols in virtue of which we actually understand the meaning of the symbols -- which the Chinese Room does not.

Responses to the Argument

It is probably safe to say that no argument in the philosophy of mind (or in any area dealing with the nature of thought and cognition) has generated the level of anger and the vitriolic attacks that the Chinese Room argument has. People do not merely accept or reject the argument: Often, they passionately embrace it or they belligerently mock it. The reason feelings run so high, I think, is that the argument has been very successful. In classrooms, when students with no background in the controversy first hear the argument, a majority (often a vast majority) immediately accept it. Many philosophers and scientists also accept the argument, although in that demographic the percentage of defenders is not nearly so large. Among those working in AI, cognitive science, and related fields, the feelings run strong because critics of the argument believe that it has had a negative impact on the field -- convincing people to abandon plausible and empirically powerful theories of the mind because of (what they consider to be) one dangerously misleading thought-experiment.

This is an important argument. Much discussed and much debated. The objective of this module is not to take sides. Instead, the goal is (1) to explain the structure of the argument (so as better to evaluate it), (2) to explain the structure of a few representative objections to the argument, and (3) to try to diagnose why there is such heated disagreement about whether the argument is or is not effective. In the end, we hope the reader will come away impressed by how deep the debate about the nature of the mind goes and how the Chinese Room argument might best be seen as a way to bring to light a person's fundamental assumptions about the nature of human thought and experience in much the same way that a Rorschach Test is supposed to reveal a person's deepest psychological traits.


PLATO - Philosophy Learning and Teaching Organization

Searle’s Chinese Room: Do computers think?


Lesson Plan

Can a computer think? John Searle’s Chinese Room argument can be used to argue that computers do not “think,” that computers do not understand the symbols that they process. For example, if you’re typing an email to your friend on the computer, the computer does not understand what your message to your friend means. This Chinese Room thought experiment was a response to the Turing Test.

In the Chinese Room argument from his publication, “Minds, Brains, and Programs,” Searle imagines being in a room by himself, where papers with Chinese symbols are slipped under the door. He has an instruction book in English that tells him what Chinese symbols to slip back out of the room. He does this all day long, matching one Chinese symbol to another Chinese symbol. After doing it for a while, he gets faster and faster at manipulating the Chinese symbols. He gets so good that he can memorize the symbols that come in and what symbols to send out, and he can manipulate symbols instantly. Does he understand Chinese? Most people would say, ‘no.’ This is analogous to a computer that takes in strings of symbols, manipulates them, and outputs other symbols. If you do not think the person in the Chinese Room understands Chinese, then a computer does not think either.

Here’s a stimulus for discussion:

Imagine that you work for a secret spy organization, and you have a small office all to yourself, where your job is to receive and send messages for this organization. The messages you receive have special codes that look like Chinese characters, and you have a highly-classified book that “decodes” the messages. You don’t know Chinese, so you don’t know if these codes are Chinese or some other language. First, you take the code you receive, and then you find the code in the book, which will then give you a new code to send out. So, what you’re doing is taking codes in and then sending out new codes. After you do this for a while, you begin to decode faster and faster, and after doing it for a long time, you have the codes memorized. So, when you receive one code, you instantly know what code to send out. Even though you instantly know what codes to send out after you get one in, do you understand what the codes mean? Do you understand the messages that your secret spy organization is sending and receiving?

(Allow students to respond. Most will say, “No.”)

If students say, “no,” then inquire, “Like the spy room, computers take in our commands and put out what we want. For example, when I push the power button, I can turn on the computer, or when I click on the internet browser icon, it brings up the internal search box. If you think that the person in the spy room doesn’t understand the messages, then does a computer understand what is happening?”

(Allow students to respond. Again, most will answer, “No.”)

A student might still think that the person in the Spy room doesn’t understand, yet find a hole in the analogy or argue that the computer is thinking or conscious nonetheless. For example, one might argue that the person in the room is only part of the system and doesn’t understand the symbols, but the whole system (room and person with its inputs and outputs) understands the messages. Or one might argue about the definition of thinking or understanding. There could be different levels or degrees of understanding. Or one can concede that a calculator or computer doesn’t understand, but what about a sophisticated robot that is programmed to act just like a human: walk and talk, eat and sleep, and even feel emotion?

  • Searle, John. R. (1980) Minds, brains, and programs. Behavioral and Brain Sciences 3 (3): 417-457
  • Stanford Encyclopedia of Philosophy: The Chinese Room Argument




Damian K. F. Pang M.Sc.

Artificial Intelligence

What a Mysterious Chinese Room Can Tell Us About Consciousness

How a simple thought experiment changed our views on computer and AI sentience.

Posted August 3, 2023 | Reviewed by Devon Frye

  • The Chinese room argument is a thought experiment by the American philosopher John Searle.
  • It has been used to argue against sentience by computers and machines.
  • While objections have been raised, it remains an influential way to think about AI and cognition.
  • Consciousness is mysterious, but computers don’t need to be sentient to produce meaningful language outputs.

Imagine you were locked inside a room full of drawers that are stacked with papers containing strange and enigmatic symbols. In the centre of the room is a table with a massive instruction manual in plain English that you can easily read.

Although the door is locked, there is a small slit with a brass letterbox flap on it. Through it, you receive messages with the same enigmatic symbols that are in the room. You can find the symbols for each message you receive in the enormous instruction manual, which then tells you exactly which paper to pick from the drawers and send out through the letterbox as a response.


Unbeknownst to the person trapped inside the room, the enigmatic symbols are actually Chinese characters. The person inside has unknowingly held a coherent conversation with people outside simply by following the instruction manual but without understanding anything or even being aware of anything other than messages being passed in and out.


This story was conceived by the American philosopher John Searle 1 in 1980, and the paper in which it appeared has become one of the most influential and most cited in the cognitive sciences and the philosophy of mind, with huge implications for how we see computers, artificial intelligence (AI), and machine sentience (Cole, 2023).

Searle (1980) used this thought experiment to argue that computer programs—which also manipulate symbols according to set rules—do not truly understand language or require any form of consciousness even when giving responses comparable to those of humans.

Is AI Sentient?

A Google engineer made headlines in 2022 by claiming that the AI program he was working on was sentient and alive (Tiku, 2022). The recent advance of language-based AI, like ChatGPT, has made many people interact with it just as they would with real people (see "Why Does ChatGPT Feel So Human?").

It is not surprising then, that many users truly believe that AI has become sentient (Davalos & Lanxon, 2023). However, most experts don’t think that AI is conscious (Davalos & Lanxon, 2023; Pang, 2023a), not least because of the influence of Searle’s Chinese room argument.

Consciousness is a difficult concept that is hard to define and fully understand (see "What is Consciousness?" and "The Many Dimensions of Consciousness"; Pang, 2023b; Pang, 2023c). AI programs like ChatGPT employ large language models (LLMs) that use statistical analyses of billions of sentences written by humans to create outputs based on predictive probabilities (Pang, 2023a). In this sense, it is a purely mathematical approach based on a huge amount of data.
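As a rough, hypothetical illustration of what "outputs based on predictive probabilities" means, consider the toy sketch below. The two-word contexts and probabilities are invented; real large language models learn such distributions over enormous vocabularies with neural networks rather than hand-written tables.

```python
import random

# Toy next-word table: contexts and probabilities are invented for
# illustration; real LLMs estimate these from vast text corpora.
NEXT_WORD_PROBS = {
    ("the", "cat"): {"sat": 0.6, "slept": 0.3, "ran": 0.1},
    ("cat", "sat"): {"on": 0.8, "quietly": 0.2},
}

def continue_text(words, steps=2):
    """Repeatedly sample the next word from the distribution for the
    last two words, stopping when no distribution is available."""
    words = list(words)
    for _ in range(steps):
        dist = NEXT_WORD_PROBS.get(tuple(words[-2:]))
        if dist is None:
            break
        choices, weights = zip(*dist.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(continue_text(["the", "cat"]))  # e.g. "the cat sat on"
```

Nothing in this procedure requires the system to know what a cat is; it only requires the numbers.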

While this is a tremendous achievement and a hugely complex task, in its essence, AI follows instructions to create an output based on an input, just like the person stuck in the Chinese room thought experiment. Sentience is not required to have sophisticated outputs or even to pass the Turing test—where a human evaluator cannot tell the difference between communicating with a machine or with another human (Oppy & Dowe, 2021).


But there is another more troubling implication of Searle’s thought experiment: There is a conscious human being inside the Chinese room who is completely unaware of the communications going on in Chinese. Although we have no evidence suggesting that AI is conscious, let’s assume for a moment that it was: The conscious part is unlikely to understand its own language model and while sentient, may have no idea about the meaning of its own language-based output—just like the person inside the Chinese room.

If AI were conscious, it might be suffering from a kind of locked-in syndrome (see "The Mysteries of a Mind Locked Inside an Unresponsive Body"; Pang, 2023d). It is not clear if this barrier could ever be overcome.

Another implication of the Chinese room argument is that language production does not necessarily have to be linked to consciousness. This is not just true for machines but also for humans: Not everything people say or do is done consciously.

Searle’s influential essay has not been without its critics. In fact, it had an extremely hostile reception after its initial publication, with 27 simultaneously published responses that wavered between antagonistic and rude (Searle, 2009). Everyone seemed to agree that the argument was wrong but there was no clear consensus on why it was wrong (Searle, 2009).


While the initial responses may have been reactionary and emotional, new discussions have appeared constantly over the past four decades since its publication. The most cogent response is that while no individual component inside the room understands Chinese, the system as a whole does (Block, 1981; Cole, 2023). Searle responded that the person could theoretically memorize the instructions and thus, embody the whole system while still not being able to understand Chinese (Cole, 2023). Another possible response is that understanding is fed into the system through the person (or entity) that wrote the instruction manual, which is now detached from the system.

Another objection is that AI is no longer just following instructions but is self-learning (LeCun et al., 2015). Moreover, when AI is embodied as a robot, the system could ground bodily regulation, emotion, and feelings just like humans (Ziemke, 2016). The problem is that we still don’t understand how consciousness works in humans, and it is not clear why having a body or self-learning software would suddenly generate conscious awareness.

Many other replies and counterarguments have been proposed. While still controversial, the Chinese room argument has been and still is hugely influential in the cognitive sciences, AI studies, and the philosophy of mind.

1 John Searle is one of the most influential contemporary philosophers of mind. His stellar academic career at Oxford and UC Berkeley has been tainted by allegations of sexual assault: A lawsuit filed in 2019 reached a confidential settlement and an internal investigation by UC Berkeley resulted in his emeritus status being revoked (Atkins, 2018; Weinberg, 2019).

Atkins, D. (2018, October 16). Berkeley Prof can't avoid harassment settlement, judge told. Law 360. https://jacksonkernion.com/files/Law360%20Article.pdf

Block, N. (1981). Psychologism and Behaviorism. Philosophical Review, 90 (1), 5-43. https://doi.org/10.2307/2184371

Cole, D. (2023). The Chinese Room Argument. In E. N. Zalta & U. Nodelman (Eds.) Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/entries/chinese-room/

Davalos, J., & Lanxon, N. (2023, April 19). AI isn’t sentient. Blame its creators for making people think it is. Bloomberg. https://www.bloomberg.com/news/newsletters/2023-04-19/ai-sentience-debate-chatgpt-highlights-risks-of-humanizing-chatbots

LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521 (7553), 436-444. https://doi.org/10.1038/nature14539

Oppy, G., & Dowe, D. (2021). The Turing Test. Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/entries/turing-test/

Pang, D. K. F. (2023a). Why does ChatGPT feel so human? Psychology Today. https://www.psychologytoday.com/intl/blog/consciousness-and-beyond/202305/why-does-chatgpt-feel-so-human

Pang, D. K. F. (2023b). What is consciousness? Psychology Today . https://www.psychologytoday.com/intl/blog/consciousness-and-beyond/202305/what-is-consciousness

Pang, D. K. F. (2023c). The many dimensions of consciousness. Psychology Today . https://www.psychologytoday.com/intl/blog/consciousness-and-beyond/202305/the-many-dimensions-of-consciousness

Pang, D. K. F. (2023d). The mysteries of a mind locked inside an unresponsive body. Psychology Today . https://www.psychologytoday.com/intl/blog/consciousness-and-beyond/202307/the-mysteries-of-a-mind-locked-inside-an-unresponsive-body

Searle, J. R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3 (3), 417-457. https://doi.org/10.1017/S0140525X00005756

Searle, J. R. (2009). Chinese Room argument. Scholarpedia, 4 (8), 3100. http://dx.doi.org/10.4249/scholarpedia.3100

Tiku, N. (2022, June 11). The Google engineer who thinks the company’s AI has come to life. The Washington Post. https://www.washingtonpost.com/technology/2022/06/11/google-ai-lamda-blake-lemoine/

Weinberg, J. (2019, June 21). Searle found to have violated sexual harassment policies (Updated with further details and statement from Berkeley). Daily Nous. https://dailynous.com/2019/06/21/searle-found-violated-sexual-harassment-policies/

Ziemke, T. (2016). The body of knowledge: On the role of the living body in grounding embodied cognition. Biosystems, 148 , 4-11. https://doi.org/10.1016/j.biosystems.2016.08.005


Damian K. F. Pang, M.Sc., is a researcher focusing on consciousness, perception, and memory as well as the philosophy of mind and the similarities and differences between human cognition and AI.


March 9, 2021

Quantum Mechanics, the Chinese Room Experiment and the Limits of Understanding

All of us, even physicists, often process information without really knowing what we’re doing

By John Horgan



Like great art, great thought experiments have implications unintended by their creators. Take philosopher John Searle’s Chinese room experiment. Searle concocted it to convince us that computers don’t really “think” as we do; they manipulate symbols mindlessly, without understanding what they are doing.

Searle meant to make a point about the limits of machine cognition. Recently, however, the Chinese room experiment has goaded me into dwelling on the limits of human cognition. We humans can be pretty mindless too, even when engaged in a pursuit as lofty as quantum physics.

Some background. Searle first proposed the Chinese room experiment in 1980. At the time, artificial intelligence researchers, who have always been prone to mood swings, were cocky. Some claimed that machines would soon pass the Turing test, a means of determining whether a machine “thinks.”


Computer pioneer Alan Turing proposed in 1950 that questions be fed to a machine and a human. If we cannot distinguish the machine’s answers from the human’s, then we must grant that the machine does indeed think. Thinking, after all, is just the manipulation of symbols, such as numbers or words, toward a certain end.

Some AI enthusiasts insisted that “thinking,” whether carried out by neurons or transistors, entails conscious understanding. Marvin Minsky espoused this “strong AI” viewpoint when I interviewed him in 1993. After defining consciousness as a record-keeping system, Minsky asserted that LISP software, which tracks its own computations, is “extremely conscious,” much more so than humans. When I expressed skepticism, Minsky called me “racist.”

Back to Searle, who found strong AI annoying and wanted to rebut it. He asks us to imagine a man who doesn’t understand Chinese sitting in a room. The room contains a manual that tells the man how to respond to a string of Chinese characters with another string of characters. Someone outside the room slips a sheet of paper with Chinese characters on it under the door. The man finds the right response in the manual, copies it onto a sheet of paper and slips it back under the door.

Unknown to the man, he is replying to a question, like “What is your favorite color?,” with an appropriate answer, like “Blue.” In this way, he mimics someone who understands Chinese even though he doesn’t know a word. That’s what computers do, too, according to Searle. They process symbols in ways that simulate human thinking, but they are actually mindless automatons.

Searle’s thought experiment has provoked countless objections. Here’s mine. The Chinese room experiment is a splendid case of begging the question (not in the sense of raising a question, which is what most people mean by the phrase nowadays, but in the original sense of circular reasoning). The meta-question posed by the Chinese Room Experiment is this: How do we know whether any entity, biological or non-biological, has a subjective, conscious experience?

When you ask this question, you are bumping into what I call the solipsism problem. No conscious being has direct access to the conscious experience of any other conscious being. I cannot be absolutely sure that you or any other person is conscious, let alone that a jellyfish or smartphone is conscious. I can only make inferences based on the behavior of the person, jellyfish or smartphone.

Now, I assume that most humans, including those of you reading these words, are conscious, as I am. I also suspect that Searle is probably right, and that an “intelligent” program like Siri only mimics understanding of English. It doesn’t feel like anything to be Siri, which manipulates bits mindlessly. That’s my guess, but I can’t know for sure, because of the solipsism problem.

Nor can I know what it’s like to be the man in the Chinese room. He may or may not understand Chinese; he may or may not be conscious. There is no way of knowing, again, because of the solipsism problem. Searle’s argument assumes that we can know what’s going on, or not going on, in the man’s mind, and hence, by implication, what’s going on or not in a machine. His flawed initial assumption leads to his flawed, question-begging conclusion.

That doesn’t mean the Chinese room experiment has no value. Far from it. The Stanford Encyclopedia of Philosophy calls it “the most widely discussed philosophical argument in cognitive science to appear since the Turing Test.” Searle’s thought experiment continues to pop up in my thoughts. Recently, for example, it nudged me toward a disturbing conclusion about quantum mechanics, which I’ve been struggling to learn over the last year or so.

Physicists emphasize that you cannot understand quantum mechanics without understanding its underlying mathematics. You should have, at a minimum, a grounding in logarithms, trigonometry, calculus (differential and integral) and linear algebra. Knowing Fourier transforms wouldn’t hurt.

That’s a lot of math, especially for a geezer and former literature major like me. I was thus relieved to discover Q Is for Quantum by physicist Terry Rudolph. He explains superposition, entanglement and other key quantum concepts with a relatively simple mathematical system, which involves arithmetic, a little algebra and lots of diagrams with black and white balls falling into and out of boxes.

Rudolph emphasizes, however, that some math is essential. Trying to grasp quantum mechanics without any math, he says, is like “having van Gogh’s ‘Starry Night’ described in words to you by someone who has only seen a black and white photograph. One that a dog chewed.”

But here’s the irony. Mastering the mathematics of quantum mechanics doesn’t make it easier to understand and might even make it harder. Rudolph, who teaches quantum mechanics and co-founded a quantum-computer company, says he feels “cognitive dissonance” when he tries to connect quantum formulas to sensible physical phenomena.

Indeed, some physicists and philosophers worry that physics education focuses too narrowly on formulas and not enough on what they mean. Philosopher Tim Maudlin complains in Philosophy of Physics: Quantum Theory that most physics textbooks and courses do not present quantum mechanics as a theory, that is, a description of the world; instead, they present it as a “recipe,” or set of mathematical procedures, for accomplishing certain tasks.

Learning the recipe can help you predict the results of experiments and design microchips, Maudlin acknowledges. But if a physics student “happens to be unsatisfied with just learning these mathematical techniques for making predictions and asks instead what the theory claims about the physical world, she or he is likely to be met with a canonical response: Shut up and calculate!”

In his book, Maudlin presents several attempts to make sense of quantum mechanics, including the pilot-wave and many-worlds models. His goal is to show that we can translate the Schrödinger equation and other formulas into intelligible accounts of what’s happening in, say, the double-slit experiment. But to my mind, Maudlin’s ruthless examination of the quantum models subverts his intention. Each model seems preposterous in its own way.

Pondering the plight of physicists, I’m reminded of an argument advanced by philosopher Daniel Dennett in From Bacteria to Bach and Back: The Evolution of Minds. Dennett elaborates on his long-standing claim that consciousness is overrated, at least when it comes to doing what we need to do to get through a typical day. We carry out most tasks with little or no conscious attention.

Dennett calls this “competence without comprehension.” Adding insult to injury, Dennett suggests that we are virtual “zombies.” When philosophers refer to zombies, they mean not the clumsy, grunting cannibals of The Walking Dead but creatures that walk and talk like sentient humans but lack inner awareness.

When I reviewed Dennett’s book, I slammed him for downplaying consciousness and overstating the significance of unconscious cognition. Competence without comprehension may apply to menial tasks like brushing your teeth or driving a car but certainly not to science and other lofty intellectual pursuits. Maybe Dennett is a zombie, but I’m not! That, more or less, was my reaction.

But lately I’ve been haunted by the ubiquity of competence without comprehension. Quantum physicists, for example, manipulate differential equations and matrices with impressive competence—enough to build quantum computers!—but no real understanding of what the math means. If physicists end up like information-processing automatons, what hope is there for the rest of us? After all, our minds are habituation machines, designed to turn even complex tasks—like being a parent, husband or teacher—into routines that we perform by rote, with minimal cognitive effort.

The Chinese room experiment serves as a metaphor not only for physics but also for the human condition. Each of us sits alone within the cell of our subjective awareness. Now and then we receive cryptic messages from the outside world. Only dimly comprehending what we are doing, we compose responses, which we slip under the door. In this way, we manage to survive, even though we never really know what the hell is happening.

Further Reading :

Is the Schrödinger Equation True?

Will Artificial Intelligence Ever Live Up to Its Hype?

Can Science Illuminate Our Inner Dark Matter?

Philosophy Now: a magazine of ideas


AI & Mind

Arguing with the Chinese Room

Michael DeBellis says Searle’s famous argument about computers not having understanding does not compute.

Many readers of this magazine will be familiar with John Searle’s classic ‘Chinese Room’ argument against ascribing consciousness to Artificial Intelligence. Due to my experience building AI systems for business applications, I have a different take on Searle’s argument than most others. But first let’s look at his argument.

The Chinese Room

Searle introduced the Chinese Room in a paper published in 1980, called ‘Minds, Brains, and Programs’ ( Behavioral and Brain Sciences , vol.3, no.3). The paper begins with the following thought experiment:

Professor Searle is locked in a room. He can’t read Chinese or even distinguish Chinese characters from Japanese. He’s given four sets of paper. The people giving them to him have labels for each set, although Searle is not aware of their labels. I’ll put the labels at the beginning of each numbered item, along with Searle’s description in quotes:

1. Script : “A large batch of Chinese writing”

2. Story : “A second batch of Chinese [text]”

3. Questions : “A third batch of Chinese symbols”

4. Program : “Instructions… in English, that enable me to correlate elements of [3] with [1] and [2]. These rules instruct me how to give back [5]”

5. Answers : “Certain Chinese symbols with certain sorts of shapes in response to certain sorts of shapes given me in [3]”

The idea is that the instructions [4] tell Searle how to respond to certain sets of Chinese symbols [3] by outputting other Chinese symbols in specific ways [5]. In this way Searle gives coherent Chinese answers to Chinese questions without understanding a word of Chinese. The final part of Searle’s thought experiment is to “Suppose [that] I get so good at following the instructions… and the programmers get so good at writing the programs that from… the point of view of somebody outside the room… my answers to the questions are absolutely indistinguishable from those of native Chinese speakers. Nobody just looking at my answers can tell that I don’t speak a word of Chinese.”

Searle points out that this system does the same thing as AI programs. His implication is clear: just because a computer program gives good answers to questions, that doesn’t mean it understands what is going on. Later in that paper he also equates this with passing the Turing Test, which is concerned with determining whether one’s interlocutor is conscious or not.

Since Searle has no understanding of Chinese even though he is able to process the questions by following an algorithm (the instructions), he asserts that in the same way there need be no understanding in AI systems, because what they are doing is equivalent to what he is doing.


Problems & Agreements with Searle

Let me now start by describing where I agree with Searle, then mention some fairly minor problems, then go on to what I think is the key issue.

I agree with Searle that the way Roger Schank and other early AI researchers described their progress was over-optimistic. One of the most infamous examples is from Marvin Minsky, who in 1970 stated, “In from three to eight years we will have a machine with the general intelligence of an average human being.” Schank wasn’t quite as extreme, but some of the ways he discusses the consciousness of a computer program - one able to solve a very narrow set of linguistic tasks - were inflated. I think probably most AI researchers would now agree with that. However, there is a difference between deflating the significance of an idea, and claiming that all work that follows a similar methodology is completely vacuous.

Beginning with the less significant counterarguments: the scenario Searle describes would never actually work. Of course, the natural response is ‘It’s a thought experiment: it doesn’t have to be something that can actually be implemented’. While it’s true that certain details can be waved away for a thought experiment, there are other details that can’t simply be dismissed.

So why do I maintain that Searle’s system couldn’t work, and why does that matter? Because the Chinese Room could never approach the speed of a native Chinese speaker, and speed is an issue for passing the Turing Test.

The sort of mechanism Searle describes in his thought experiment is a model known as a Finite State Machine (FSM). Noam Chomsky defined a hierarchy of languages based on the complexity of the phrases they could generate, and the FSM family of languages is the simplest type. Here the input to the system is a set of symbols, and the system uses a set of rules to correlate the input symbols with another set of symbols, which are the output. A thermostat is a classic FSM. It regularly takes readings, and if the temperature is below a threshold it turns on the heat and leaves it on until the temperature is above another threshold. The crucial missing element in an FSM is memory. There is no mechanism where symbols can be stored so that they can be resolved later based on context.
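A minimal sketch of that thermostat-style FSM, written in Python with invented thresholds and readings, shows how the next state depends only on the current state and the current input:

```python
# A sketch of the thermostat-style finite state machine described above.
# Thresholds and temperature readings are invented for illustration.

LOW, HIGH = 18.0, 21.0  # hypothetical switch-on / switch-off thresholds

def step(state: str, reading: float) -> str:
    """The next state depends only on the current state and the current
    input; nothing that happened earlier is remembered."""
    if state == "HEAT_OFF" and reading < LOW:
        return "HEAT_ON"
    if state == "HEAT_ON" and reading > HIGH:
        return "HEAT_OFF"
    return state

state = "HEAT_OFF"
for reading in [19.5, 17.2, 18.9, 21.4]:  # made-up readings
    state = step(state, reading)
    print(reading, "->", state)
```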

In Syntactic Structures (1957), Chomsky proved that an FSM is incapable of parsing natural languages. An intuitive argument for why FSMs can’t process natural language can be seen by considering a simple English sentence that Chomsky often uses: ‘I saw the man on the hill with a telescope’. Who has the telescope? Is it me, or the man on the hill? There is no way to determine the referent from this single sentence. This is known as the problem of anaphora in linguistics: sentences that use pronouns such as ‘I’ or noun phrases such as ‘the man on the hill’ often need the context of sentences that came before or after to disambiguate who the referent is.

To process anaphora (and many other features of natural languages), the system doing the processing needs memory as well as rules. Unidentified variables need to be stored somewhere so that they can be resolved by context that comes before or after. But an FSM such as Searle’s Room has no memory. It just takes symbols as input, and moves to different states as a result of applying rules to the input. It can’t interpret ambiguity.
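By contrast, even a single memory slot lets a system carry a referent forward from one sentence to the next. The toy sketch below (a hypothetical illustration with an invented name lexicon, not a real anaphora resolver) makes the difference visible:

```python
# A toy contrast with the FSM above: one memory slot lets the system
# resolve a pronoun to an earlier referent. The lexicon and rules are
# invented placeholders, not a real anaphora resolver.

KNOWN_NAMES = {"John", "Mary"}

def resolve_pronouns(sentences):
    last_person = None  # the memory a pure FSM lacks
    resolved = []
    for sentence in sentences:
        words = sentence.split()
        for i, word in enumerate(words):
            if word in KNOWN_NAMES:
                last_person = word  # remember the most recent name
            elif word.lower() in ("he", "she") and last_person:
                words[i] = last_person  # resolve the pronoun from memory
        resolved.append(" ".join(words))
    return resolved

print(resolve_pronouns(["John climbed the hill", "then he used the telescope"]))
# -> ['John climbed the hill', 'then John used the telescope']
```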

However, once one begins to add memory, the rules become much more complex and the chances for error become exponentially greater. The Turing Test includes speed of response as part of the test. If it takes a system much longer than it would take a human to answer simple questions about (say) a short story, any reasonable judge would determine that it was a computer and not a person. For a program to pass the Turing Test, it would also need to be able to handle extended discourse, humor, metaphor, etc. To date no system that I’m aware of has even come close to passing the test. This gets back to Searle’s claim that AI researchers exaggerated the significance of their results.

Searle’s Definition of Strong AI is (Mostly) a Strawman

As a result of his argument Searle asserts that “the claims made by strong AI are false.” According to Searle the three claims made by proponents of strong (ie humanlike) AI are:

AI Claim 1: ‘‘that the programmed computer understands the stories.’’

AI Claim 2: ‘‘that the program in some sense explains human understanding.’’

AI Claim 3: Strong AI is about software, not hardware (i.e., it ignores the brain as a possibly unique site of consciousness).

However, these claims that Searle ascribes to strong AI are for the most part too strong, and not held by the vast majority of AI researchers then or now.

Claim 1, the idea that AI programs understand text, hinges on our definition of ‘understand’. I will discuss this idea at the end because I think it is the most important question.

Claim 2 can be supported from our twenty-first-century perspective, looking back on the impact of Schank’s research and of similar AI research of that time. Schank’s work was also relevant to early work in applied AI, as the following example shows.

In the 1980s I was a member of the AI group that was part of Accenture’s Technology Services Organization in Chicago. One of the first systems we developed was the Financial Statement Analyzer, a system that utilized a concept of Schank’s to analyze the yearly financial statements that corporations are required by the government to file. These statements were shared with the public, especially with shareholders, so corporations often spent significant effort on the presentation of the reports, with elaborate graphics. While the government required specific information in these reports, it left it open to each corporation to determine how to format the documents. Thus a conventional program that could easily parse fixed-format tables was not able to process these statements automatically. The Accenture AI group developed a system that could analyze the reports, find the relevant ‘frames’ (e.g., debt-to-equity ratio), and use rule-based heuristics to determine which reports would benefit from further analysis by an expert. (‘FSA: Applying AI Techniques to the Familiarization Phase of Financial Decision Making’, IEEE Expert, Chunka Mui and William McCarthy, Sept. 1987.)
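
Since the FSA itself is described above only in outline, the following is a hypothetical, much-simplified sketch of that style of processing: fill a ‘frame’ of financial figures from free-format text, then apply a rule-based heuristic to decide whether the report needs an expert’s attention. The slot names, threshold and regular expressions are illustrative, not taken from the actual system.

```python
import re

def extract_frame(report_text):
    """Pull labelled figures out of free-form text into a frame (a dict of slots)."""
    frame = {}
    for slot in ("total debt", "total equity", "net income"):
        match = re.search(rf"{slot}[^0-9]*([\d,.]+)", report_text, re.IGNORECASE)
        if match:
            frame[slot] = float(match.group(1).replace(",", ""))
    return frame

def needs_expert_review(frame):
    """Rule-based heuristic: flag the report if the debt-to-equity ratio looks high."""
    if "total debt" in frame and frame.get("total equity"):
        return frame["total debt"] / frame["total equity"] > 2.0
    return True   # missing slots: let a human look at it

report = "Total equity: 1,200,000. Total debt reported at 3,100,000. Net income 250,000."
frame = extract_frame(report)
print(frame, "-> review:", needs_expert_review(frame))
```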

In reading these reports, our system did in a sense some of the work that a human who understood the reports would have done. That is not to say that Schank (or anyone to date) has provided a complete theory of human language. Rather, the work of Schank and others led to other productive work on language and on other problems of cognitive science – that is, of ‘human understanding’.

Concerning Claim 3 – that strong AI is only about software, not hardware – Searle distinguishes between machines and programs, and says that strong AI is concerned only with programs, and that the nature of the machine running them (the computer, or the brain) is irrelevant: only the program matters. This is a strawman, in that Searle treats a simplifying assumption – that the mind can be studied as a system independently of the physical brain – as if it were a denial of the obvious truth that all the minds we know of are associated with brains.

Even in computer science it is only fairly recently that software could be packaged so that it is (mostly) independent of the hardware platform. At the time of the Chinese Room argument – 1980 – AI software was tightly coupled to the specific programming language and operating system the researchers were using. Only in recent decades, thanks to technologies such as the Java Virtual Machine and containers such as Docker, has software commonly been packaged in a way that is largely independent of the underlying hardware. This is the result of decades of engineering effort.

The brain, however, is not designed from scratch in the way environments such as the Java Virtual Machine are. The human brain is the result of one hack upon another, accumulating whatever small random mutations happened to increase reproductive success. No one who understands how such systems are engineered should expect nature to have produced the same clean, modular design. We can see this by examining the brain architecture for functions such as vision, which we understand much better than language. In vision, information is processed in the visual cortex, beginning with the primary visual cortex. There are modules running from low-level visual processing (e.g., edge and surface detectors) to high-level processing (e.g., face detectors in primates, or bug detectors in frogs). In a computer system, each level would have a small number of well-defined interfaces to the level above or below it (and few to levels more than one step away). In the brain, however, there are many significant collections of neurons that connect layers with other layers two or more levels away, as well as major connections to other areas of the brain. Clearly, then, no complete understanding of the visual system can be had without understanding the complex biology of the brain. At the same time, it is possible to study the visual system in the abstract, for instance by simply defining the various levels and the kind of information communicated between them. This vision model, originally developed by David Marr, which abstracts away from its implementation in a brain or a computer, led to great advances in both computer and human vision. Later research was able to (partially) map these abstract functions onto the topology of the brain.
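
As a rough illustration of the contrast (my own sketch, not a model of any real vision system), a cleanly engineered pipeline would look like the following, with each stage talking only to the stage directly below it – whereas in the brain, connections routinely skip levels and cross into other systems.

```python
# A cleanly layered pipeline: each stage consumes only the output of the stage below.
def detect_edges(image):         # low level: edges from raw pixels
    return f"edges({image})"

def detect_surfaces(edges):      # mid level: surfaces built only from edges
    return f"surfaces({edges})"

def detect_faces(surfaces):      # high level: faces built only from surfaces
    return f"faces({surfaces})"

def vision_pipeline(image):
    return detect_faces(detect_surfaces(detect_edges(image)))

print(vision_pipeline("raw_pixels"))   # faces(surfaces(edges(raw_pixels)))
```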

While researchers in cognitive science often talk about mental functions without describing the specific areas of the brain in which they occur, this is only a simplifying assumption. It is not a criticism of researchers that they make such assumptions, since science would be impossible without them. A simple example from physics is the equations for computing the force of gravity. Computing the force on an object of mass X dropped from height Y, or launched with force F, is trivial. However, when we do this we are never calculating the true force of gravity. That would require including the gravitational pull of the Moon, the other planets, even the stars. The math for calculating the gravitational interaction of even three bodies is notoriously difficult – there is no general closed-form solution – and the difficulty grows with each body added to the calculation. For most purposes, however, we can get by with the simplifying assumption that only the masses of the Earth and the object matter.
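
A quick calculation (my own numbers, using standard values) shows why the simplifying assumption is harmless for everyday purposes: the Moon’s gravitational pull on a one-kilogram object at the Earth’s surface is roughly three hundred thousand times weaker than the Earth’s.

```python
# Newton's law of universal gravitation: F = G * m1 * m2 / r^2
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24       # mass of the Earth, kg
R_EARTH = 6.371e6        # radius of the Earth, m
M_MOON = 7.35e22         # mass of the Moon, kg
D_MOON = 3.84e8          # mean Earth-Moon distance, m

def gravity(m1, m2, r):
    return G * m1 * m2 / r**2

f_earth = gravity(M_EARTH, 1.0, R_EARTH)   # ~9.8 N on a 1 kg object
f_moon = gravity(M_MOON, 1.0, D_MOON)      # ~3.3e-5 N on the same object
print(f"Earth: {f_earth:.2f} N, Moon: {f_moon:.2e} N, ratio ~{f_earth / f_moon:,.0f}:1")
```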

Searle’s Argument is Based on a Logical Fallacy

Searle’s argument can be summarized as:

1. Strong AI maintains that a symbol processing system that passed the Turing Test understands human language

2. The Chinese Room argument demonstrates that a symbol processing system could pass the Turing Test and still not understand human language

3. Thus, no symbol processing system that passes the Turing Test understands human language

This is an invalid argument. All Searle has proven is that it is possible that a symbol processing system could pass the Turing Test and not understand language. This is not a proof that every symbol processing system that passed the Turing Test does not understand natural language.
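
One way to make the gap explicit (my own formalization, not Searle’s) is to write P(x) for ‘x is a symbol-processing system that passes the Turing Test’ and U(x) for ‘x understands human language’. The Chinese Room establishes an existential claim, while the conclusion requires a universal one, and the first does not entail the second:

```latex
\[
  \exists x\,\bigl(P(x) \land \lnot U(x)\bigr)
  \;\not\Rightarrow\;
  \forall x\,\bigl(P(x) \rightarrow \lnot U(x)\bigr)
\]
```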

Searle might respond by saying that what strong AI claims is that any system that can pass the Turing Test understands human language. However, I’m not aware of anyone in AI who makes this claim. Researchers simply don’t bother to point out that not every system imaginable in a thought experiment that appears to understand language necessarily does understand it.

To see this, consider another thought experiment: Professor Nietzsche has constructed a quantum computer with memory that exceeds conventional memory in both capacity and speed by several orders of magnitude. He programs his computer with a simple table containing zettabytes (10²¹ bytes) of information. The first column in the table contains short stories in Chinese; the second column, questions in Chinese about those stories; and the third column contains the answers to those questions. The program takes Chinese stories and questions as input, looks up the pair in the first two columns of the table that best matches them (using simple pattern-matching algorithms), and then returns the third value in that row as the answer.
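
Purely to make the scheme concrete, here is a toy sketch of the lookup-table idea (my own illustration, with English placeholders standing in for Chinese text and a crude word-overlap score standing in for the pattern matching):

```python
TABLE = [
    # (story, question, answer) -- the three columns described above
    ("A man went to a restaurant and ordered a hamburger.",
     "What did the man order?",
     "A hamburger."),
    ("A woman lost her umbrella on the train.",
     "What did the woman lose?",
     "Her umbrella."),
]

def overlap(a, b):
    """Crude similarity: number of words the two strings share."""
    return len(set(a.lower().split()) & set(b.lower().split()))

def answer(story, question):
    # Find the stored (story, question) pair that best matches the input...
    best_row = max(TABLE, key=lambda row: overlap(row[0], story) + overlap(row[1], question))
    # ...and return the stored answer from the third column.
    return best_row[2]

print(answer("A man went to a restaurant and ordered a hamburger.",
             "What did the man order?"))   # -> "A hamburger."
```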

Such a system could perform much better than the Chinese Room ever could. Yet, no one in AI would consider this to be relevant to the myriad problems of natural language understanding, because such a system would still be restricted to a very narrow subset of natural language possibilities. Also, the idea of a system based on predefined questions and answers contradicts what Chomsky with good reason calls the creative aspect of language use.

The Definition of ‘Understanding’: Do Submarines Swim?

Returning to Claim 1: the final, and most important, idea is that AI systems in some sense understand natural language. This requires us to examine Turing’s original paper on his Test. The paper opens as follows:

“I propose to consider the question, ‘Can machines think?’ This should begin with definitions of the meaning of the terms ‘machine’ and ‘think.’ The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words ‘machine’ and ‘think’ are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, ‘Can machines think?’ is to be sought in a statistical survey such as a Gallup poll. But this is absurd. Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.” (‘Computing Machinery and Intelligence’, Mind, Volume LIX, Issue 236, October 1950)

Even setting aside all the issues I’ve already raised, this is the essence of the problem with the Chinese Room argument: Turing is explicitly not trying to answer the question ‘Can machines think?’ by appealing to the definitions we use in everyday language, as Searle is. Turing is trying to provide a scientific definition of thinking that abstracts away from the natural assumptions most people bring to such discourse. Thus the question the Chinese Room is really addressing is not the question Turing posed, which is: ‘What is a rational definition of “understanding” that could apply to both machines and people?’ Rather, Searle is arguing that our commonsense notion of ‘understanding’ can’t be applied to computers. But as Turing said, the way people normally use words like ‘understanding’ and ‘thinking’ is not relevant to a scientific theory of cognition. Chomsky agrees with Turing, and says that asking whether computers can think (in the commonsense, Searlean sense) is like asking ‘Can submarines swim?’ (Chomsky and His Critics, 2008, p.279). In English they don’t, but in Japanese they do: English does not use the word ‘swim’ to describe what a submarine does, whereas Japanese uses the same word for the movement of humans and of submarines through water. That tells us nothing about oceanography or ship design – just as thought experiments about ‘understanding’ in everyday language use tell us nothing useful about cognitive science.

© Michael DeBellis 2023

Michael DeBellis is a retired Deloitte Consulting partner who now does independent research in AI and related fields. He can be reached at [email protected] . His website is michaeldebellis.com .



Chinese Room Experiment – What Was the Core Finding?

AIM (AI Origins & Evolution), published February 8, 2019, by Ram Sagar; last updated February 2, 2024

The Turing Test is one of the first things that comes to mind when we hear about reasoning and consciousness in artificial intelligence. But apart from the Turing Test, there is one more thought experiment that shook the world of cognitive science not so long ago. Four decades ago, John Searle, an American philosopher, presented the Chinese Room problem, directed at AI researchers.

The Chinese Room conundrum argues that a computer cannot have a mind of its own and that attaining consciousness is an impossible task for such machines. They can be programmed to mimic the activities of a conscious human being, but they cannot have an understanding of what they are simulating on their own.

“A human mind has meaningful thoughts, feelings, and mental contents generally. Formal symbols by themselves can never be enough for mental contents, because the symbols, by definition, have no meaning,” said Searle when questioned about his argument.

What Is The Chinese Room Conundrum?

Searle explained the concept eloquently by drawing an analogy using Mandarin. The argument hinges on the thin line between actually having a mind and merely simulating one.

Searle’s thought experiment goes like this:

Suppose a closed room has a non-Chinese speaker with a list of Mandarin characters and an instruction book. This book explains in detail the rules according to which the strings (sequences) of characters may be formed — but without giving the meaning of the characters.

Suppose now that we pass to this man through a hole in the wall a sequence of Mandarin characters which he is to complete by following the rules he has learned. We may call the sequence passed to him from the outside a “question” and the completion an “answer.”

Now suppose this non-Chinese speaker masters this sequencing game so well that even a native Chinese speaker cannot tell from the answers that the man in the enclosed room does not understand Chinese.


But the fact remains that not only is he not Chinese, but he does not even understand Chinese, far less think in it.

Now, the argument goes on, a machine, even a Turing machine, is just like this man, in that it does nothing more than follow the rules given in an instruction book (the program). It does not understand the meaning of the questions given to it, nor of its own answers, and thus cannot be said to be thinking.

Making a case for Searle, if we accept that a book has no mind of its own, we cannot then endow a computer with intelligence and remain consistent.

The Following Questions

How can one verify that the man in the room is thinking in English and not in Chinese? Searle’s experiment builds on the assumption that this fictitious man does indeed think in English, and merely uses the extra information coming through the hole in the wall to master the Chinese sequences.

But Turing’s idea was that no such assumptions should be made and that comprehension or intelligence should be judged in an objective manner.

The whole point of Searle’s experiment is to make a non-Chinese man simulate a native Chinese speaker in such a way that there wouldn’t be any distinction between these two individuals.

If we ask the computer in our language if it understands us, it will say that it does, since it is imitating a clever student. This corresponds to talking to the man in the closed room in Chinese, and we cannot communicate with a computer in a way that would correspond to our talking to the man in English.

The texts and the set of instructions cannot be dissociated from the man in the experiment, because the instructions were in turn prepared by some native Chinese speaker. So when the Chinese expert on the other side of the wall verifies the answers, he is in fact communicating with another mind that thinks in Chinese. Likewise, when a computer responds to tricky questions from a human, we can conclude, following Searle, that we are really communicating with the programmer – the person who gave the computer its set of instructions to perform.

Any theory that says minds are computer programs is best understood as perhaps the last gasp of the dualist tradition, which attempts to deny the biological character of mental phenomena. Searle’s speculation, in spite of its inadequacies, tests the boundaries of AI and attempts to dispel the weak ideas of pseudo-intellectual futurists. In negating the capabilities of AI, Searle has in fact exposed blind spots in our pursuit of general AI and made that pursuit more robust.


Thought experiment: “Chinese room” argument

Explainer – Science + Technology, by The Ethics Centre, 10 March 2023

If a computer responds to questions in an intelligent way, does that mean it is genuinely intelligent?

Since its release to the public in November 2022, ChatGPT has taken the world by storm. Anyone can log in, ask a series of questions, and receive very detailed and reasonable responses.

Given the startling clarity of the responses, the fluidity of the language and the speed of the response, it is easy to assume that ChatGPT “understands” what it’s reporting back. The very language used by ChatGPT, and the way it types out each word individually, reinforces the feeling that we are “chatting” with another intelligent being.

But this raises the question of whether ChatGPT, or any other large language model (LLM) like it, is genuinely capable of “understanding” anything, at least in the way that humans do. This is where a thought experiment concocted in the 1980s becomes especially relevant today.

“The Chinese room”

Imagine you’re a monolingual native English speaker sitting in a small windowless room surrounded by filing cabinets with drawers filled with cards, each featuring one or more Chinese characters. You also have a book of detailed instructions written in English on how to manipulate those cards.

Given you’re a native English speaker with no understanding of Chinese, the only thing that will make sense to you will be the book of instructions.

Now imagine that someone outside the room slips a series of Chinese characters under the door. You look in the book and find instructions telling you what to do if you see that very series of characters. The instructions culminate by having you pick out another series of Chinese characters and slide them back under the door.

You have no idea what the characters mean but they make perfect sense to the native Chinese speaker on the outside. In fact, the series of characters they originally slid under the door formed a question and the characters you returned formed a perfectly reasonable response. To the native Chinese speaker outside, it looks, for all intents and purposes, like the person inside the room understands Chinese. Yet you have no such understanding.

This is the “Chinese room” thought experiment proposed by the philosopher John Searle in 1980 to challenge the idea that a computer that simply follows a program can have a genuine understanding of what it is saying. Because Searle was American, he chose Chinese for his thought experiment. But the experiment would equally apply to a monolingual Chinese speaker being given cards written in English or a Spanish speaker given cards written in Cherokee, and so on.

Functionalism and Strong AI

Philosophers have long debated what it means to have a mind that is capable of having mental states, like thoughts or feelings. One view that was particularly popular in the late 20th century was called “functionalism”.

Functionalism states that a mental state is not defined by how it’s produced, such as requiring that it must be the product of a brain in action. It is also not defined by what it feels like, such as requiring that pain have a particular unpleasant sensation. Instead, functionalism says that a mental state is defined by what it does .

This means that if something produces the same aversive response that pain does in us, even if it is done by a computer rather than a brain, then it is just as much a mental state as it is when a human experiences pain.

Functionalism is related to a view that Searle called “Strong AI”. This view says that if we produce a computer that behaves and responds to stimuli in exactly the same way that a human would, then we should consider that computer to have genuine mental states. “Weak AI”, on the other hand, simply claims that all such a computer is doing is simulating mental states.

Searle offered the Chinese room thought experiment to show that being able to answer a question intelligently is not sufficient to prove Strong AI. It could be that the computer is functionally proficient in speaking Chinese without actually understanding Chinese.

ChatGPT room

While the Chinese room remained a much-debated thought experiment in philosophy for over 40 years, today we can all see the experiment made real whenever we log into ChatGPT. Large language models like ChatGPT are the Chinese room argument made real. They are incredibly sophisticated versions of the filing cabinets (the corpus of text on which they are trained) and of the instruction book (the probabilities used to decide which character or word to display next).
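
For readers who want to see the bare mechanism, here is a deliberately crude sketch (my own, nothing like a production LLM) of the idea in the previous paragraph: given the words so far, pick the next word from a stored table of probabilities, with no representation anywhere of what the words mean.

```python
import random

# Probabilities of the next word given the last two words (a tiny, made-up table).
NEXT_WORD_PROBS = {
    ("the", "cat"): {"sat": 0.6, "slept": 0.3, "ran": 0.1},
    ("cat", "sat"): {"on": 0.9, "quietly": 0.1},
    ("sat", "on"):  {"the": 0.8, "a": 0.2},
}

def next_word(context):
    """Sample the next word from the stored probabilities for the last two words."""
    probs = NEXT_WORD_PROBS.get(tuple(context[-2:]), {"<end>": 1.0})
    words, weights = zip(*probs.items())
    return random.choices(words, weights=weights)[0]

text = ["the", "cat"]
for _ in range(3):
    text.append(next_word(text))
print(" ".join(text))   # e.g. "the cat sat on the"
```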

So even if we feel that ChatGPT – or a future more capable LLM – understands what it’s saying, if we believe that the person in the Chinese room doesn’t understand Chinese, and that LLMs operate in much the same way as the Chinese room, then we must conclude that it doesn’t really understand what it’s saying.

This observation has relevance for ethical considerations as well. If we believe that genuine ethical action requires the actor to have certain mental states, like intentions or beliefs, or that ethics requires the individual to possess certain virtues, like integrity or honesty – then we might conclude that an LLM is incapable of being genuinely ethical if it lacks these things.

An LLM might still be able to express ethical statements and follow prescribed ethical guidelines imposed by its creators – as has been the case with the creators of ChatGPT limiting its responses around sensitive topics such as racism, violence and self-harm – but even if it looks like it has its own ethical beliefs and convictions, that could be an illusion similar to the Chinese room.


By The Ethics Centre

The Ethics Centre is a not-for-profit organisation developing innovative programs, services and experiences, designed to bring ethics to the centre of professional and personal life.
