The Chinese Room Argument (CRA) (15) has bothered logicians and computer scientists for almost 30 years now (12). The argument was given by the prominent philosopher John Searle in 1980. With the CRA, Searle claims to have proved that strong AI cannot exist: computers can only simulate things (which he called "weak AI") and can never show actual intelligence, i.e. learning. There has already been an enormous amount of debate on the CRA by many prominent philosophers of mind and AI scientists. The general consensus in the literature is that the CRA is wrong, but with little agreement on exactly how and why it is wrong (4). In this course project I attempt to give some new logical arguments against the famous argument and try to prove it wrong.

Keywords: Chinese room argument, Weak AI, Strong AI, Turing test, Logic, Mind, Thought.


The CRA has stirred an enormous amount of debate and controversy among AI scientists and engineers, philosophers of mind, and cognitive scientists. Gomila describes the literature on the CRA as nearly infinite, and the editor at the time (Stevan Harnad) has since described it as BBS's most influential target article.

The Chinese room argument is meant to show that the purely formal, abstract, or syntactical processes of an implemented computer program cannot by themselves be sufficient to guarantee the presence of mental content or semantic content of the sort that is essential to human cognition (14). According to Searle, the Chinese room argument rests on two fundamental logical truths:
1. Syntax is not semantics: The implemented syntactical or formal program of a computer is not constitutive of, nor otherwise sufficient to guarantee, the presence of semantic content.
2. Simulation is not duplication: We can simulate the cognitive processes of the human mind, but that does not mean we have achieved the creation of mental processes. We can simulate rainstorms, digestion, or anything else we can describe precisely, but a simulation of digestion on a computer cannot thereby actually digest pizza. So it is ridiculous to think that a system that simulated consciousness and other mental processes thereby had mental processes.

Plan of the paper: First I introduce the Chinese room argument and other background material. Then I survey others' attempts to disprove the CRA, along with Searle's reply to each counterargument. Then I introduce my own idea in the "Fallacy of CRA" section, which is based on a logical argument. After that there is an informal Discussion section on the CRA. Finally, a conclusion of the whole idea is drawn. References and acknowledgments are given last.
Turing Test: Consider two closed rooms and a judge who does not know what is inside them. Put a computer in one room and a person in the other. If the judge chats with both the person and the computer (running a natural-language-processing program) and is not able to distinguish which one is the computer and which is the real person, then the computer program is said to pass the Turing test. The idea behind the test is simple: since we do not know exactly how the mind/brain works or how people think, this is an easy way to show that a human-made machine running a human-made program is able to talk just like us. Because voice recognition would pose its own problems, only text chatting is allowed: we are checking only for thought, not for all human capabilities.
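The protocol above can be sketched in a few lines. Everything here is a toy illustration (the contestants, questions, and judge are invented): the point is only that the judge sees two anonymous transcripts and must name the machine, and a machine "passes" a session when the judge fails to pick it out.

```python
import random

def turing_test(judge_questions, human_reply, machine_reply, judge_guess):
    """Run one session: the judge questions two hidden contestants,
    then names the one it believes is the machine."""
    # Hide which contestant is which behind randomly assigned labels.
    contestants = {"A": human_reply, "B": machine_reply}
    if random.random() < 0.5:
        contestants = {"A": machine_reply, "B": human_reply}
    transcripts = {label: [(q, reply(q)) for q in judge_questions]
                   for label, reply in contestants.items()}
    guess = judge_guess(transcripts)                # judge names the machine
    return contestants[guess] is not machine_reply  # True: machine fooled the judge

# Toy contestants: the machine's text is indistinguishable from the
# human's, so any judge is reduced to guessing at chance level.
human = lambda q: "I don't know."
machine = lambda q: "I don't know."
random_judge = lambda transcripts: random.choice(["A", "B"])

sessions = [turing_test(["What is a joke?"], human, machine, random_judge)
            for _ in range(1000)]
pass_rate = sum(sessions) / len(sessions)   # near 0.5: judge at chance
```

A pass rate near one half means the judge is doing no better than a coin flip, which is exactly the success criterion the test sets.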
Strong AI: If a machine passes the Turing test then we say that it has intelligence and is able to think (has a mind).
Weak AI: According to Searle, computers can only simulate things. A computer can simulate thinking, which can be used to fool people, but it can never do actual thinking.
Chinese Room Experiment: The argument is strong in the sense that, even though we do not know how the brain works, what we mean by intelligence, what thinking is, or even how children learn language, it tries to disprove the existence of strong AI. The argument is simple and based on pure symbol manipulation (7). To visualize the experiment, consider a native English speaker who does not know Chinese at all; he cannot even tell whether a given symbol is Chinese. He can make sense of Chinese symbols only by their shape. But he has a good knowledge of English and can read, write, and understand it efficiently. Call this person P. Now consider a closed room containing a Chinese database together with English instructions for matching its symbols with other sets of Chinese symbols. The room has a slot at each of two ends. Person P is inside the room, and questionnaires in Chinese are passed in to him (call these the input of the program) along with instructions in English (call these instructions the computer program). P matches the symbols on the questionnaire, with the help of the English instructions, against the Chinese database and writes the corresponding Chinese symbols on another paper (call it the output), which he passes out through the other slot. The Chinese speakers who pass in the questionnaires (input) receive this output and conclude that the person inside the room is good at Chinese, because he answers all their queries in Chinese. So the room is able to pass the Turing test, and yet the person inside, who does all the manipulation, does not know a single word of Chinese.
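The room's mechanics reduce to a lookup: input shape in, paired output shape out. The sketch below is a toy rule book (the symbol pairs are invented for illustration, not real instruction sets); nothing in it represents meaning, which is precisely Searle's point.

```python
# A toy version of the room's English rule book: pure shape-to-shape
# pairings. P follows these rules without understanding either column.
RULE_BOOK = {
    "你好吗": "我很好",   # hypothetical rule: this input shape -> this output shape
    "你是谁": "我是人",
}

def chinese_room(questionnaire):
    """Person P: look each input symbol string up in the rule book and
    copy out the paired symbols, understanding none of them."""
    return [RULE_BOOK.get(symbols, "不知道") for symbols in questionnaire]

answers = chinese_room(["你好吗", "你是谁"])
```

From the outside the answers look competent; inside, only string matching has happened.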
Chinese Room and Functionalism: A function is defined by what something or somebody does, e.g. a calculator operates on numbers given as input; our digestive system digests food.
Desires and beliefs can also be given functional form, e.g. "I am hungry" or "I desire food". Here, if I get food (input) I will be content (output): no longer hungry, no longer desiring food.
Functionalism thus characterizes things by their inputs and outputs, and lets us compare two things: if both act similarly, they are the same; if on the same input both give the same result, they are the same.
We can say the same of the mind. So if a computer can act like a mind, i.e. it can think, have thoughts, talk, and decide, and in similar situations gives similar results, just as a person taking part in a linguistic exchange would, then we say that the computer is as intelligent as a person: it has a mind.
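The functionalist criterion can be stated as a one-line test: two systems count as "the same" when they agree on every probed input. Both implementations below are invented stand-ins for illustration.

```python
def person_adds(a, b):
    return a + b                  # a person doing mental arithmetic

def calculator_adds(a, b):
    return sum((a, b))            # a machine doing the same job differently inside

def functionally_equivalent(f, g, inputs):
    """Functionalism's test: identical behaviour over the probed inputs,
    regardless of what happens internally."""
    return all(f(*x) == g(*x) for x in inputs)

same = functionally_equivalent(person_adds, calculator_adds,
                               [(1, 2), (10, -3), (0, 0)])
```

The internals differ, but functionalism only inspects the input-output profile, so the two count as the same; Searle's room is built to exploit exactly this blindness to internals.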
Right now there are multiple chat bots on the internet that try to talk like humans, e.g. A.L.I.C.E. (1), Ella (2), Jabberwock, Carpenter's George, and Elbot (13).
In 1991 Dr. Hugh Loebner started the annual Loebner Prize competition, awarded to the author of the computer program that performs best on a Turing test (8).
Searle's Chinese room passes the Turing test, so it acts like an intelligent person with a brain/mind. But in fact Searle has only carried out symbol manipulation, with no understanding, yet he passes the Turing test in Chinese. So, Searle claims, passing the Turing test does not ensure understanding. In other words, although Searle's Chinese room functions like a mind, he knows (and we, in an analogous foreign-language room experiment, would know) that it is not a mind, and therefore functionalism is wrong.

Past Works

There have already been many attempts by computer scientists and philosophers to counter the CRA, but none have convinced Searle of the fallacy of the Chinese room. I list some famous counterarguments, along with Searle's reply to each, here:

The Systems Reply: The systems reply says that the person inside the room may not be able to learn Chinese, but he is just part of the closed room, and since the room as a whole interacts as a unit with the outside world, the room as a whole must be taken as the subject of the argument, not the person alone. The systems reply claims that the room as a whole knows Chinese. Searle counters this argument as follows: consider a person who memorizes everything inside the room. Assume he has a very good memory and remembers each page of the Chinese database, even though it makes no sense to him, and also remembers all the English instructions. Now remove the room from the picture and take this person as the subject: he is still able to write answers to the questionnaires, as before, by doing all the lookup and searching inside his head. But he still does not know Chinese. So the system does not understand Chinese either, since there is nothing in the system that is not in him.

The Robot Reply: In place of the Chinese room, consider a robot with sensors, motors, etc., just like a human being. It can then act and show intelligence like a human being. Proponents of this reply think that Searle makes an error in the CRA by viewing strong AI as question-answering and symbol manipulation only: the disembodied room does not avail itself of the wide spectrum of other resources that we humans have, which are responsible for cognition and are our link to the world (10). In reply, Searle believes that the addition of sensors and the like changes nothing: the program inside the robot continues to do the same symbol manipulation and hence understands no Chinese. There is a good discussion of this in an unpublished report by Dr. Louis J. Cutrona, Jr. (2005) (3): even if we put a zombie1 in the Chinese room, exactly similar to its human counterpart in all respects but thinking, it will still not learn Chinese.

The Virtual Mind Reply: (11) This states that a running system like the Chinese room creates an entity which is distinct from the person inside the room and from the room itself. This virtual entity is different from the physical entity, and in the case of the Chinese room it is the virtual entity that understands Chinese. So the claim that the person inside the Chinese room does not understand Chinese has nothing to do with the existence of strong AI.

The Brain Simulator Reply: (5)

Consider a computer quite different from the computers we normally encounter or can currently build: a computer very similar to the brain, carrying out the same procedures the brain does, working on the principle of neuron firing. This computer performs exactly the same firing sequence as a native Chinese speaker does when he understands some verse of Chinese. Such a computer, brain-like in every relevant respect, would then understand Chinese as a native speaker does. Searle's reply to this argument is simple: simulation is not doing things for real. Suppose we replace the Chinese room's contents with pipes, valves, and tubes, and the person inside the room acts as the executor of all the commands, which results in a brain simulation. Even in this situation the person will not understand Chinese, although he is able to simulate the brain.

The Chinese room: a logical approach

Let us try to represent our argument in symbolic logic. Let T be a theory, E a thought experiment, and S some statement which everyone knows to be incorrect. In propositional logic we can write the following, to which we will apply modus tollens:

(T ∧ E) → S


In the Chinese room argument, T is strong AI, the theory proposed by Turing, and E is the CRA, the thought experiment Searle proposed. The premise is that T and E together imply S, namely that Searle understands Chinese.
So if strong AI is correct, and assuming that the Chinese room argument is sound/valid, then it follows that Searle understands Chinese. But, as everyone knows, Searle does not understand Chinese; or at least he claims that he does not understand Chinese at all, even after successful completion of the experiment.
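The derivation can be spelled out step by step (the final split of ¬(T ∧ E) into ¬T, granting the soundness of the experiment E, is my explicit rendering of the step the argument needs):

```latex
\begin{align*}
&\text{P1: } (T \land E) \rightarrow S
  && \text{strong AI plus the CRA setup imply Searle understands Chinese}\\
&\text{P2: } \neg S
  && \text{Searle does not understand Chinese}\\
&\text{C1: } \neg (T \land E)
  && \text{modus tollens from P1, P2}\\
&\text{C2: } \neg T \lor \neg E
  && \text{De Morgan}\\
&\text{C3: } \neg T
  && \text{granting that the experiment } E \text{ is sound}
\end{align*}
```

Note that modus tollens alone only yields C1: Searle's conclusion ¬T additionally requires that the thought experiment itself be beyond reproach.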

That is, ¬S. Hence ¬(T ∧ E); granting the soundness of E, we get ¬T, and strong AI is false. Some modal operations are discussed in footnote 2. According to Sorensen (1992, Chapter 6) there are two typical structures for the class of thought experiments that he calls "alethic refuters". By "refuters" he means destructive thought experiments that contradict (or refute) some theory; by "alethic" he refers to the modalities necessary and possible, as in S5. The two kinds of structure are necessity refuters and possibility refuters. The necessity refuter structure is (Sorensen, 1992, pp. 135-6):

1. □S : Modal source statement

2. □S → I : Modal extractor

3. (I ∧ C) □→ W : Counterfactual 3

4. ¬◇W : Absurdity

5. ◇C : Content possibility

Our representation differs from Sorensen's: in our case the symbols have different meanings.

According to Sorensen, the subjunctive conditional is necessarily counterfactual. There is a lot of material on this claim, and many do not agree about the counterfactual reading; but that discussion would take us away from our original goal of disproving the CRA, so we will not pursue Sorensen's arguments here.

Sorensen asserts it to be obvious that statements 1 to 5 are mutually contradictory.
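A sketch of why the five statements clash (an informal derivation of my own, reading the counterfactual in 3 as a strict conditional and taking the extractor in 2 to hold necessarily; Sorensen's own argument is subtler):

```latex
\begin{align*}
&\Box S && \text{premise 1}\\
&\Box(\Box S \rightarrow I) && \text{premise 2, assumed necessary}\\
&\Box\Box S && \text{from premise 1, S4/S5 axiom}\\
&\Box I && \text{from the two lines above, axiom K}\\
&\Diamond C && \text{premise 5}\\
&\Diamond (I \land C) && \text{from } \Box I \text{ and } \Diamond C\\
&\Box\big((I \land C) \rightarrow W\big) && \text{premise 3, strict reading}\\
&\Diamond W && \text{from the two lines above}\\
&\bot && \text{contradiction with premise 4, } \neg\Diamond W
\end{align*}
```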

The CRA as a necessity refuter:
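One way the CRA might fill Sorensen's five slots is sketched below. The wording of each slot is my own reading, not Searle's, and other mappings are possible:

```latex
\begin{align*}
S &: \text{strong AI — running the right program suffices for understanding}\\
I &: \text{some implemented program converses in Chinese indistinguishably from a native}\\
C &: \text{Searle himself executes that program inside the room}\\
W &: \text{Searle understands Chinese while, by his own report, he does not}\\[4pt]
&\text{1. } \Box S \qquad
 \text{2. } \Box S \rightarrow I \qquad
 \text{3. } (I \land C)\ \Box\!\rightarrow W \qquad
 \text{4. } \neg\Diamond W \qquad
 \text{5. } \Diamond C
\end{align*}
```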



In this article I have tried to attack Searle's long-standing Chinese room argument in a new way. There is already a common belief, shared by many logicians, that the CRA is not correct, and there are many arguments against it. The major problem I faced while writing this article is that there is no clear-cut logical definition or description of either the Turing test given by Alan Turing or the Chinese room argument given by Searle. I had to use Sorensen's idea of the destructive thought experiment to devise my proof. I could also present my ideas without using Sorensen's framework, but that again would not be logic, and I do not wish to enter the mind/brain debate; I am not qualified to do so. My idea is simple and clear. It is based on the Turing test. My disagreement with Searle's Chinese room is at the point where Searle claims that his Chinese room is able to pass the Turing test. I will say that ************************ so the further claim that strong AI is false is absurd and does not make sense.


If we regard the mind as hardware, then there is also some kind of program running in the mind which does all the work. Hardware has never been assumed to be a limit in accounts of thought, and there are theories which assume that the mind, too, is hardware.

As medical science progresses, it has become possible to replace body parts with artificial ones: pacemakers, corneas, artificial limbs, etc. If progress continues at the same rate, the day is not far when we will be able to do gene replacement to elongate our lifespan. Only a few genes may need to be replaced to extend our lifespan twofold or threefold: after all, only a small genetic difference between chimpanzees and humans makes such a difference in everything, including lifespan, which is under 30 years for chimps. There is also the possibility of replacing limbs as they grow old. Since we do not understand the cause of natural death, we cannot say with confidence whether replacing worn limbs would elongate life. Replacement of the mind seems harder, since we would need to copy memory too; otherwise the entity would not remain the same. But if we were able to do such replacement, there would be no difference between machine and human, and so none between human intelligence and machine AI.

There is one more argument, called the Chinese Gym (6). It states that brain function cannot be simulated by the Chinese room or by a single computer: for a computer even to come near the brain, it must have many processors working in parallel. The mind is like a Chinese gym in which each person is doing some piece of work; the brain has billions of neurons, each doing its own signal processing/manipulation and passing the result on to others.
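The gym picture can be sketched as many tiny units, each doing only local signal processing and handing its result onward. The network below is a toy illustration with invented weights and wiring, not a brain model:

```python
# Toy "Chinese gym": each unit (a person in the gym, a neuron in the
# brain) only sums its weighted inputs and fires or not, knowing
# nothing of the overall computation it participates in.
def unit(weights, inputs, threshold=1.0):
    """One worker: local weighted sum, then fire (1) or stay silent (0)."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

def layer(weight_rows, inputs):
    """A row of workers processing the same incoming signals in parallel."""
    return [unit(w, inputs) for w in weight_rows]

# Two layers of invented weights wired in sequence.
hidden = layer([[0.6, 0.6], [1.2, -0.4]], [1, 1])
output = layer([[0.5, 0.5]], hidden)
```

No single worker "understands" anything; the gym argument is about whether the parallel whole does, which is the systems reply all over again at a larger scale.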


References

[1] Artificial Linguistic Internet Computer Entity (A.L.I.C.E.). http://alicebot.org/about.html.
[2] Kevin Copple. Ella.
[3] Dr. Louis J. Cutrona, Jr. Zombies in Searle's Chinese room: Putting the Turing test to bed. Unpublished report, 2005.
[4] I. Robert Damper. The logic of Searle's Chinese room argument. Minds and Machines, 16:163-183, May 2006.
[5] Francisco Calvo and Eliana Colunga. The statistical brain: Reply to Marcus' The Algebraic Mind.
[6] Eamon Fulcher. Cognitive Psychology.
[7] S. Harnad. The symbol grounding problem. Physica D, 42:335-346, 1990.
[8] Loebner prize.
[9] Modal logic. The Stanford Encyclopedia of Philosophy. http://plato.stanford.edu/entries/logic-modal/.
[10] J. Christopher Maloney. The right stuff. Synthese, 70(3):349-372, March 1987.
[11] Ayse Pinar Saygin, Ilyas Cicekli, and Varol Akman. Turing test: 50 years later. Minds and Machines, 10(4):463-518, 2000.
[12] John Preston and Mark Bishop. Views into the Chinese Room. Oxford University Press, 2002.
[13] Fred Roberts. Elbot. http://www.elbot.com/.
[14] John R. Searle. Twenty-one years in the Chinese room.
[15] John R. Searle. Minds, brains, and programs. The Behavioral and Brain Sciences, 3:417-457, 1980.