Monday, December 5, 2011

Can machines think?


A critical approach to Searle's Chinese room argument

Can machines think, and if so, to what extent? There are two views about artificial intelligence: one is called weak artificial intelligence (AI) and the other strong AI. Weak AI claims that computers can give us significant information about human cognition, since we can test our theories on them effectively, while strong AI claims that computers are not merely machines to run tests on: if they are programmed appropriately, they can have all the cognitive states human beings have. It might be useful to give an example of artificial intelligence and what strong AI says about it. Suppose you have a story about a man who goes to a restaurant and orders a hamburger. The hamburger comes burned and crisp. The man gets angry and leaves the restaurant without paying for it. Did he eat the hamburger? If you ask a human being this question, he would say that the man did not eat the hamburger. Schank's machines are programmed to answer this question the same way human beings do. Beyond this, strong AI claims that what the machine is doing in the above process is not merely a simulation of human behavior, but
1. "that the machine can be literally said to understand the story and provide the answers to questions" and
2. "that what the machine and its program do explains the human ability to understand the story and answer questions about it." [Heil, p. 236]
Searle does not disagree with weak AI but with strong AI, and he does not think that Schank's machines do anything relevant to support these claims. He constructs the Chinese room example to show that there are conscious states human beings can have but machines cannot. In this paper, I will first examine his Chinese room argument. Then I will present my own objection to it and argue that, contrary to what Searle thinks, human beings might not be so different from computers.
Assume that you know no Chinese and you are in a room with a story in Chinese and a rule book in English that tells you how to correlate the characters in the story with any other set of characters given. If you are given a question in Chinese about the story, you can consult the rule book and work out how to respond with Chinese characters without understanding a single one of them. In this case, what you are doing is no different from what a computer does. If the Chinese room set-up is translated into the computer's terms, the rule book is the program, the questions and the story are the input, and your answers are the output formed from the input according to the rules (the program). Moreover, assume that the rule book is so well written that your answers are indistinguishable from a native Chinese speaker's. Also assume you are in another room where you are given an English text and questions in English. There you respond to the questions by understanding the story and the questions. So your answers in Chinese and in English will be equally good, although in one case you were not interpreting symbols but decomposing and reconstructing sentences according to the directions of the rule book, while in the other you could visualize the story and give the questioners the information they asked for from that visualization.
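To make the analogy concrete, here is a minimal sketch of the rule book as a program. It is purely illustrative and assumes a toy lookup-table design of my own; the Chinese strings and "rules" are invented stand-ins, not anything from Searle's paper. The point is only that producing fluent answers requires symbol matching, never interpretation:

```python
# Illustrative sketch: the Chinese room operator as a lookup program.
# The operator matches an incoming symbol string against the rule book
# and copies out the associated response, grasping neither side.

RULE_BOOK = {
    "他吃了汉堡吗?": "没有",  # invented rule: this question string maps to this answer
    "他付钱了吗?": "没有",
}

def chinese_room(question: str) -> str:
    """Return the scripted answer for a question, by pure symbol matching."""
    return RULE_BOOK.get(question, "不知道")  # default symbol string: "don't know"

print(chinese_room("他吃了汉堡吗?"))  # fluent output, zero understanding
```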
Against the first claim of strong AI, Searle compares being in the Chinese room with being in the English room (the room where the story, questions, and answers are in English, handled by a native English speaker). He says that you understand nothing in the Chinese room while you do in the English room; so there is no understanding for computers, while there is for us under the circumstances in which human understanding occurs.
Against the second claim of strong AI, Searle questions whether the computer is sufficient to explain human understanding, since, as the discussion of the first claim showed, the computer exhibits no sign of understanding in the first place.
There are several replies to Searle's argument. I will concentrate on the one that says: "When I understand a story in English, what I am doing is exactly the same - or perhaps more of the same - as what I was doing in manipulating the Chinese symbols. It is simply more formal symbol manipulation that distinguishes the case in English, where I do understand, from the case in Chinese, where I don't." [Heil, p. 238]
Searle's rejoinder to this reply is that even if all of the program's formal machinery were put into his head, he still could not understand Chinese; and if understanding were a matter of computation, why can he understand in one case and not in the other, when he has everything needed for computation in both?
My objection to Searle concerns the feeling of understanding that comes with visualization and makes him think he understands a story just so long as he can visualize it. I will show that visualization is not essential to understanding. As a consequence of this approach, I will argue that the semantics of a language is useful only for pointing to specific objects in life and their states, and that this too is not essential to understanding. To support my claims I will give some examples from mathematics: how we learn it, and what kinds of generalizations it contains that totally "abstractify" (a made-up word meaning "make abstract," or make even more abstract what is already abstract) mathematical concepts, so that they exist not independently but only through their relations with other mathematical objects. An object is therefore the sum of the rules defined on it and nothing else. This means that in knowing the rule book, we know everything about the object.
Let us first focus on the visualization problem and show that one does not need visualization to understand something. Assume that I describe something to you as white, and that it has no properties other than those all white objects have in common. I cannot visualize this object, since knowing its color tells me nothing about its shape. As soon as I imagine something for it, I am no longer imagining that object, since my object has no property of the imagined one except the color. But I can still talk about this object and even tell a story about it.

Now let us replace whiteness with a more abstract property; say the object is xxx. I can still tell a story about it. Let the story be the following: an xxx object interacted with another object, and that second object interacted with a third. This story is quite abstract, since I have no idea what being xxx means or what kind of interaction is involved. Therefore I cannot visualize it; even if I seem to be doing so, I am merely assigning some object I know to the object, some property I know to xxx, and some action I know to the interaction. Notice that this is not a visualization of my abstract story but a visualization of an illustration of it. Now let us also add a rule telling you that if an xxx object interacts with another object, the other object becomes xxx too. Without knowing anything about these objects or their xxx-ness, I can conclude that the last object appearing in the story is xxx as well. Would Searle agree that we have an understanding of this story, even though any visualization would be no more than an illustration or simplification of the abstract story given? Moreover, I can block you from visualizing at all by stipulating that the object, property, and interaction are not whatever you are imagining or will imagine.
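The inference in this story is mechanical enough to write down as a program. Here is a minimal sketch, with names and an encoding of my own invention, in which "xxx" and "interacts" stay completely uninterpreted and the conclusion still falls out of the rule alone:

```python
# Illustrative sketch: the abstract "xxx" story as pure rule application.
# 'xxx' and 'interacts' are deliberately left uninterpreted, as in the essay.

def apply_rule(is_xxx, interactions):
    """Rule: if an xxx object interacts with another, the other becomes xxx."""
    marked = set(is_xxx)
    for a, b in interactions:  # interactions are applied in story order
        if a in marked:
            marked.add(b)
    return marked

# The story: a is xxx; a interacts with b; then b interacts with c.
story = [("a", "b"), ("b", "c")]
print(apply_rule({"a"}, story))  # contains 'c': the last object is xxx, by rule alone
```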
I claim that this story makes perfectly good sense, at least to me. That may be because I work in mathematics and do not need real-life objects, events, or states to refer to in order to understand some phenomenon. Here comes the crucial question: is my understanding of such an abstract story different from a computer's understanding of it? I do not think so. For a story of the above type, which does not allow visualization, the difference between a computer's understanding and mine is the feeling of understanding, which I have and the computer does not. That feeling is very likely just the confidence that comes from having internalized the rule book (to refer back to the Chinese room). Moreover, this feeling may have nothing to do with understanding itself, since many of us must at some point have experienced it while misunderstanding, or not understanding at all. The feeling has to do with the completion of a mental process, independent of how successful that process was. Let us examine this feeling a bit more.
Hypothetically, assume I am in the Chinese room, knowing not a single Chinese character and looking desperately at the story given. I might be given a question in Chinese and spend hours searching the rule book for the rule that applies to it, and from there form the right answer. I will still feel unconfident, although I am able to respond to the question. It is the same feeling I have when I start studying a completely new concept in mathematics. When I have to solve a problem about this unfamiliar concept, I try to find the theorems that give me results, which lead to other results, one of which will be the answer to my question. I do not feel as if I understand anything at the beginning, although I seem to be solving the problem. However, as I spend more time reading through the pages of the book, I start to think I am learning or understanding it. What new thing am I learning by reading repeatedly? Or, what do I come to understand about this highly abstract concept? Am I starting to visualize it? No, because such concepts cannot be visualized. I am simply internalizing the theorems over time, and once I am given a question I know which theorems to use because of the familiarity I have gained with the subject; this brings the confidence that results in the feeling of understanding.
It is the same with the Chinese room: if the person stays long enough to internalize the rule book, the discomfort that comes from not knowing what the symbols refer to in real life will go away. After a while she will have some feeling of understanding, as the story is decoded in light of the rules and reduced to something like the following: a is interacting with b, c is in another state, but when all these come together d gains property p, and so on. This is what mathematics is about: there are objects and their relations with other objects (the story, the input) and rules (the rule book, the program) that tell you what happens when two or more things satisfy some assumptions together (your response to the question asked, the output). So it is in my first example: an object is xxx, it interacts with another object, this other object interacts with a third, and the rule book says that each object with property xxx makes any object it interacts with xxx too. Do I have to know what this property, object, or interaction is? Not really: there is a branch of mathematics that grew out of abstract algebra, called category theory, which deals precisely with stories of this kind, where objects and interactions are defined in terms of rules and that is all you have of them.

Maybe in real life it is helpful to know what we refer to by "an object." For example, if I want to buy some bread rather than an elephant, it is good to have a vocabulary that distinguishes them. I might need different words for each object in order to point to them, but no one can claim there is no understanding in the case where I leave the objects, interactions, and properties undefined. In saying "an object is xxx, it interacts with another object, and this other object interacts with another," I might just as well be saying that the objects are numbers, the interaction is multiplication, and xxx is evenness, which would mean: if a number is even and I multiply it by another number, the result is even, and if I multiply this result by a third number, that product is even too. Or I might think of the objects as clothes, the interaction as being washed in the same machine, and xxx as brownness: if I wash a brown garment with another in the same washer, the second garment comes out brown, and if I afterwards wash this second garment with a third, that last one comes out of the washer brown as well. Different semantics may make a practical difference, but they do not disturb the form or our understanding; semantics just gives an illustration of the abstract story.

We are able to understand a sentence of the form "an object is xxx, it interacts with another object, and this other object interacts with another" without visualizing anything. Understanding is possible without visualization, and I do not think we understand more than a computer does when the story is as above and we cannot exploit any knowledge of semantics. In this case one cannot point to a mental state or understanding that human beings have but the computer lacks, since both have the same data, and we human beings, with our extra visualization skills, cannot extend that data any further. Even so, one might say that we have more understanding than the computer. I would say the difference is not understanding but the feeling of it, which we have and computers very likely do not, since they lack the physiology in which emotions are seated.
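The two readings above really are the same computation. Here is the propagation sketch from earlier, run under both semantics (again with invented labels; the encodings are mine, purely for illustration). The inference is identical in form; only the names change:

```python
# Illustrative only: one abstract rule, two semantics.

def apply_rule(is_xxx, interactions):
    """If an xxx object interacts with another, the other becomes xxx."""
    marked = set(is_xxx)
    for a, b in interactions:
        if a in marked:
            marked.add(b)
    return marked

# Semantics 1: objects are numbers, interaction is multiplication, xxx is evenness.
print(apply_rule({"4"}, [("4", "4*7"), ("4*7", "4*7*3")]))
# all three "numbers" are marked: an even factor makes every later product even

# Semantics 2: objects are clothes, interaction is sharing a washer, xxx is brownness.
print(apply_rule({"brown shirt"}, [("brown shirt", "towel"), ("towel", "socks")]))
# all three garments are marked: brownness spreads wash by wash
```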
This feeling is proportional to your confidence; that is why I do not have the feeling of understanding when dealing with a new mathematical concept and the rules that apply to it. The discomfort diminishes as I internalize the concept, that is, as I spend more time on it and assign more brain cells to it. By repeating the definition of an abstract mathematical notion I gain no new information, but I gain confidence, and that feels like understanding. That is why I insist that understanding is only a feeling, whose strength is proportional to the number of brain cells devoted to it, which determines your confidence.
To conclude: Searle's Chinese room argument is meant to show that human beings have more understanding than machines because they know the semantics of the input; however, there is understanding without knowing the semantics. We saw that if we exclude semantics, computers and human beings have equal data. There is, admittedly, a feeling of understanding that accompanies human mental processes, and it comes with confidence. So there can still be some understanding (with its feeling) in the Chinese room for a person who knows no Chinese. Since this feeling has nothing to do with understanding itself but is only a by-product of confidence, and since understanding can happen without visualization, nothing seems left to differentiate a computer's understanding from a human being's. That is why Searle's argument, which made sense at first, turns out not to work. Recall that we could eliminate the role of semantics from the picture by considering cases where the computer and the human have the same story, no visualization is possible, and the only extra brain state on the human side is the feeling of understanding. To identify that feeling, I gave examples from my experience in mathematics: I explained how abstract concepts and rules begin to make sense to me after I spend enough time on them, and how the feeling of understanding comes from confidence rather than from visualization or from any extra data known to me but unknown, or unknowable, to a machine. In short, in the Chinese room you will not feel that you understand the story, because you have not had enough time to grow familiar with the characters and learn their properties (which symbols occur together, how they connect to each other, and so on), in short, to internalize the rule book. But you do not have to know what the characters represent in real life to have some understanding, just as in the abstract story where unspecified things stood in states that were not specified either. Therefore, if we eliminate (and we can eliminate) the feelings and the knowledge of semantics, there is equal understanding for computers and for us, and Searle's argument loses its significance.
