ANVIL 15th October: Could a Computer Become Conscious?

Julie Arliss’ talk went very well.

Julie and I have met and discussed our next meeting, and she has sent us some stimulus material and questions for Anvil members to consider.

For the first time, a computer has passed the Turing Test and made the judges think that it was a 13-year-old boy. Please have a look at this article.

REITH LECTURES 1984: Minds, Brains and Science. John Searle
Lecture 2: Can Computers Think? Page 32-33

J.S: To illustrate this point I have devised a certain thought experiment. Imagine that a bunch of computer programmers have written a program that will enable a computer to simulate the understanding of Chinese. So for example if the computer is given a question in Chinese it will match the question against its memory or data base and produce appropriate answers to the questions in Chinese. Suppose for the sake of argument that the computer’s answers are as good as those of a native Chinese speaker. Now then, does the computer, on the basis of this, understand Chinese, does it literally understand Chinese, in the way that Chinese speakers understand Chinese? Well, imagine that you are locked in a room, and in this room are several baskets full of Chinese symbols. Imagine that you (like me) do not understand a word of Chinese, but that you are given a rule book in English for manipulating these Chinese symbols. The rules specify the manipulations of the symbols purely formally, in terms of their syntax, not their semantics. So the rule might say: ‘Take a squiggle-squiggle sign out of basket number one and put it next to a sqoggle-sqoggle sign from basket number two.’ Now suppose that some other Chinese symbols are passed into the room, and that you are given further rules for passing back Chinese symbols out of the room. Suppose that unknown to you the symbols passed into the room are called ‘questions’ by the people outside the room, and the symbols you pass back out of the room are called ‘answers to the questions’. Suppose, furthermore, that the programmers are so good at designing the programs and that you are so good at manipulating the symbols, that very soon your answers are indistinguishable from those of a native Chinese speaker. There you are locked in your room shuffling your Chinese symbols and passing out Chinese symbols in response to incoming Chinese symbols.
On the basis of the situation as I have described it, there is no way you could learn any Chinese simply by manipulating these formal symbols.

Now the point of the story is simply this: by virtue of implementing a formal computer program, from the point of view of an outside observer you behave exactly as if you understood Chinese, but all the same you don’t understand a word of Chinese. But if going through the appropriate computer program for understanding Chinese is not enough to give you an understanding of Chinese, then it is not enough to give any other digital computer an understanding of Chinese. And again the reason for this can be stated quite simply. If you don’t understand Chinese, then no other computer could understand Chinese, because no digital computer, just by virtue of running a program, has anything that you don’t have. All that the computer has, as you have, is a formal program for manipulating uninterpreted Chinese symbols. To repeat, a computer has a syntax but no semantics. The whole point of the parable of the Chinese room is to remind us of a fact that we knew all along. Understanding a language, or indeed, having mental states at all, involves more than just having a bunch of formal symbols. It involves having an interpretation, or a meaning attached to those symbols. And a digital computer, as defined, cannot have more than just formal symbols because the operation of the computer, as I said earlier, is defined in terms of its ability to implement programs. And these programs are purely formally specifiable – that is, they have no semantic [conscious/mental] content.
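Searle’s “rule book” can be sketched in a few lines of code. The tiny lookup table below is purely illustrative (the symbol names are borrowed from his squiggle/sqoggle example, not real Chinese): the program pairs incoming symbols with outgoing ones by their form alone, and nothing anywhere in it represents what any symbol means — syntax without semantics.

```python
# A minimal sketch of the Chinese Room: the "rule book" is a lookup
# table pairing incoming symbol strings with outgoing ones. The rules
# operate purely on the shape of the symbols (syntax); no meaning
# (semantics) is represented anywhere in the program.

RULE_BOOK = {
    "squiggle-squiggle": "sqoggle-sqoggle",  # rule: on seeing this, pass back that
    "sqoggle-sqoggle": "squiggle-squiggle",
}

def room(question: str) -> str:
    """Return the 'answer' the rules dictate, matched purely by form."""
    return RULE_BOOK.get(question, "unknown-symbol")

# To the people outside, the room "answers questions"; inside, there is
# only symbol shuffling.
print(room("squiggle-squiggle"))  # -> sqoggle-sqoggle
```

A real system would have a vastly larger rule book, but Searle’s argument is that scaling it up changes nothing: the operations remain formal symbol manipulations.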

Lecture 3: Cognitive Science. Page 56

J.S: Suppose no one knew how clocks worked. Suppose it was frightfully difficult to figure out how they worked, because, though there were plenty around, no one knew how to build one, and efforts to figure out how they worked tended to destroy the clock. Now suppose a group of researchers said, ‘We will understand how clocks work if we design a machine that is functionally the equivalent of a clock, that keeps time just as well as a clock.’ So they designed an hour glass, and claimed: ‘Now we understand how clocks work,’ or perhaps: ‘If only we could get the hour glass to be just as accurate as a clock we would at last understand how clocks work.’ Substitute ‘brain’ for ‘clock’ in this parable, and substitute ‘digital computer program’ for ‘hour glass’ and the notion of intelligence for the notion of keeping time and you have the contemporary situation in much (not all!) of artificial intelligence and cognitive science.
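The parable turns on the difference between functional equivalence and mechanism, which a short sketch can make concrete. The two classes below are invented for illustration: from the outside they behave identically (both report elapsed time), yet their internal workings differ entirely, so matching the behaviour of one tells you nothing about the mechanism of the other.

```python
# An illustrative sketch of the clock / hour-glass parable: two devices
# that are functionally equivalent from the outside, with different
# internal mechanisms. Class and attribute names are invented here.

class Clock:
    def __init__(self):
        self._ticks = 0            # mechanism: counting escapement ticks
    def advance(self):
        self._ticks += 1
    def elapsed(self) -> int:
        return self._ticks

class HourGlass:
    def __init__(self):
        self._grains = 0           # mechanism: counting fallen sand grains
    def advance(self):
        self._grains += 1
    def elapsed(self) -> int:
        return self._grains

# An outside observer sees identical behaviour from both devices...
clock, glass = Clock(), HourGlass()
for _ in range(3):
    clock.advance()
    glass.advance()
print(clock.elapsed() == glass.elapsed())  # same readings, different mechanisms
```

Searle’s complaint, on this analogy, is that building the hour glass (a program that matches the brain’s input–output behaviour) would not by itself explain how the clock (the brain) works.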

Julie adds these questions for us to ponder:

 How would you define the ability to ‘think’?

According to some, the brain is just a digital computer and the mind is just a computer program: the mind is to the brain as the program is to the computer hardware. Do you think that the digital computer gives us the right picture of the human mind?

 Can a digital computer think or does it behave AS IF it can think?

 Does a computer have the equivalent of human thought or does it simulate it?

 Can a computer program by itself be sufficient to give a system a mind? Do you think that it is only a matter of time until computer scientists and workers in Artificial Intelligence design the appropriate hardware and programs to duplicate and be the equivalent of human brains and minds?
