Searle (1980), a firm believer that computers are incapable of substituting for the human brain, maintains that intentionality (the conscious understanding of symbols and what they mean) is fundamental to mentality and consciousness. He devised a thought experiment to prove his point, known as the ‘Chinese room’. In this scenario a participant who is ignorant of Chinese writing (and would be unable to distinguish it from meaningless patterns) is locked in a room with a large batch of Chinese writing. The participant is then given a second batch of Chinese writing, along with instructions in English on how to correlate the second batch with the first. The participant can now correlate the two merely as formal symbols (‘formal’ meaning identifiable by shape alone), with no greater understanding of the writing. A third batch is then presented, together with further English instructions on how to correlate its shapes with those in the first two batches and on how to give back certain symbols in response to those in the third.
Unknown to the participant, the experimenters call the first batch a script, the second batch a story and the third batch questions. The symbols given in response to the third batch are answers, and the English instructions are the program. In time the participant will become so good at following the instructions, and the experimenters so good at writing the ‘program’, that to a Chinese-speaking third party the participant's answers will be indistinguishable from those of native Chinese speakers. Moreover, it is claimed that if the participant were given stories in English at the same time, with questions on them in English, he would appear equally proficient at answering the English and the Chinese questions. In fact the English answers would stem from a genuine semantic understanding of the information provided, whereas the Chinese answers would be merely the result of manipulating uninterpreted symbols.
The theory behind this scenario is that a program could become complex and comprehensive enough to perfectly mirror the output of a process within the human brain, yet arrive at that output by a fundamentally different, and far shallower, process than the one the brain uses. If we substitute a computer for the participant, it is clear that the computer could not explain the meaning of any of the Chinese symbols used; it could merely describe the process by which they are correlated. It cannot even comprehend the meaning of the output it gives (or so much as estimate it). For Searle, therefore, there is more to understanding and intelligence than the mere manipulation of symbols.
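To make the distinction concrete, the participant's task can be sketched as a pure lookup over uninterpreted symbols. The following Python fragment is an illustration only; the rule table and the particular symbols are invented, and the point is simply that input shapes are mapped to output shapes with no access to what any of them mean.

    # A minimal sketch of the Chinese room as formal symbol manipulation.
    # The 'program' below is a hypothetical rule table: it pairs input
    # symbols with output symbols purely by shape, which is all the
    # participant (or a computer) ever operates on.
    RULES = {
        "你好吗": "我很好",   # "How are you?" -> "I am well"
        "你是谁": "我是人",   # "Who are you?" -> "I am a person"
    }

    def chinese_room(question: str) -> str:
        # Formal correlation only: match the question by shape and hand
        # back whatever the rules dictate; no meaning is consulted.
        return RULES.get(question, "不知道")   # fallback symbols: "don't know"

    print(chinese_room("你好吗"))   # fluent output, zero understanding

However fluent the exchange, nothing in the program ever touches meaning; that is precisely the gap Searle points to.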
Gregory (1987) claimed that this scenario is inherently flawed for what it is trying to prove. He argued that if a human baby grew up in the ‘Chinese room’, it would be just as unable to learn the meaning of the symbols as a computer would be. It takes years of active assimilation of knowledge, and the ‘inputting’ of many varied stimuli and programs, for an infant to come to understand the meanings of the symbols it is presented with; understanding is not due to the existence of intelligence alone. Boden (1993) attacks Searle from another angle, claiming that he has not taken into account that the human brain, at its most fundamental level, also works using a series of ‘programs’ that do not themselves involve any great knowledge or understanding, but rather lead to it. On this view the participant in the Chinese room only ‘understands’ the English because the symbols it consists of trigger a series of processes (or programs) within the brain that lead to its being recognised and allow a response to be made. The comprehensive nature of this response (the ‘understanding’ shown) is only due to the fact that, over time, with repetition and learning (more information being added), the program itself has become extremely comprehensive. It remains the case with localised serial systems, though, that ‘you can only get out of a computer what you put in’: such a computer can only react to programs it has been given; it cannot ‘teach’ itself.
Turing (1950) devised a test to examine whether a computer could function in the same way as the human mind (and therefore be viable in psychology). The idea was that a questioner interrogates two unseen subjects by means of typed questions, trying to guess which is male and which is female. Under the same stipulations, one of the two interviewees is then replaced by a computer designed to give appropriately human responses to the questions, in its assigned gender. The suggestion is that if a computer program manages to fool a human, over a period of intense questioning, into thinking they are communicating with another human, it has exhibited intelligence and understanding. As yet no computer has managed to pass the ‘Turing test’. This implies that, for cognitive and behavioural psychology at least, computers have not yet reached the level where they can effectively be substituted for the human brain in experiments.
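The protocol itself can be set out in outline. Below is a minimal, runnable sketch of the imitation-game loop; the respondent class, the canned answers and the judge are invented stand-ins rather than anything from Turing's paper, and a real test would of course involve a human interrogator and sustained questioning.

    import random

    class CannedRespondent:
        # A stand-in for a hidden interviewee who answers typed questions.
        def __init__(self, reply):
            self.reply = reply
        def answer(self, question):
            return self.reply

    def imitation_game(questions, human, machine, judge):
        # Hide the two respondents behind anonymous labels A and B.
        labels = {"A": human, "B": machine}
        if random.random() < 0.5:
            labels = {"A": machine, "B": human}
        transcript = [(q, {lab: r.answer(q) for lab, r in labels.items()})
                      for q in questions]
        guess = judge(transcript)            # label the judge thinks is the machine
        return labels[guess] is not machine  # True if the machine escaped detection

    human = CannedRespondent("I had toast for breakfast.")
    machine = CannedRespondent("ERROR: no breakfast routine found.")
    naive_judge = lambda transcript: "A"     # placeholder judge, guessing blindly
    print(imitation_game(["What did you eat today?"], human, machine, naive_judge))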
This is not to say that this will never be the case. Indeed, the advent of parallel distributed processing systems (also known as connectionism) has brought the technical workings of a computer much closer to the biological functioning of the brain. Such a system consists of many networks of parallel computational units, which work in a way very similar to the brain's neural network, with its multiple connections between neurons. Gordon (1989) claims that this is set to revive the status of the computer as a model for the mind. It is said to overcome the aforementioned inability to be self-taught, as it ‘learns to behave as if it was following rules, despite the fact that no rules are explicitly encoded into them’. Even a parallel distributed processing system is not perfect, however, as it relies on a stable, fixed relationship between the units within its network: it is modified only in response to specific inputs, and it responds in a deterministic way (Penrose, 1990). The brain, in contrast, has an inherent indeterminacy in the way that its neurons and their synaptic connections operate. The brain system itself is indeterminate, constantly changing through interaction with the natural and social world outside.
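The claim that such a system ‘learns to behave as if it was following rules’ without those rules being encoded can be illustrated with the simplest possible connectionist unit. The sketch below (plain Python, with invented parameter values) trains a single unit on the logical OR rule purely by strengthening and weakening its connection weights in response to examples; at no point is the rule itself written into the program.

    # A single connectionist unit 'learning' logical OR from examples.
    examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

    w = [0.0, 0.0]   # connection weights from the two input units
    b = 0.0          # bias, playing the role of a firing threshold
    rate = 0.1       # learning rate (an arbitrary illustrative value)

    def fire(x):
        # The unit fires (outputs 1) when its weighted input exceeds threshold.
        return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

    for _ in range(20):                    # repeated exposure to the examples
        for x, target in examples:
            error = target - fire(x)       # mismatch between output and target
            w[0] += rate * error * x[0]    # adjust connection strengths
            w[1] += rate * error * x[1]
            b += rate * error

    print([fire(x) for x, _ in examples])  # [0, 1, 1, 1]: behaves as if rule-following

The rule emerges in the pattern of weights rather than being stored anywhere as an explicit instruction, which is the sense in which such systems are said to teach themselves.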
That said, these advancements and others like them give great hope for the future. It may not be long before a computer can be created that is internally dynamic in composition, reacting to all stimuli presented and adapting its systems accordingly. So even though robotics currently has only very limited relevance in the field of psychology, it has the potential to take a very prominent role.
References (primary):
1) Gellatly, A. (1986) The Skilful Mind. Open University Press.
2) Gross, R. (1999) Psychology: The Study of Mind and Behaviour. Hodder & Stoughton.
3) Self, J. (1988) Artificial Intelligence and Human Learning. Chapman and Hall.
4) O’Shea, T. and Eisenstadt, M. (1984) Artificial Intelligence: Tools, Techniques and Applications. Harper & Row.
References (secondary):
1) Siegler, R. S. (1986) Strategy Choices in Subtraction. Oxford University Press.
2) Searle, J. R. (1980) ‘Minds, Brains and Programs’, Behavioral and Brain Sciences, 3.
3) Turing, A. M. (1950) ‘Computing Machinery and Intelligence’, Mind, 59.
4) Garnham, A. (1988) Artificial Intelligence: An Introduction. Routledge.
5) Boden, M. (1987) Artificial Intelligence and Natural Man. Harvard University Press.
6) Flanagan, O. J. (1984) The Science of the Mind. MIT Press.
7) Gregory, R. L. (1987) ‘In Defence of Artificial Intelligence: A Reply to John Searle’, in Mindwaves. Blackwell.
8) Boden, M. (1993) The Simulation of Human Intelligence. Blackwell.
9) Gordon, I. E. (1989) Theories of Visual Perception. John Wiley & Sons.
10) Penrose, R. (1990) The Emperor’s New Mind. Oxford University Press.
Secondary citations:
Garnham (1988), cited in Gross (1999)
Siegler (1986), cited in Gellatly (1986)
Searle (1980), cited in Gross (1999)
Boden (1987), cited in Gross (1999)
Flanagan (1984), cited in Gross (1999)
Searle (1980), cited in Gellatly (1986)
Gregory (1987), cited in Gross (1999)
Boden (1993), cited in Gross (1999)
Turing (1950), cited in Gellatly (1986)
Gordon (1989), cited in Gross (1999)
Penrose (1990), cited in Gross (1999)