In returning to Turing's question, many criticisms come to light. Some people argue via God, stating that God gave man and woman the capacity to think but gave it to nothing else, so it is safe to conclude that no, machines cannot think. Another criticism questions the validity of the imitation game, saying that a machine convincingly answering these simplistic questions does not answer the question 'Can a machine think?'. Jefferson said that only when a computer could write something and understand what it had written, and be able to feel anger, could it be claimed that computers can think. In an extreme version of this argument, it could be claimed that we can never be sure a machine thinks unless we are that machine, feeling ourselves thinking, in the same way that to know a man is thinking is to be that man. Whilst this argument is convincing, most people would rather side with the imitation game than take this extreme view.
Another criticism concerns behaviour: it states that it is not possible to decide a set of rules for every eventuality; there are always exceptions to the rules. The example of traffic lights is used: if a green light shows, you go; if red shows, you stop. Through some fault, however, both could appear together; in some cases a person may stop, but in others the person may feel it better to go. This, the argument runs, proves that we are not machines. In other words, if man were governed by a set of rules then he would be no better than a machine; but there are not rules for every aspect of life, and therefore he cannot be a machine.
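The traffic-light argument can be made concrete with a small sketch. The rule table and function names below are purely illustrative, not taken from the essay's sources; the point is only that a fixed rule table has no answer for an input its author never anticipated.

```python
# Hypothetical rule book: each anticipated combination of lit signals
# maps to an action. Only the cases the author foresaw are listed.
RULES = {
    frozenset(["green"]): "go",
    frozenset(["red"]): "stop",
}

def decide(lights):
    """Return the rule-book action for the currently lit signals.

    When the input falls outside the table (for example, a fault
    lighting green and red together), the machine has no rule to
    follow, whereas a person improvises a judgement.
    """
    return RULES.get(frozenset(lights), "no rule")

print(decide(["green"]))         # go
print(decide(["green", "red"]))  # no rule
```

The sketch shows both halves of the criticism: the machine's behaviour is exhausted by its table, while the human driver facing the faulty lights decides something anyway.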
Turing then talks of creating a program that simulates the adult human mind, though he suggests it may be better to try to simulate the child mind instead: given the right education, you would eventually end up with the adult mind. The child's brain is largely blank, the idea being that it would be easily programmable, and teaching the simulation would be similar to teaching a normal child. It is, however, uncertain what to teach this simulation, and how to do so.
As well as these potential contributions to the philosophy of mind, computational models of mind have a substantial relationship with cognitive science and artificial intelligence. The most prominent of these connections is with Chomsky, who produced a theory of linguistics that changed the way people viewed language learning. Before Chomsky there was a behaviourist view of language learning, but Chomsky believed it could not adequately explain how a child picks up grammatical rules and is able to use them in many different contexts. Chomsky speaks of the child's efforts at mastering grammar in terms of the formation and then confirmation of hypotheses. Fodor argued that to do this you would need an inner language of thought, and computationalists believed it was plausible to take the mechanisms used for forming these hypotheses and recast them in computational terms.
The computational model of mind is important in philosophical terms because it shows that reasoning could be a causal process and yet be sensitive to semantic connections between judgements. The problem with this is the belief, held by many, that reasons are not causes. For one thing, how can a purely causal process track the semantic values of propositions? To posit a device that understands the meanings of mental propositions is the same as positing an interpreter inside the brain; but then the problem of reason and causation arises again for this interpreter, and so you end up with an endless regress of interpreters. Another point is how a completely causal process can be called a reasoning process at all, since calling it reasoning implies a set of norms, not causes.
Fodor claimed that the computational model of mind was really "the only game in town": the only accounts we have of cognitive processes are thoroughly computational. Even so, there are many criticisms of the idea that the mind is a machine.
One of the major criticisms of the computational model of mind comes from John Searle, who presents the idea of the Chinese Room. Suppose you are in a locked room and receive a series of Chinese symbols that, being English, you do not understand. You then receive a second batch of Chinese writing, along with a set of rules, written in English, that allow you to form a response to the writing you received. In these terms you are the processor and the rules are the program: you act as a computer. There is no understanding involved; you are acting only as the rules tell you to. You have everything that is put into artificial intelligence, yet still there is no understanding. The point to remember is that the rules you give a computer do not matter, as they do not produce understanding, as the Chinese Room shows. To see what this means, consider what it is to understand: you understand English, you loosely understand German, and Chinese not at all. In terms of machines, you can say that a calculator understands subtraction, but it means little; the way an automatic door 'understands' when to open is not the same as the way you understand English.
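Searle's "you are the processor, the rules are the program" analogy can be sketched as a literal lookup table. The symbol strings below are stand-ins invented for illustration, not real Chinese; the point is that the room produces its replies by pure string matching, with no meaning consulted anywhere.

```python
# A minimal sketch of the Chinese Room as pure symbol manipulation.
# The rule book pairs incoming symbol strings with prescribed replies.
RULE_BOOK = {
    "incoming-symbols-A": "reply-symbols-X",
    "incoming-symbols-B": "reply-symbols-Y",
}

def room(incoming):
    """Match the incoming symbols against the rule book and hand back
    whatever the matching rule prescribes. Nothing here depends on
    what any symbol means, only on its shape."""
    return RULE_BOOK.get(incoming, "reply-symbols-default")

print(room("incoming-symbols-A"))  # a fluent-looking reply, no understanding
```

From outside the room the replies may look competent, which is exactly Searle's point: syntactic rule-following alone can produce the outward behaviour without any understanding inside.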
Newell and Simon claim, on the contrary, that computers have the same cognition and understanding as we do, which is an interesting claim. It does not, however, seem to hold together very well. Berkeley claims that it is not realistic to have someone with no understanding of Chinese locked in a room, not only deciphering the writing but answering in Chinese as well; he feels it is safe to assume that the understanding is present not in the person but in the whole system. Yet if that person memorised the whole thing, rules and all, then the person now is the system, and if he does not understand Chinese then neither does the system. This is key, as it says that there is no understanding, and therefore that humans are not machines: how can the system have understanding if the processor does not?
It seems that, despite the computational model of mind being very influential in the philosophy of mind, it is still underdeveloped. The studies are broad, but many are based upon theory and speculation. These theories were developed around machines and computers, yet both at the time they were developed and, in many ways, in today's society, the technology seems to lag behind them. For example, when talking of producing a simulated mind, it is hard to see how this could be done to any significant standard at this stage. Even today scientists are still struggling to create a mechanical human, the most advanced example being a robotic waiter, the thought processes of the human mind being seemingly too complex to replicate. At the time of many of the studies, computers were at a very early stage, some little more advanced than a calculator. How do you jump from a machine for numbers to the human mind?
In conclusion, the computational model of mind, whilst highly studied, has a few gaps that are not clearly explained. The speculation about possible future machines and software holds many of the ideas back: as they are mainly theory-based, it is easy to criticise them on theoretical grounds. If, on the other hand, the software were available now, these criticisms would carry little weight. For example, in terms of the imitation game, if someone did manage to replicate the human mind accurately, so that the machine was able to feel anger and to answer questions with genuine understanding, then the experiment itself might have more bearing. As it was, Turing was not entirely sure whether there would ever be such a case. On the whole, it seems that psychologists and philosophers alike are fascinated by the idea that the mind is a machine, and there is certainly some compelling research behind it. Weighing up the research, I feel that with the technological developments we have undergone, today's society is getting closer to a deeper understanding of the mind as a machine. As it stands at the moment, when asked whether the computational model holds up to scrutiny, it is necessary to say no, at least not in all areas; further developments are needed.
Bibliography
Heil, John, Philosophy of Mind: A Guide and Anthology, 2004.