Can a computer think? Really think?

At a time when computer technology is advancing at a rapid pace, and when software developers are convincingly hawking their products as having artificial intelligence, the inevitable question has begun to take on a certain urgency: can a computer think? Really think? In one form or another this is a very old question, dating back to such philosophers as Plato, Aristotle, and Descartes, and after nearly 3,000 years the most honest answer is still 'uncertain'. After all, what does it mean to think? That, however, is not a very satisfying answer. With his paper 'Minds, brains, and programs', published in 1980, John Searle has had a huge impact on the artificial intelligence debate worldwide. This essay will focus on Searle's claim that computers are incapable of being conscious, and will then assess whether Searle is right by examining his three main efforts: a critique of computationalism and strong Artificial Intelligence (AI); the development of a theory of intentionality; and the formulation of a naturalized theory of consciousness. In the first place, the best-known example of Searle's critique of computationalism and strong AI is his Chinese Room Argument. The argument (1980) goes as follows: Searle supposes that, many years from now, 'we have constructed a computer, which behaves as if it understands Chinese'. In other words, the computer takes Chinese symbols as input, consults a large look-up table (as all computers can be described as doing), and then produces other Chinese symbols as output.
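The look-up-table machine described above can be sketched in a few lines of code. This is only an illustrative toy: the rule book and its entries are invented for the example, standing in for the vast table the argument imagines. The point it makes is Searle's own: the procedure pairs input symbols with output symbols without interpreting either.

```python
# A toy sketch of the Chinese Room's look-up table. The entries below are
# hypothetical examples, not part of Searle's paper.
RULE_BOOK = {
    "你好吗": "我很好",      # "How are you?" -> "I am fine"
    "你会思考吗": "会",      # "Can you think?" -> "Yes"
}

def chinese_room(input_symbols: str) -> str:
    """Return the output symbols the rule book pairs with the input.

    Like Searle's operator, this function performs purely formal symbol
    manipulation: it never interprets what the symbols mean.
    """
    return RULE_BOOK.get(input_symbols, "请再说一遍")  # "Please say that again"

print(chinese_room("你好吗"))
```

To an outside observer the exchange looks like understanding; inside, there is only matching and copying, which is precisely the intuition the argument trades on.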


and directions of causation (which can be mind-to-world or world-to-mind). An important feature of Searle's theory of intentionality is something he calls 'the Background': a set of skills, capacities, and presuppositions that, while being non-representational, makes all representation possible. It is clear that when Searle uses the word 'intentionality' he is not just talking about an obscure technical matter; in this context intentionality is virtually synonymous with mind, soul, spirit, or awareness. However, the comparison is unfair. The programs that Searle attacked demonstrated only a very crude kind of understanding at best, and no one in AI seriously claims anything more for them. Even if they were correct in principle, genuine humanlike understanding would require much more powerful machines and much more sophisticated programs. As Dennett (1991) has pointed out, 'For Searle, intentionality is rather like a wonderful substance secreted by the brain the way the pancreas secretes insulin.' And make no mistake: Searle's concept of intentionality does require a biological brain. He explicitly denies that a robot could have intentionality, even if it were equipped with eyes, ears, arms, legs, and all the other accoutrements it needs to move around and perceive the world like a human being. Inside, he says, the robot would still just be manipulating formal symbols. Yet so far as natural selection is concerned, Searle's robot is just as fit for survival as those of us with Searle-style intentional brains.


We are composed of thousands of different kinds of cells, including thousands of different species of symbiotic visitors, some of which might be as important to our consciousness as others are to our ability to digest our food. If all that complexity were needed for consciousness to exist, then the task of making a single conscious robot would dwarf the entire scientific and engineering resources of the planet for millennia. Realistically, it is impossible to make a conscious robot under the material and technical conditions now available; but that does not mean we cannot do it in the future. Judging from the above, Searle's theory makes it plain that, on his view, computers will never think like human beings. But his views about artificial intelligence are too restricted. By measuring intelligence entirely in human terms, he cannot move beyond arguments that rest on analogies between human beings and artificial intelligence, and such a restriction will not benefit the advance of human society. On the other hand, no one has yet proved beyond question that artificial intelligence will never be capable of being conscious. Therefore, following Dennett, we can still say that making a conscious artificial intelligence is possible in the future.

References

Dennett, D. C. (1991) Consciousness Explained, Penguin Books, New York.
Searle, J. (1980) 'Minds, brains, and programs', Behavioral and Brain Sciences, 3: 417-24.
Searle, J. (1983) Intentionality: An Essay in the Philosophy of Mind, Cambridge University Press, New York.
Searle, J. (1997) The Mystery of Consciousness, New York Review Press, New York.


This student written piece of work is one of many that can be found in our AS and A Level Philosophy section.

