Can a computer think? Really think?

At a time when computer technology is advancing rapidly and software developers confidently market their products as having artificial intelligence, the inevitable question has taken on a certain urgency: can a computer think? Really think? In one form or another this is a very old question, dating back to philosophers such as Plato, Aristotle, and Descartes, and after nearly 3,000 years the most honest answer remains uncertain. After all, what does it mean to think? That, however, is not a very satisfying answer. With his 1980 paper 'Minds, Brains, and Programs', John Searle had a huge impact on the artificial intelligence debate worldwide. This essay will focus on Searle's claim that computers are incapable of being conscious, and then assess whether Searle is right by examining his three main efforts: a critique of computationalism and strong Artificial Intelligence (AI); the development of a theory of intentionality; and the formulation of a naturalized theory of consciousness.

In the first place, the best-known example of Searle's critique of computationalism and strong AI is his Chinese Room Argument. The argument (1980) runs as follows. Searle supposes that, many years from now, 'we have constructed a computer, which behaves as if it understands Chinese.' In other words, the computer takes Chinese symbols as input, consults a large look-up table (as all computers can be described as doing), and then produces other Chinese symbols as output. 'Suppose that this computer performs this task so convincingly that it easily passes the Turing Test.' That is, it convinces a human Chinese speaker that it too is a Chinese speaker. All the questions the human asks are answered appropriately, so that the Chinese speaker is convinced he or she is talking to another Chinese speaker. The conclusion proponents of strong AI would like to draw is that the computer understands Chinese, just as the person does. Yet Searle asks us to suppose that he himself is sitting inside the computer: he is in a small room in which he receives Chinese symbols, looks them up in a look-up table, and returns the Chinese symbols that the table indicates. Searle notes, of course, that he does not understand a word of Chinese. Furthermore, his lack of understanding shows, he argues, that 'computers do not understand Chinese either, because they are in the same situation as he is. They are mindless manipulators of symbols, just as he is - and they do not understand what they are saying, just as he doesn't.'
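The symbol manipulation the Chinese Room describes can be made concrete in a few lines of code. The sketch below is purely illustrative: the "rule book" entries are invented placeholder strings, not real Chinese, and the point is only that the program maps input symbols to output symbols without any representation of meaning appearing anywhere in the process.

```python
# A minimal sketch of the Chinese Room's rule book: a pure look-up
# table from input symbol strings to output symbol strings. The
# entries are invented placeholders, not actual Chinese sentences.
rule_book = {
    "input-symbols-A": "output-symbols-X",
    "input-symbols-B": "output-symbols-Y",
}

def chinese_room(input_symbols: str) -> str:
    """Return whatever output the rule book dictates.

    Nothing in this function 'understands' the symbols; it only
    matches shapes against the table, just as Searle in the room
    matches Chinese characters he cannot read.
    """
    return rule_book.get(input_symbols, "output-symbols-default")

print(chinese_room("input-symbols-A"))  # behaves 'as if' it answers
```

Whether such a table-driven responder could ever pass a real Turing Test is, of course, exactly what the argument puts in question; the sketch only shows that behaviourally adequate output is compatible with a process that contains no understanding.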


Searle's Chinese Room Argument seems logical at first. Yet, viewed from the perspective of the system as a whole (the so-called Systems Reply), it is clear that although Searle himself does not understand Chinese in the thought experiment, it is perfectly correct to say that Searle plus the look-up table understands Chinese. In other words, the entire computer would understand Chinese, even though the central processor or any other single part might not. It is the entire system that matters when attributing understanding. Moreover, it can be noted that the reason we do not want to attribute understanding to the room, or to a computer as described by ...
