The key to the argument that a machine possesses the knowledge to carry out its intended task is the assumption that the machine's components, when working in sync, somehow transform into the knowledge of the machine, as together they make the machine work. It might be hard to understand how simple machine components that cannot perform any task on their own, and are unable to think, can somehow make a machine know anything when arranged in a certain way. It is important at this stage to point out that knowledge does not necessarily imply intelligence; the machine does not need to be able to reason, or to be intelligent, in order to know something. I have been writing computer programmes since I was very young, and the earliest programmes I wrote were simple mathematical algorithms to add or subtract one number from another, like a simple calculator. The computer (an electronic machine) performed the calculations correctly once my instructions were correctly written. The computer itself was not intelligent, as it was I who instructed it what to do. Yet it did know how to manipulate the numbers and give a correct answer according to my specified axioms of mathematics, and hence it had the knowledge to calculate a true answer on the basis of those axioms. Still, it did not understand why it was doing as I instructed, or why the axioms were correct. This working principle can be extended to many of the more advanced machines we frequently encounter in contemporary society, for example robots able to recognise and respond to human speech, customer service terminals and so on.
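Such an early calculator programme can be sketched in a few lines of modern code (a hypothetical reconstruction in Python; the function and rule names are illustrative, not the originals):

```python
# A toy calculator: it applies fixed arithmetic rules ("axioms")
# without any understanding of why those rules are correct.
def calculate(a, b, op):
    # The machine's rulebook: the only operations it "knows".
    rules = {
        "+": lambda x, y: x + y,
        "-": lambda x, y: x - y,
    }
    return rules[op](a, b)

print(calculate(2, 3, "+"))  # prints 5
print(calculate(7, 4, "-"))  # prints 3
```

The machine reliably produces true answers from the rules it was given, yet nothing in it understands why addition works, which is precisely the distinction drawn above between knowing and being intelligent.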
The now famous Chinese Room Argument, originally formulated by John Searle[2], furthers this argument. Suppose a computer in a sealed room is built so well that it can respond to Chinese text input so realistically that it convinces a human being like me, who knows Chinese, that another human being is sitting in the room responding to my written conversation. Suppose now that my friend Viktor, who knows no Chinese, enters the room and starts responding to the text using dictionaries and rulebooks of the Chinese language, so that his replies appear perfectly natural to me. Searle argues that just as Viktor using the dictionary does not know any Chinese, neither does the computer, as they are both only following a set of rules; hence the computer machine cannot know.
However, one can argue that if Viktor and the computer room with all its contents, including the dictionaries and rule books, are seen as a single machine, then that machine surely does know Chinese and is able to respond in it. An analogy is that an ordinary car does not know how to operate until a human who knows how to drive starts driving it. The Chinese Room might not be able to generate new knowledge, but it can use its instructions to generate natural Chinese text responses, which is its purpose. A part of the human brain cannot on its own be considered to know Chinese; you have to consider the brain and the perception system working in sync in order to conclude that a human knows Chinese. I, as a fluent Chinese speaker, have good reasons to consider myself to know Chinese because I can recall the rules of the language from my memory and use my perception and senses to receive and respond to a conversation in Chinese. When speaking with other Chinese people, the correspondence theory of truth tells me that I do know Chinese, since I can speak to and understand them. I hence have Justified True Belief that I know Chinese. A computer works in a similar way: it stores instructions and rules in its memory chips, processes the instructions in its processor, and responds using its senses, which are its speaker and microphone systems. In the Chinese Room case the dictionaries were the memory, whereas Viktor can be seen as its senses. The machine cannot look for reasons or use the correspondence theory of truth to confirm that it knows, but it can still use its knowledge as an outside observer sees it. This is perhaps why, when humans refer to a machine as “simple” or “stupid”, we are usually referring not to the machine's inability to carry out its task, but to its extremely limited scope of knowledge: it lacks the skills to adapt and to generate new knowledge.
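The rule-following picture of the room can be caricatured in a short sketch (purely hypothetical; the phrases and canned replies below merely stand in for the room's dictionaries and rulebooks):

```python
# A caricature of the Chinese Room: a fixed rulebook maps incoming
# phrases to replies. The program responds without understanding
# what any of the symbols mean.
RULEBOOK = {
    "ni hao": "ni hao! ni hao ma?",   # greeting -> greeting in return
    "xie xie": "bu ke qi",            # "thank you" -> "you're welcome"
}

def respond(message):
    # Viktor's role: look the symbols up and copy out the listed reply.
    return RULEBOOK.get(message.lower(), "dui bu qi, wo bu dong")

print(respond("Ni hao"))   # prints: ni hao! ni hao ma?
```

Taken as a whole system of rulebook plus lookup procedure, the sketch produces appropriate Chinese responses, which is exactly the sense in which the room as a single machine can be said to know Chinese, even though no single part of it does.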
There can be disagreement about whether the information stored in human memory can be seen as fixed instructions and rules, as is the case for a machine, but as illustrated above, the way humans use knowledge to conduct a task is similar to a machine's.
Some critics argue that when machines perform a feat they are merely obeying natural laws, and that, for example, when a pulley utilises gravity to lift a load, it is rather nature that provides the knowledge and not the machine itself. However, on this radically materialistic approach, humans are also merely obeying natural laws. Indeed, humans can be classified as biological machines, and if humans can know anything then, by the same reasoning as above, machines should be able to know. Many consider that the only aspect making humans and other higher organic creatures radically distinct from the commonly defined machine is our ability to express emotions and intuition. But since it is humans that build almost all machines, we need to understand how emotion and intuition work in order to recreate them. How emotions and intuition work is still an area of intense research, and they are likely to be hard to understand and replicate, as they probably rest on a subconscious mix of memory, experience, environmental factors and the perception of the individual. A machine may be logically superior to a human in areas where there is a limited set of rules and instructions, like chess, as shown when IBM's Deep Blue beat the world chess champion in 1997. But processing and expressing emotions can be seen as an area where there are either no fixed rules, or too many rules to be contained in a closed system, requiring far more powerful machines to replicate, if it is possible at all. In fact, Gödel's incompleteness theorem[3] states that a complete set of axioms of mathematics is impossible to find, thereby suggesting that formal machines like computers, which are based on mathematics and rationalised knowledge, will never attain the reasoning and emotional abilities of humans. It also suggests that humans and other biological machines are in some sense superior to artificial ones.
Much debate has centred on whether the human mind is itself a formal machine, since if it were, then machines would in principle be able to express emotions just as humans do.
A machine operates with the aid of its components, and these in turn help to store and use the knowledge for the machine to perform a task; this applies to all types of machines. With modern computers, the prospect of creating machines that can know and express emotions has been much debated, with arguments mainly circling around whether the human mind works in the same way as a computer does. Many aspects of this essay touch upon contemporary psychological research into how similarly to electronic machines the human brain stores information, and hence the assumptions and definitions provided lead to many controversies. Bearing these controversies in mind when basing my assumptions on the definitions provided, there is strong evidence in favour of the case that a machine is able to know how to perform a task. Word Count: 1574
Bibliography
Alchin, Nicholas. "Theory of Knowledge". John Murray: London, 2003
Gettier, Edmund. "Is Justified True Belief Knowledge?". Analysis, Vol. 23, pp. 121-123 (1963)
Gödel, Kurt. "Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I". Monatshefte für Mathematik und Physik 38, 1931
Searle, John R. "Minds, Brains, and Programs". Behavioral and Brain Sciences 1980:3, pp. 417-457
--
[1] Gettier, Edmund. "Is Justified True Belief Knowledge?". Analysis, Vol. 23, pp. 121-123 (1963)
[2] Searle, John R. "Minds, Brains, and Programs". Behavioral and Brain Sciences 1980:3, pp. 417-457
[3] Gödel, Kurt. "Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I". Monatshefte für Mathematik und Physik 38, 1931