Putnam makes this clear through his example of an alien race seeing a picture of a tree for the first time. They have some mental image of the tree, but this is not a representation of a tree itself: the aliens could not use their mental image contextually, in a sensible way, to convince a human who did know what a tree was that the meaning they have ascribed to it makes sense. For them it will be a representation of whatever strange object the mysterious picture depicts. Putnam notes that some would argue that the image the aliens have come across is a representation of a tree, since what caused it was a representation of a tree to begin with. However, says Putnam, we have to examine the causal link, for it could be absent: the picture could merely be the result of spilled paints which just happen to look like a tree (like the ant's line), and although the alien's mental image would again be the same as ours, it would not be an image that represented a tree. Putnam's main point here is that 'even a large and complex system of representations, both verbal and visual, still does not have an intrinsic built-in, magical connection with what it represents.' Putnam is attacking the idea of intrinsic semantics; he is denying that meanings are in the head, or that they are mere behaviour. This is a Wittgensteinian point. In his private language argument Wittgenstein held that the meanings of words are not in the head and that following a rule is crucial to a word's meaning: the meaning of any word is a matter of what we do with our language, not something hidden inside anyone's mind or brain. Thus reference cannot depend on mental images, and nor can it depend merely on a correct usage of words. To illustrate this Putnam imagines a computer that could have apparent knowledge of trees, and could be so convincing at conveying it that it passes 'Turing's test'.
The computer could answer, and perhaps even ask, questions about trees and appear to have actual knowledge of them. Yet Putnam says it is indifferent to the actual existence of trees: if all trees ceased to exist, the computer could, unless it were reprogrammed in light of this, continue the conversation about trees, when in fact it would not be referring to anything in the external world at all.
In the case of the brain in a vat, Putnam presents a modern-day rendition of the Cartesian evil demon sceptical challenge. He imagines that an evil scientist has captured a brain and kept it in a tank of nutrients to ensure that it functions, and has hooked its nerve endings to a computer which feeds the brain sense data of an external world. Putnam wants to show that such a deception would be self-refuting, even though, he says, it is not logically impossible, since everything, however improbable, is a logical possibility. Through the brains-in-a-vat scenario Putnam also explores the issues of the causes of meaning and reference. He first unfolds his argument that the hypothesis is self-refuting: if there were brains in a vat, they would be imagining that they are humans in the natural world. If you were a brain in a vat you could not truly say 'I am a brain in a vat', since what you would be imagining is not being a brain in a vat but being a human; your words 'brain' and 'vat' would refer only to the brains and vats of your vat-image world, not to the real brain and vat you occupy. If we are brains in a vat, then the sentence 'we are brains in a vat' says something false; thus the hypothesis that we are is necessarily false.
Putnam's main problem with the brain in the vat, beyond his seeing the hypothesis as self-refuting, is that the sense data produced by the machinery the brain is hooked up to do not refer to anything external. Just as spilled paints can resemble a picture of a tree without being a picture of a tree, so can sense data. The whole system of sense data has no more connection to trees than the ant's line does to Churchill. Crucially, this is an issue of 'knowing that' versus 'knowing how'. The computer-generated sense data are concept-less: they are mere images of trees or bikes or chapels or any number of things, and to possess a concept is to know how to use it. If an ant draws a line in the sand that resembles Churchill, or Brandom's parrot answers appropriately when it sees red, this is a mere occurrence; to 'know how' is an ability, which is far more fluid than this. Putnam's complaint with all of this is that what is missing is this possible usage of concepts. Perhaps one could apply McDowell's description of the 'space of causes' to the brain in a vat, whereas our references belong in his 'space of reasons': without concepts formed through reason we would have only meaningless images of a tree, like the aliens, the ant, or the brain in a vat. Putnam is strictly denying what Brandom called 'gonzo externalism': the idea that we can have meaningful conversations with animals so long as both parties make all the right sounds, so that if the parrot calls 'red' correctly when it sees red, it knows what red is, rather than performing a mechanically practised party trick.
Putnam concludes that the main problem arises because people take physical possibilities too seriously and make unfounded assumptions about the reference of mental representations to external things. Reference is not in the mind; similarity is not sufficient, nor is intention or behaviour. It is the thought that matters, the 'know how', and this could not be achieved through computer wires, because ultimately the cause is not external and the concepts are not real; it certainly could not be achieved by an ant, or by the ignorant tribe of Twin Earth.
Bibliography
- Putnam, H.: 'Brains in a Vat', in Knowledge, ed. Sven Bernecker and Fred Dretske.
- Steinitz, Yuval: 'Brains in a Vat: Different Perspectives' (Discussions), http://links.jstor.org/sici?sici=0031-8094%28199404%2944%3A175%3C213%3ABIAVDP%3E2.0.CO%3B2-7
A. M. Turing: a logician who proposed a test for machine intelligence. A computer could be judged 'intelligent' if, by mode of conversation alone, it could convince a human being that it is not a computer but a human.
Its truth implies its falsity.