
Philosophy 390

Prompt:  According to John Searle, strong analytical devices that have Artificial Intelligence can, at most, have a strong sense of syntax and as such can interpret information only by syntactic means.  If given the opportunity, would a digital computing device (using syntax as its means of interpretation) be able to choose between different social/political/economic ways of life?  Which way of life would the digital device choose?  Give an analysis of how this decision would come about.  Give John Searle’s reply to these remarks.

        John Searle worries that the contemporary interpretation of a computing device’s “thought process” might be misconstrued as actual thought.  His main focus is the notion of a computing device that has strong Artificial Intelligence (AI) and is capable of arriving at what seem to be judgments in a fashion that accords with our own thought processes.  The first three chapters of his Minds, Brains, and Science are devoted to a series of thought experiments that entertain as well as evaluate a situation in which a computer would be given the hypothetical opportunity to “think.”  The main thrust of these thought experiments is to show that no matter how well a computer manipulates a series of symbols, these manipulations would not be enough to extract any sort of meaning from the symbols/words/sentences in question.  The shape-based interpretation of these symbols, as against the meaningful interpretation of them, is the distinction between syntax and semantics.  In short, syntactic manipulation of formal symbols does not by itself constitute semantics.

        However, an interesting question arises as to whether such a computational device would be able to select one set of social/economic/political ideals over another with nothing more than a syntactical interpretation.  Seemingly, choosing one set of values over another would require no interpretation beyond evaluating the pure benefits of each.  This conception of benefit calculation brings about a set of circumstances that Searle did not address; it is my contention that, through a careful analysis of the ways and means by which a device exhibiting strong AI would examine such situations, we will be able to see the merits of one system versus another through a purely syntactical and analytical approach.

        John Searle gives a strong analysis of whether digital computing devices are capable of thought in the way humans are.  The general notion is that computers have the ability to interpret symbols, that is, to process information.  Searle argues that this processing of symbols is nothing more than a syntactical interpretation of information, because no meaning can be extracted from these symbols beyond the ways they fit together.  This problem is illustrated by the “Chinese Room” argument that John Searle brings forth.  In this argument, the setup of the debate over whether a computer’s performance is grounded in semantics or in pure syntax is as follows:

Searle asks you to imagine yourself a monolingual English speaker "locked in a room, and given a large batch of Chinese writing" plus "a second batch of Chinese script" and "a set of rules" in English "for correlating the second batch with the first batch." The rules "correlate one set of formal symbols with another set of formal symbols"; "formal" (or "syntactic") meaning you "can identify the symbols entirely by their shapes." A third batch of Chinese symbols and more instructions in English enable you "to correlate elements of this third batch with elements of the first two batches" and instruct you, thereby, "to give back certain sorts of Chinese symbols with certain sorts of shapes in response." Those giving you the symbols call the first batch "a script" [a data structure with natural language processing applications], the second batch "a story", and the third batch "questions"; the symbols you give back "they call . . . 'answers to the questions'"; "the set of rules in English . . . they call 'the program'": you yourself know none of this. Nevertheless, you "get so good at following the instructions" that "from the point of view of someone outside the room" your responses are "absolutely indistinguishable from those of Chinese speakers." Just by looking at your answers, nobody can tell you "don't speak a word of Chinese." Producing answers "by manipulating uninterpreted formal symbols," it seems "as far as the Chinese is concerned," you "simply behave like a computer"; specifically, like a computer running Schank and Abelson's "Script Applier Mechanism" story-understanding program (SAM), which Searle takes as his example.

But in imagining himself to be the person in the room, Searle thinks it's "quite obvious . . . I do not understand a word of the Chinese stories. I have inputs and outputs that are indistinguishable from those of the native Chinese speaker, and I can have any formal program you like, but I still understand nothing." "For the same reasons," Searle concludes, "Schank's computer understands nothing of any stories" since "the computer has nothing more than I have in the case where I understand nothing." Furthermore, since in the thought experiment "nothing . . . depends on the details of Schank's programs," the same "would apply to any computer simulation" of any "human mental phenomenon"; that's all it would be, simulation. Contrary to "strong AI," then, no matter how intelligently a computer seems to behave and no matter what programming makes it behave that way, since the symbols it processes are meaningless to it (they lack semantics), it is not really intelligent. It is not actually thinking. Its internal states and processes, being purely syntactic, lack semantics (meaning); so it doesn't really have intentional (i.e., meaningful) mental states.
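The purely syntactic character of the room's rulebook can be made concrete with a short sketch.  The following Python fragment is only an illustration, not Searle's or Schank's actual program: the rule table and the symbols in it are hypothetical stand-ins, and the point is that the program correlates input shapes with output shapes without anything in it representing what any symbol means.

    # A minimal, hypothetical sketch of rule-following by shape alone, in the
    # spirit of the Chinese Room. The table pairs "question" symbol strings
    # with "answer" symbol strings; the program never represents what any
    # symbol means, only which shapes go with which.
    RULES = {
        "你好吗": "我很好",      # shapes paired by rule, not by meaning
        "故事长吗": "故事不长",
    }

    def answer(question: str) -> str:
        # Matching is pure string identity: symbols are identified
        # "entirely by their shapes," exactly as in Searle's setup.
        return RULES.get(question, "不知道")  # a default shape to emit

    if __name__ == "__main__":
        print(answer("你好吗"))  # emits the correlated shapes: 我很好

From outside the room, the printed output may look like the answer of a Chinese speaker; inside, there is only table lookup, which is Searle's point about syntax without semantics.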

The implications of this argument for syntactic interpretation and strong AI are held to be the following: first, syntactic interpretation fails because the formal syntax of a computer program has been shown not to be intrinsically semantic, and second, strong AI fails because a system's behaving as if it had mental states is insufficient to establish that it does in fact have those states.  This worry carries over into Searle’s argument that computing devices are not truly able to think in the conventional sense that you and I do.  The further implications for thought go into John Searle’s conception of ...
