Perception → If it perceptually seems to S as if p and S has no reason to believe otherwise, then S has no duty to refrain from believing that p.
Memory → If S recalls that p and S has no reason to believe otherwise, then S has no duty to refrain from believing that p.
As a result, a deontologist can comfortably make claims such as:
- The table seems to be red. → This belief is justified through perception and experience.
- I seem to see a red table and I have no evidence to the contrary, so I am justified in believing that there is a red table there.
The case for the deontologist appears sound thus far; however, one must look directly at the Hard Cases and realize that they are not bound by the same epistemic duties that we are, since for them perceptual beliefs are involuntary. That is, they lack any voluntary intellectual control. They see and experience what they can, but they do not have the capacity to withhold belief even when given the opportunity. The Hard Cases hold beliefs that are beyond their intellectual control, and so they cannot be held accountable for those beliefs: they cannot refrain from believing what they believe. Their perceptions might arise, for example, because they ingest a drug every day that makes them think they see our world, when in what we take to be reality they are blind. It is useful to set out the definitions of an involuntary belief and the involuntariness principle here:
Involuntary Belief:
S believes p involuntarily if and only if S believes p and S cannot refrain from believing p.
Involuntariness Principle:
If S’s believing p is involuntary, then S cannot have an epistemic duty to believe p or an epistemic duty to refrain from believing p.
We have perceptions just as the Hard Cases do, but perceptions and memories follow an involuntary belief pattern that a deontologist can reconcile with epistemic duty and a Hard Case cannot. We must therefore say that “soft involuntary” perceptions belong to a deontologist’s method of operation, since perceptions of this kind remain subject to refutation and correction by new and better evidence. A “soft involuntary” perception can be illustrated as follows:
Suppose you are subject to a reality in which you see butterflies everywhere. The occurrence is so commonplace in your realm that for a time you think nothing of it, and after a while you come to regard the butterflies as part of your reality. Most importantly, we (the observers) know that you have the intellectual capacity, freedom, and control (this is key) to change your view of the world (namely, to conclude that the world is not filled with butterflies) given better evidence. Such evidence might be the discovery that you have been drugged for the past 20 years as part of an FDA trial and that the trial is now over; knowing that your prior state was an altered state, you are within your capacity to “switch” your perceptual model to match the world you now perceive.
Simply put, hard involuntary cases (such as those experienced by the Hard Cases) are the flip side of the soft ones: the subject lacks the intellectual control to make the judgments that soft involuntary cases permit. The Hard Cases are stuck in an evidentiary realm that is not subject to epistemic scrutiny; that is, if something odd but constant is going on (such as a world full of butterflies), they cannot make an honest decision as to whether the perceptual realm they are experiencing is true. They have only their own realm to operate on and nothing more. Must they be justified in the deontological sense? They seem to possess all their discretionary powers within the evidentiary world they experience, but we know better: they are stuck in a reality in which they lack the intellectual control to change their basic evidentiary knowledge. One method of attack is appropriate, aimed at the first part of the first premise, where it is said that Hard Cases are justified in believing p. If it can be shown that the Hard Cases are not justified in believing p, then they do not have justified involuntary beliefs, and because their beliefs are not justified, they cannot receive the same treatment as a deontologist. Thus, the deontological theory is saved from the above refutation.
A deontological reply must address this objection: Hard Cases are justified in believing p. If S and S’ have the same evidence, and what is a defeater for one is a defeater for the other, then if S is justified, so too is S’. In this case, we can take the Hard Cases to be S’ and ourselves to be S. The best reply here is that S’ is in fact never given the same evidence that we are. This is a somewhat backhanded claim, but it is one that someone like Strawson would find very relevant. The likeness of one’s self and one’s conscious being can never be transferred to, or re-established in, another being in exactly the same way. No matter how hard one tries to simulate, let alone directly copy, one’s self and one’s attached experiences, we cannot even begin to experience the same scientific and biographical truths that another has. This can be exemplified in the case where both you and I look at a pen. Although we may both have the opportunity to look at the exact same pen under exactly the same circumstances (assuming that atmospheric conditions are held constant and nothing else differentiates the two situations other than time), this still would not satisfy the condition that we are experiencing exactly the same thing. Even if we had the exact same vantage point at exactly the same time to view the exact same pen, we are still different people, and because of this we can never say that we are subject to the exact same experience. Put another way, I can never say that I am Peter Markie, Ph.D. I can never say that I am part of the faculty of the University of Missouri – Columbia Philosophy Department. I can never say that I have taught a class in epistemology at X time on X date. I can, however, say that I saw a pen at X time on Y date in an A fashion. Yet I cannot say that I saw the pen the way that Peter Markie, Ph.D. has seen it, because I am not he.
We cannot have the same sensory experience and evidence, because both are shaped and described in terms of how we individually experience things. If we are not having the exact same experience, then we cannot have the exact same evidence. Evidentiary claims are founded on experiential claims: one cannot be justified in believing that there is evidence for a pen in front of us without going through the proper experience to ground such a claim. So, even though we are two different people, the principle above still applies to our situation, because we seemingly have nothing with which to refute the world we live in. We are in voluntary control of our intellectual processes, and as such our freedom allows us to say that while we may have had comparable experiences, our evidence is equivalent rather than the same. Since our perceptual stance is not fine-grained enough to distinguish between the two, we can get away with saying that our evidence is the same. Given this, it is all the more important to note that a Hard Case may go through exactly the same process that you and I do. A Hard Case and I may both see a chair, and we could run through the preceding argument again. Here, though, there is a distinct difference: the Hard Cases are not like us. They have no likeness to compare to ours, because we are beings with the ability to discern whether we have voluntary or involuntary control of our intellectual processes; this is part of our description of mental processes. We can surely tell whether someone has the capacity for voluntary intellectual activity, and because of this we can immediately point out that Hard Cases do not have the same voluntary epistemic processes we do. For that reason, they are not justified in believing what we believe, even if their beliefs happen to coincide with ours 100% of the time.
If we are to evaluate a set of beliefs, we must examine the factual and epistemic probabilities of truth for those beliefs. In examining such probabilities, one risks mixing externalist thinking with internalist thinking; the concept I choose to present does carry a somewhat external element alongside the internalist view. The idea of a transcendental metaphysical parallel plane is one that John Searle employs, especially when discussing consciousness in humans versus computational devices. In fact, for the example used in the initial discussion, Hard Cases offer an almost perfect comparison to computational devices. For this discussion to carry any meaning, we must examine Hard Cases and computational devices (specifically a Strong Artificial Intelligence Device, or SAID: a computational device that may mimic human decisions, as in the Chinese Room argument John Searle uses in Minds, Brains and Science) only on the plane of decision-making. Comparing them in any other respect would seemingly make this whole discussion moot; thus, for the sake of a visualizable example, a SAID serves as a comparison only at the level of seeming decision and true belief. Before the discussion wears on, let me be clear that from my standpoint and John Searle’s, computational devices cannot and will not have consciousness and will never have the ability to be human, but they may be able to mimic the representation of human thoughts without having them. I must articulate this point because it is exactly what the Hard Cases are doing.
Searle would begin by saying that the characteristic mistake in the study of consciousness is to ignore its essential subjectivity and to try to treat it as if it were an objective, third-person phenomenon. Instead of recognizing that consciousness is an essentially subjective, qualitative phenomenon, many people mistakenly suppose that its essence is that of a control mechanism, a certain set of dispositions to behavior, or a computer program. In the case we are examining, we are confusing the subjective phenomenon that we enjoy (voluntary control of belief) with that of the Hard Cases (who act as if they have voluntary intellectual control but actually do not). The two most common mistakes about consciousness are to suppose that it can be analyzed behavioristically or computationally, and Searle would contend that the Turing test leads us to make precisely these mistakes. According to Alan Turing, the question whether machines can think is itself “too meaningless” to deserve discussion. However, if we consider the more precise and related question whether a digital computer can do well in a certain kind of game that Turing describes (“The Imitation Game”), then, at least in Turing’s eyes, we have a question that admits of precise discussion; indeed, Turing himself thought it would not be long before we had digital computers that could “do well” in the Imitation Game. This test leads us to suppose that for a system to be conscious, it is both necessary and sufficient that it have the right computer program, or set of programs, with the right inputs and outputs. Searle’s objection to behaviorism is that a system may behave as if it were conscious without actually being conscious. In our case, a being may act as if it is forming meaningful justified beliefs without thinking as we do and without making justified beliefs as we do.
There is no logical connection, no necessary connection, between inner, subjective, qualitative mental states and external, publicly observable behavior. Of course, conscious states do characteristically cause behavior, but the behavior they cause must be distinguished from the states themselves. The same mistake is repeated by computational accounts of consciousness.
Just as behavior by itself is not sufficient for consciousness, so computational models of consciousness are not by themselves sufficient for consciousness. The computational model of consciousness stands to consciousness in the same way the computational model of anything stands to the domain being modeled. Nobody supposes that a computational model of rainstorms in London will leave us all wet, yet many make the mistake of supposing that a computational model of consciousness is somehow conscious.
In essence, Searle would say that the same mistake is made in both cases.
Searle offers a proof that the computational model of consciousness is not sufficient for consciousness. He goes so far as to say:
I have given it many times before so I will not dwell on it here. Its point is simply this: Computation is defined syntactically.
He defines computation in terms of the manipulation of symbols. Syntax by itself can never be sufficient for the sort of content that characteristically goes with conscious thoughts, because having zeros and ones by themselves is insufficient to guarantee mental content, conscious or unconscious. This argument is sometimes called “the Chinese Room argument” because Searle originally illustrated the point with the example of a person who goes through the computational steps for answering questions in Chinese but does not thereby acquire any understanding of Chinese. Similarly, the Hard Cases may be practicing what they conceive of as justification of beliefs in their own world, but they lack the proper basis for making that intellectual leap: syntax by itself is not sufficient for semantic content, and the form of justification does not by itself yield justification. In the same vein, Searle says:
I was conceding that the computational theory of the mind was at least false. But it now seems to me that it does not reach the level of falsity because it does not have a clear sense. Here is why. The natural sciences describe features of reality that are intrinsic to the world, as it exists independently of any observers.
He then says that gravitational attraction, photosynthesis, and electromagnetism are all subjects of the natural sciences because they describe intrinsic features of reality. Such features as being a bathtub, being a nice day for a picnic, being a five-dollar bill or being a chair are not subjects of the natural sciences because they are not intrinsic features of reality. All the phenomena Searle names are physical objects and as physical objects have features that are intrinsic to reality. But the feature of being a bathtub or a five-dollar bill exists only relative to observers and users.
Understanding the distinction between those features of reality that are intrinsic and those that are observer-relative is very important both to John Searle and to this discussion. Gravitational attraction is intrinsic. Being a five-dollar bill is observer-relative. Justification is intrinsic; being justified is observer-relative. Justification is a specific property, and the word “being” assigns a very specific nature to the term it modifies: being justified means that one is imbued with the qualities of justification. Thus, it is clear that being justified is an observer-relative feature. Now the really deep objection to Hard Case theories of justified belief formation can be stated quite clearly. Being justified in a belief does not name an intrinsic feature of reality; it picks out an observer-relative feature, because the idea of being justified is defined in terms of the interpretation of empirical reality. The notion of “being justified” is not a notion of universal reality; it is an artifact of someone’s particular reality. Someone counts as being justified only if they are treated or regarded as being justified. The Chinese Room argument showed that semantics is not intrinsic to syntax. There are no purely physical properties that zeros and ones, or symbols in general, have that determine that they are symbols; something is a symbol only relative to some observer, user, or agent who assigns a symbolic interpretation to it. Thus, for Searle, the question “Is consciousness a computer program?” lacks a clear sense. He further articulates that if one asks, “Can you assign a computational interpretation to those brain processes which are characteristic of consciousness?” the answer is that one may assign a computational interpretation to anything; and if one asks, “Is consciousness intrinsically computational?” the answer is that nothing is intrinsically computational.
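The Chinese Room point can be made concrete with a toy sketch. The program below "answers" Chinese questions by pure symbol matching against a rule table; nothing in it represents the meaning of any symbol. The rule table and phrases are my own hypothetical illustration, not Searle's actual example rules:

```python
# Toy "Chinese Room": a rule book (lookup table) maps incoming symbol
# strings to outgoing symbol strings. No meaning is represented anywhere;
# the program only matches and emits uninterpreted symbols.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",              # "How are you?" -> "I'm fine, thanks."
    "桌子是红色的吗？": "是的，桌子是红色的。",  # "Is the table red?" -> "Yes, it is red."
}

def chinese_room(symbols: str) -> str:
    """Return a reply purely by symbol lookup; understanding plays no role."""
    return RULE_BOOK.get(symbols, "对不起，我不明白。")  # default: "Sorry, I don't understand."

print(chinese_room("你好吗？"))  # a fluent-looking reply, with zero comprehension
```

The output is behaviorally indistinguishable from an answer given with understanding, which is exactly the gap the essay exploits: a Hard Case's "justification" may have the same form-without-content character.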
Computation, then, exists only relative to some agent or observer who imposes a computational interpretation on some phenomenon. Of course, if a Hard Case were to talk to another Hard Case, each would think the other perfectly justified in what they believe. Specifically, if they were to examine a red table, they would be perfectly content to say that they are justified in believing that they see a red table. They might even look at us incredulously when we ask whether they are really justified in seeing a red table. The fact of the matter is that we see a reality that they do not. We see that they are stuck in a reality where they are forced to make decisions without any way to reconcile potentially “false” information. We know that they exist on a phenomenological plane that does not allow for any voluntary intellectual control, and that is the bottom line: we know and they do not. This is a clear-cut but potentially problematic stance, since it may allow one to say that we could be subject to the same sort of situation (where we are the Hard Cases for someone else). However, since the case involves us in particular (human beings in the sense that we see ourselves: normal in every capacity and able to identify Hard Cases as not being “like us”) versus Hard Cases, we can set aside nearly every infinite-regress objection as moot. We know that Hard Cases do not see the world the way we do, and because of this they are not justified in their beliefs.
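The claim that computational (and symbolic) properties are observer-relative can also be illustrated with a small sketch: one and the same physical bit pattern supports several incompatible readings, none of which is "in" the bits themselves. The particular interpretations chosen below are my own illustration:

```python
# One physical bit pattern, three observer-imposed interpretations.
bits = b"Hi!"  # the bytes 72, 105, 33 -- just a pattern, intrinsically

# Observer 1 reads it as English text:
as_text = bits.decode("ascii")

# Observer 2 reads the same bytes as a single big-endian integer:
as_int = int.from_bytes(bits, "big")

# Observer 3 reads each byte's low bit as a boolean flag:
as_flags = [bool(b & 1) for b in bits]

print(as_text, as_int, as_flags)
```

Nothing about the bytes themselves settles which reading is "correct"; an interpretation must be imposed from outside, which is the sense in which, for Searle, nothing is intrinsically computational.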
We can see that the Hard Cases have a set of evidence that they are in effect ignoring; by deontologists’ standards they are not being proper epistemic inquirers, although they are not at fault for this. Because they are in effect missing information that they could otherwise use to change their “involuntary beliefs” in the soft involuntary sense, they are not justified in believing what they do. This is the best deontological reply, and as such it works only on a limited scale: there would be considerable debate as to whether one’s likeness is actually transferable, so there would not necessarily be any outright agreement. On the other hand, it seems to provide the best available solution for preserving deontological theory, and it serves a very important function in helping us discern what evidence and experience really mean to us. I think this reply is successful on many grounds. It helps to preserve deontological theory; it helps to distinguish people who are phenomenologically stuck on one evidentiary platform from those who can discern the best evidence available; and it helps to lay a foundation for the definition of “same.” On these grounds, success is defined by the outright refutation of the attack on deontological theory: the reply preserves the notion of voluntary belief while also accounting for those involuntary beliefs (soft involuntary) that we can access through our epistemic duty.
The Problem of Consciousness. J. Searle.
Minds, Brains and Science. J. Searle.