Topic: Dissolving the "Hard Problem" of Consciousness
Thread Created at Mar 6th 2013, 5:55:30 am | Started by Multisense Realism
Number of posts in this thread: 5
Multisense Realism replied 11 years ago (Mar 25th 2013, 9:20:34 pm)
Proposed consolidation of open Philosophy of Mind issues as aspects of the "Presentation Problem", or Unified Presentation Problem (UPP). I propose that the Hard Problem, the Explanatory Gap, the Binding Problem, and the Symbol Grounding Problem should all be understood as aspects of the same Unified Presentation Problem.

1. Hard Problem = Why is X presented as an experience? (X = "information", logical or physical functions, calcium waves, action potentials, Bayesian integrations, etc.)
2. Explanatory Gap = How and where is presentation accomplished with respect to X?
3. Binding Problem = How are presented experiences segregated and combined with each other? How do presentations cohere?
4. Symbol Grounding = How are experiences associated with each other on multiple levels of presentation? How do presentations adhere?
5. Mind-Body Problem = Why do public-facing presences and private-facing presences seem ontologically exclusive and aesthetically opposite to each other?

My own proposed answers to these problems can be found here: http://s33light.org/post/46246847025

Thank you, Craig
richwil replied 11 years ago (Mar 6th 2013, 2:56:33 pm)
Talk of encoding qualia in bit patterns assumes that the theory of computational functionalism (CF) is true. In addition to physicalism - the mind is instantiated in the brain - CF holds that the brain is just an information processing system. If CF were true then you could encode a quale as a bit pattern in a computer. Simply envisaged, you could use 1 for red and 2 for blue, so a computer system with normal colour processing would output 1 for colour input from a UK post box and 2 for a clear sky. One with an inverted system would output 2 for the box and 1 for the sky. But this is arbitrary. Functionally, the two systems are identical, yet they output different values for the same input. Therefore CF is false; qualia cannot be data.

The inverted qualia argument does not defeat non-functionalist physicalism, however. Yes, the conscious brain is not at all like a computer: qualia are not data, colours are not encoded by bit patterns in the brain. This does not rule out the possibility that consciousness is brain activity, just that it is not information processing.

I would add that the notion that it would be possible to encode information in a glutamate molecule is silly. You can, of course, encode information in DNA sequences, but that is because you can change the sequence of bases, whereas one glutamate molecule is the same as any other.
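richwil's 1-for-red / 2-for-blue example can be sketched in a few lines of Python. The table and function names here are illustrative inventions, not anything from the post; the point is only that the assignment of code values is arbitrary, so two internally different systems can be behaviourally indistinguishable:

```python
# Two colour-processing systems with swapped internal codes (richwil's example).
# All names and values here are illustrative.

NORMAL = {"red": 1, "blue": 2}     # 1 encodes red, 2 encodes blue
INVERTED = {"red": 2, "blue": 1}   # the same mapping, swapped

def perceive(scene_colour, encoding):
    """Return the raw internal code a system assigns to a colour input."""
    return encoding[scene_colour]

def report(scene_colour, encoding):
    """Decode the internal code back through the system's own table."""
    decode = {code: colour for colour, code in encoding.items()}
    return decode[perceive(scene_colour, encoding)]

# Raw codes differ: a UK post box is 1 on one system and 2 on the other...
assert perceive("red", NORMAL) != perceive("red", INVERTED)

# ...but each system's own decoded report is identical, so from the outside
# the choice of code is invisible, i.e. arbitrary.
assert report("red", NORMAL) == report("red", INVERTED) == "red"
```

Whether one concludes from this arbitrariness that CF is false, as richwil does, is of course the philosophical question at issue; the sketch only illustrates the arbitrariness itself.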
Multisense Realism replied 11 years ago (Mar 6th 2013, 7:43:39 am)
The problem that I have with all of these discussions is that they begin from flawed assumptions, so that however rationally they proceed from there, they ultimately lead back, in a circular fashion, to the original bad assumptions. The first is that, as Mark Waser writes in his article: "1. There really is an external reality composed of physical objects (and their movements extending to the extent of complete processes like orbiting a star, etc.)." There is no support for this from a completely objective point of view. While we can judge that there certainly seems to be a universe which goes on without us being present, there is no reason why we should jump to the conclusion that there is such a universe which is independent of all beings - of all perception and experience.

It should be a clue that when we imagine the universe without perception, we cannot help but imagine a universe exactly as it is from our experience. Atoms are very small, a thousand years is very long, stars are bright, rocks on planets are heavy, etc. Of course, this is complete nonsense. Without a human-sized, lifetime-scaled phenomenon to serve as the orienting frame of reference, there is no qualitative presentation which can have precedence over any other. Past, future, fast, slow, enormous, tiny, etc... all meaningless. There is no here or there, no way to recognize any distribution of particles as a pattern or any pattern of energy as a particle. Indeed, a universe without an observer is indiscernible from nothing at all. It could only be silent, invisible, intangible. In short, there can be no "external reality".

The second great mistaken assumption about the Hard Problem is that qualia has something to do with representation. It does not. Qualia is presentation. It is the source of all presence. Representation does not need qualia at all. As we have seen, any kind of quality which can be detected by an instrument can be digitized into a generic representation.
Qualia would not only be superfluous and unexplainable, it would be an impediment to the transfer of information. The third false assumption in the Hard Problem discussion is derived from the second - after jumping to the conclusion of qualia as representation, the next premature jump extends the metaphor to conscious experience as a model or simulation. If qualia were a representation, an encoded symbol, then it would indeed make sense that a network of symbols (a model) is a good way of describing our experience. Quoting Waser: "Per Hofstadter [2007], Damasio [2010], and others [Llinas 2001], consciousness/self is simply a process running on a physical ("strange") loop/feedback network created per Richard Dawkins' [1976] speculation that "perhaps consciousness arises when the brain's simulation of the world becomes so complete that it must include a model of itself"."

A natural speculation if you are approaching consciousness from the outside, where it has no qualia or presence and can only be inferred by smuggling in our own knowledge of our own consciousness. Consciousness does rely heavily on feedback loops, but feedback of what? Loopness? There's no physics there, no participation or presence. In reality, what is being looped can only be something which is already sensory or proto-sensory for it to conceivably scale up to the full-blown human interiority which we experience. A model need not have any more of an experienced presence than a symbol does. It is simply processed from one encoded form to another, and never is there any decoding into a significant experience. Inputs are compared with logical expectations, outputs are produced. It doesn't matter whether those outputs are hunting a wild boar or incrementing a memory register. There is no way to assign any visceral realism or qualia to differentiate the two, and of course, none are needed. Critical processes need only be assigned higher priority, not some kind of fireworks display of flavors and colors.
As far as the 'is my red your green?' conversation goes, the confusion arises because visual qualia is unusual in that it is presented as the most object-like feeling that we can experience. Red is still a feeling, but it is a visual feeling, which seems to be scoped or tuned toward experiences which convey the third-person perspective - detached, fully available to examination as a knowable shape/design/color. It is a feeling which presents itself in the most public-facing way. What this means is that visual qualia is actually a second-order qualia, the first being the feelings that we have *about* the color, or through the color. It is very confusing indeed if we don't realize this and conflate both qualia into a single one. From optical experiments we have seen that our visual sense is highly resilient to experimental manipulation, and will quickly route around tricks such as flipping the image upside down or pushing the chroma median away from true yellow. We get used to it, and our visual presentation returns to its inertial norm despite the altered optics. Combine these two and we can understand that it is a two-way street, with our first-order feelings driving the perceptual inertia from the inside out and the second-order visual feelings being partially driven by optics. I don't know if we all have the same green and red, but if I had to guess, I would say that we have first-order feeling qualia which are more human-universal and personal-idiosyncratic, and second-order visual qualia which are more local-physiological-cultural. Craig
ameetnsharma replied 11 years ago (Mar 6th 2013, 6:55:59 am)
Suppose a child is born, and he has some sort of permanent set of glasses or cybernetics placed on him so that he sees green when we see red and vice versa... He has had this since growing up... so when we see red, he sees green, but he calls it red, because he has no idea our experience is different... We likewise do not know that his experience is different. As an information processing system, is the child with his glasses or cybernetics different from an ordinary human being? Remember, we're only looking at input-output mechanisms... There is no difference, right? So according to the hard materialist position, there is no difference in qualia between him and ordinary humans. At 30 years of age his cybernetics are removed... now are his qualia different? Are they closer to our qualia? Or were they closer when the cybernetics were on him?
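The input-output equivalence in this thought experiment can be sketched in Python. The two swaps (the glasses inverting the stimulus, and the child's vocabulary having been learned through those same glasses) cancel out, so the child's reports match everyone else's. All names below are illustrative, not from the post:

```python
# Sketch of the inverted-glasses thought experiment. The glasses swap red
# and green before the child's visual system sees them; the child learned
# his colour words through the glasses, so the swap never shows up in his
# reports. Names are illustrative.

SWAP = {"red": "green", "green": "red"}

def glasses(colour):
    """The permanent optics: invert red and green, pass everything else."""
    return SWAP.get(colour, colour)

def child_report(stimulus):
    # The child experiences glasses(stimulus), but he learned to call that
    # experience by the word everyone else uses for the raw stimulus.
    learned_label = {glasses(c): c for c in ["red", "green"]}
    return learned_label[glasses(stimulus)]

def ordinary_report(stimulus):
    return stimulus

# As input-output systems, the two are indistinguishable:
for colour in ["red", "green"]:
    assert child_report(colour) == ordinary_report(colour)
```

This only shows why no external test can tell the two apart; whether the *experiences* behind the identical reports differ is exactly the question ameetnsharma is pressing.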
Brent_Allsop replied 11 years ago (Mar 6th 2013, 5:55:30 am)
[http://transhumanity.net/authors/n/mark-waser Mark Waser] got his interesting paper entitled [http://transhumanity.net/articles/entry/dissolving-the-hard-problem-of-consciousness 'Dissolving the "Hard Problem" of Consciousness' published over on Transhumanity.org]. A great conversation ensued over there between the readers, but evidently things broke down and people's comments are no longer being posted. So I've proposed moving the conversation over here, to this thread at Canonizer.com, where I hope we can get some additional input. I saw a post go by, via e-mail, but I have no idea who posted it. I'd like to reply, so I'm going to include the entire post here, and then reply in a subsequent post to this thread. I hope whoever posted this will identify himself here. Upwards, Brent

=================================

Brent, I've been meaning to follow up with your "so we agree" post to see if we really agree, but I've been busy. So you seemed to ask that if my brain used some glutamate molecule to represent redness, and some computer used electrons, they would at least be different, right? And the answer is of course. They use different hardware in their map, so they are different. If you had two systems that used all the same abstract information processing, so that the information output was the same for the same inputs, but used different internal coding, would our qualia be different? Or, to make the question easier to understand, let's say you and I had nearly identical brains, but due to a genetic difference, my brain encoded redness with a glutamate molecule, and your brain used that molecule to encode greenness. Would that mean our qualia were inverted? It seems intuitive that the answer would be yes. But the answer, sad to say (for you), is NO. It can't be in that case. This is because if the information processing is the same, then we could change the encoding, and the person would then be unable to report that the encoding had changed.
If we move software from one computer that encodes 1's with a +5, to a second computer that encodes the 1's with a +3, could the software report that the encoding had changed? Not if the information processing stayed the same in the move. The software could not tell us, "yes, I can tell I'm running on the +5 version of my qualia now". Likewise, if we switched your brain to encode red the way my brain encodes red, would you be able to say, "I see, your red looks like my green"? The answer is NO, you couldn't. If we swapped your encoding, it means you would have no memory of how it was to compare to how it is now, to detect the difference.