Number of posts in this thread: 3
Brent_Allsop replied 12 years ago (May 9th 2011, 2:20:37 am)
Dystopian said: "I think that the internal simulation or model that is built on qualia is fundamental." Yay! I see so many AI people think that internal models and knowledge aren't important because they are naive realists, and/or they think we don't have qualia. In my opinion this is significantly holding such people back in their work. Would you agree with this?

Similarly, I think understanding what qualia are, how we use such phenomenal differences to distinguish a green leaf from a red strawberry, and how all of this, along with our cognitive knowledge of it (like our memory of how sweet the red ones are), is unified into one space of conscious knowledge is critically important, and that ignoring it is also holding neural researchers and AI people back. Sure, they can make ever more progress at learning ever more abstracted information about the behavior of what is there, abstractly simulating it and so on. But if you also asked what that neural stuff is like to experience, and how we are aware of all of it at the same time, would you agree that such could give us a significant leg up in any AGI and simulation efforts?

If so, can we find what we do agree on and merge it into a unanimous super camp? Also, would you be interested in helping us with [http://canonizer.com/topic.asp/88/6 Representational Real Theory], or perhaps one of its yet-to-be-falsified sub camps? I'd love to know which type of representational realist you are.

Brent Allsop
Dystopian Welcher replied 12 years ago (May 7th 2011, 6:56:06 pm)
Hi Brent, I may have been unclear about specific ideas in my camp statement. I think that the internal simulation or model that is built on qualia is fundamental. There must be some means of internal modelling, abstraction and simulation for a thinking entity to make predictions and anticipate solutions to problems, which would lead us to infer that such an entity is intelligent. But one must also consider that an entity which perceives and has the capacity for abstraction but exhibits no response cannot be determined to have intelligence from external observation, akin to locked-in syndrome. Therefore I must concede that modelling behavior is a necessity for building useful AI, such that we can recognize an entity's external responsiveness to internal cognition and thus interact with it in a meaningful way.

However, the internal model (qualia) can perhaps be represented in many different ways, not strictly as we would see it or hear it or perceive it. How does a cephalopod think? How does it model the world? How does it come to assume that there may be something interesting inside a mason jar and figure out how to open it? Where is the line between feedback loops which generate impulses and the kind of behavior that is generated by more sophisticated pattern recognition through abstraction, which we recognize as awareness?

Our current technology has exceptional power for modeling the world in arbitrary ways, and we can also model behavior, but we need lean and mean pattern recognition to reach the goal of AGI. My suspicion is that Wolfram's research may lead to a processing architecture that can provide a kind of universal pattern recognition and fill in those arbitrary levels of abstraction.
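(As a purely illustrative aside on the "internal modelling and simulation for prediction" idea above: the toy Python sketch below shows an agent that simulates candidate actions against its own world model before acting. The class names, the one-dimensional world, and the one-step lookahead are assumptions for illustration only, not anything either poster describes.)

```python
# Minimal, assumed sketch: an agent keeps an internal model of a 1-D world
# and simulates candidate actions internally before committing to one.
# All names and the toy dynamics are illustrative, not from the thread.

class InternalModel:
    """Toy world model: the agent believes its position changes by the action taken."""
    def simulate(self, position, action):
        # Predict the next position without acting in the real world.
        return position + action


class Agent:
    def __init__(self, goal):
        self.goal = goal
        self.model = InternalModel()
        self.position = 0

    def choose_action(self, candidate_actions=(-1, 0, 1)):
        # Internally simulate each candidate action and pick the one whose
        # predicted outcome lands closest to the goal.
        def predicted_error(action):
            predicted = self.model.simulate(self.position, action)
            return abs(self.goal - predicted)
        return min(candidate_actions, key=predicted_error)

    def act(self):
        action = self.choose_action()
        self.position += action  # here the real world happens to match the model
        return action


if __name__ == "__main__":
    agent = Agent(goal=3)
    for step in range(5):
        print(f"step {step}: position={agent.position}, action={agent.act()}")
```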
Brent_Allsop replied 12 years ago (May 7th 2011, 9:17:11 am)
Hi Eric, Welcome to the milestones survey. It's great to have more camps start to show up, so thanks for this great contribution; it is a great step forward. I see some possibility of achieving consensus between our camps. Worst case, I would like to move our camp into a supporting sub camp of your camp, since I agree with most everything you say. But first, there may also be some significant disagreements, so I'd like to clear those up and clarify them explicitly.

First off, your entire statement is only about behavior and abstracted simulation of the human neocortex. Have you nothing to say about any of this beyond mere behavior? Anything at all about what these neural correlates are phenomenally like to experience? I'm in the camp that believes that so completely ignoring everything but behavior is already significantly holding back the science of the brain, and AGI. Perhaps you are a qualophobe who thinks qualia either don't exist or aren't important? Or, as a qualophile, would you agree that we could continue to "simulate" ever more of the abstracted behavior of neurons and still completely miss this?

I'd love to get more of your thoughts on any of this, to see if we can achieve a consensus by combining anything from my camp with yours.

Thanks, Brent Allsop