> " If I make a computer program that models one set of data as another
> set of data, why would some third representation arise to mediate the
> two? More importantly how could it arise? It doesn't make sense."
> I can't even follow this objection/question.
A good reason to consider it more closely.
>I model the world, and
> experience my model.
If you model the world directly, then you are experiencing the world and don't need a model. If you mean that your brain models the world, then it is your model that is experiencing your brain, and your brain that is experiencing the world (without a model). None of this matters, though, since at some point you have to experience something. If you don't need a model to experience your model of the brain's experience of the world, then why would you need any model at all to experience anything? What is it about the brain that requires an intermediary model for it to be what it already is, and to make it seem like it is doing something other than what it is already doing?
> I write AI algorithms that do that all the time.
> Then I have an identity theorem that says that the model IS the
> experience. Done. What's the problem? What doesn't make sense?
Your algorithms only model information for your benefit. The computer doesn't model anything. All the computer knows is 'if byte x at memory location m is value n, then change bytes y at memory locations m to values n.' It is entirely dependent upon a material substrate that can be used to detect, change, and detect changes within it.
In a computer program there is no model, because it isn't literal to begin with. It's all a model - for our eyes and fingers and minds, not for the computer. The experience of the computer is only the microelectronic phase transitions (and whatever qualia might be associated with them from the inside... certainly nothing like the software GUI that we see with our eyes and understand with our minds from the outside).
Models don't ex-ist. They in-sist as part of the sense-making experience of a sophisticated organism which has evolved cognitive elaboration to its sense channels. A sign that says STOP is a model of the experience of being warned to decelerate immediately and avoid oncoming traffic and moving violations. It's only a model to us, because we know how to read it. If the model ex-isted, then deer would obey stop signs and be better off.
> In any event, while interesting, this conversation needs to be tabled
> while we work out the camp hierarchy. You can try to convert me to a
> different place in the hierarchy after we work out what the hierarchy
> should look like.
Ok, but I don't know what the rush is to crash this train into the brick wall it's headed for. I'm trying to show you how to switch tracks.
> Specifically, we need to come to a conclusion about how we are going to
> define qualia in the RQT camp, so that us RF's can decide if we belong
> in RQT, or if we need to make our own representational experience
> theory camp. If you guys really want to define qualia as some extra
> physical stuff, and if you think that matches common usage best, then
> that's ok, we will just create a RT or RET (Representational Experience
> Theory) camp, that doesn't involve these extra things, and join that.
> Eventually, RQT would also join this new super camp. But if we go that
> route, then it seems to me that there is no difference between RQT and
> PD... since it is PD that specifically tries to suppose that there is
> something EXTRA involved in qualia. The purpose of splitting them was
> to push all that belief in "extra stuff" down into the PD camp, so that
> there is a place for those like myself, Mike, and Dennett, that don't
> believe in the extra stuff.
I gave it a shot earlier, playing devil's advocate, but no matter how I try to make RQT into a plausible and defensible position, it can only unravel itself, like the weightless, ungrounded approach that it seems to be.