Topic: Neural Substitution Argument

Camp: Agreement / Neural Substitution Fallacy

Camp Statement History

Statement :

Neural Substitution Fallacy



It can be argued that this neural substitution argument is the primary reason “functionalism” has become, and remains, the leading consensus theory.

“Functionalism” is a supporting sub camp of “Representational Qualia Theory,” which defines consciousness as “computationally bound elemental physical qualities like redness and greenness.” While at first glance the neural substitution argument appears to include redness and greenness functionality, the substitution is always performed on a system that does not include the required “binding mechanism.” The absence of a binding system is what enables the sleight-of-hand trick: redness and greenness functionality is never included in sufficient capacity.

To illustrate the fallacy, let’s do a neural substitution on a contrived, overly simplistic theory that is not qualia blind (i.e., one that does include these necessary qualia functions). Since Molecular Materialism is a simple theory, let’s start with a variant of that. Let’s assume the neurotransmitter glutamate, reacting in a synapse, has a redness quality, and glycine has a greenness quality. Let’s say there is a single neuron representing each pixel of visual knowledge of which we are consciously aware. If one of these neurons dumps glutamate into the synapse, that pixel of knowledge has the physical glutamate redness quality; the same holds when that pixel neuron switches to dumping glycine greenness as that single pixel on the surface of the strawberry turns green. In other words, when we are consciously aware of a strawberry, for each point on its red surface there is a neuron firing with redness glutamate, and for each point on the green leaves there is a neuron firing with greenness glycine.
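This contrived one-neuron-per-pixel model can be sketched as a toy simulation. Everything here (the names, the one-dimensional "strip" of pixels) is illustrative scaffolding, not part of the camp statement itself:

```python
# Toy sketch of the contrived Molecular Materialism variant described above.
# Each "pixel neuron" fires one neurotransmitter whose (assumed) intrinsic
# quality renders that pixel of conscious visual knowledge.

REDNESS = "glutamate"   # assumed in this toy theory to have a redness quality
GREENNESS = "glycine"   # assumed to have a greenness quality

def pixel_neuron(surface_point_is_red: bool) -> str:
    """One neuron per pixel: dumps glutamate for red points, glycine for green."""
    return REDNESS if surface_point_is_red else GREENNESS

# A 1x5 strip across a strawberry (four red points) ending on a leaf (green):
strip = [True, True, True, True, False]
firing = [pixel_neuron(p) for p in strip]
# firing == ['glutamate', 'glutamate', 'glutamate', 'glutamate', 'glycine']
```

The point of the sketch is only that, on this theory, the qualitative content of each pixel just *is* which neurotransmitter is being dumped.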

Let’s say there is a single downstream neuron that detects the glutamate or glycine from all these upstream pixel neurons and performs the necessary computational binding to compose a complete composite qualitative conscious awareness of a strawberry. This binding neuron would necessarily be a glutamate (redness function) detector. Anything other than redness functionality being presented to this binding neuron must produce a “not redness” output; otherwise the required ability to detect glutamate functionality would not be preserved.

Let’s simply use a binary 1 to simulate redness, and 0 to represent greenness. In parallel, let’s do a redness/greenness substitution to further illustrate the problem. In this example, the first pixel neuron replaced is one firing with redness in the middle of a patch of redness on the strawberry. The simulated neuron is now outputting a binary 1, while the inverted version is dumping greenness glycine. In order for the binding neuron to report that the particular patch was entirely red, you’d need to provide an interface system, as described in the argument, that would present glutamate to the real binding neuron whenever the simulated neuron produced a 1. In parallel, the inverted neuron’s greenness is being mapped back to redness.

Let’s progress to the point where about half of the real pixel neurons are replaced with simulated ones. At this point, we want to be able to switch between the real binding neuron and the simulated binding neuron. With the switch in the real binding neuron position, all the simulated neurons’ 1s need to be mapped back to redness to feed the binding neuron’s synapses. Then, when you switch to the simulated binding neuron, this mapping system for the simulated pixels is no longer necessary, as the simulated binding neuron takes 1s directly. Since half of the pixels are still real, they are dumping glutamate, which now needs to be mapped forward to the 1s that the simulated binding neuron requires. This extreme amount of change, performed in one atomic switching action, completely removes any knowledge from the system that could inform it of the significant internal qualitative functional change behind the seemingly consistent output. This is the sleight-of-hand.
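The atomic switch described here can be made explicit in a toy simulation: one action swaps both the binding neuron and the direction of every translation layer at once, so the composite output never wavers even though the internal representation does. All names and the mapping details are illustrative assumptions:

```python
# Half the pixel neurons are real (dumping "glutamate"), half are simulated
# (outputting binary 1). A translation layer adapts every pixel to whichever
# binding neuron is currently switched in.

REAL_PIXELS = ["glutamate"] * 5   # real neurons, still dumping glutamate
SIM_PIXELS = [1] * 5              # simulated replacements, outputting 1s

def real_binding_neuron(inputs):
    """The real binder only accepts genuine glutamate as redness."""
    return "red patch" if all(i == "glutamate" for i in inputs) else "not redness"

def sim_binding_neuron(inputs):
    """The simulated binder only accepts abstract 1s."""
    return "red patch" if all(i == 1 for i in inputs) else "not redness"

def composite(use_simulated_binder: bool):
    if use_simulated_binder:
        # Real glutamate must now be mapped FORWARD to 1s.
        inputs = [1 for _ in REAL_PIXELS] + SIM_PIXELS
        return sim_binding_neuron(inputs)
    # Simulated 1s must be mapped BACK to glutamate for the real binder.
    inputs = REAL_PIXELS + ["glutamate" for _ in SIM_PIXELS]
    return real_binding_neuron(inputs)

# One atomic switch changes the binder AND every mapping at once, yet the
# reported output is identical either way:
assert composite(False) == composite(True) == "red patch"
```

Nothing in the output records that the switch inverted the direction of every mapping; that hidden wholesale remapping is the sleight-of-hand the statement describes.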

The provided binding mechanism must also enable the “strongest form of effing the ineffable” (see the referenced material in “Representational Qualia Theory”). In other words, you could use an Avatar-like neural ponytail to connect to the neurally substituted system. A neural ponytail would enable someone to directly experience all of the experiences, not just half. If the necessary binding system wasn’t enabling you to detect whether the target neurally substituted system was using redness, greenness, or 1s to represent red things, it wouldn’t be functioning correctly.

Canonizer always stresses how falsifiability significantly improves the quality of camps. Towards this end, we admit that if anyone can provide a description of a theory that meets the above necessary definitions of consciousness, and then describe a neural substitution on that system in a way that there is no such sleight-of-hand (without doing violence to Occam’s Razor), the supporters of this camp would consider this camp to be falsified and abandon it. We would then leave it up to the experimentalists to determine which of these theories, if any, is THE ONE true theory that can’t be falsified.

Edit summary :
Submitted on :
Submitter Nick Name : Brent_Allsop
Go live Time :
Statement :

Neural Substitution Fallacy



It can be argued that this neural substitution argument is the primary reason “functionalism” has become, and remains, the leading consensus theory.

“Functionalism” is a supporting sub camp of “Representational Qualia Theory,” which defines consciousness as “computationally bound elemental physical qualities like redness and greenness.” While at first glance the neural substitution argument appears to include redness and greenness functionality, the substitution is always performed on a system that does not include the required “binding mechanism.” The absence of a binding system is what enables the sleight-of-hand trick: redness and greenness functionality is never included in sufficient capacity.

To illustrate the fallacy, let’s do a neural substitution on a contrived, overly simplistic theory that is not qualia blind (i.e., one that does include these necessary qualia functions). Since Molecular Materialism is a simple theory, let’s start with a variant of that. Let’s assume the neurotransmitter glutamate, reacting in a synapse, has a redness quality, and glycine has a greenness quality. Let’s say there is a single neuron representing each pixel of visual knowledge of which we are consciously aware. If one of these neurons dumps glutamate into the synapse, that pixel of knowledge has the physical glutamate redness quality; the same holds when that pixel neuron switches to dumping glycine greenness as that single pixel on the surface of the strawberry turns green. In other words, when we are consciously aware of a strawberry, for each point on its red surface there is a neuron firing with redness glutamate, and for each point on the green leaves there is a neuron firing with greenness glycine.

Let’s say there is a single downstream neuron that detects the glutamate or glycine from all these upstream pixel neurons and performs the necessary computational binding to compose a complete composite qualitative conscious awareness of a strawberry. This binding neuron would necessarily be a glutamate (redness function) detector. Anything other than redness functionality being presented to this binding neuron must produce a “not redness” output; otherwise the required ability to detect glutamate functionality would not be preserved.

Let’s simply use a binary 1 to simulate redness, and 0 to represent greenness. In parallel, let’s do a redness/greenness substitution to further illustrate the problem. In this example, the first pixel neuron replaced is one firing with redness in the middle of a patch of redness on the strawberry. The simulated neuron is now outputting a binary 1, while the inverted version is dumping greenness glycine. In order for the binding neuron to report that the particular patch was entirely red, you’d need to provide an interface system, as described in the argument, that would present glutamate to the real binding neuron whenever the simulated neuron produced a 1. In parallel, the inverted neuron’s greenness is being mapped back to redness.

Let’s progress to the point where about half of the real pixel neurons are replaced with simulated ones. At this point, we want to be able to switch between the real binding neuron and the simulated binding neuron. With the switch in the real binding neuron position, all the simulated neurons’ 1s need to be mapped back to redness to feed the binding neuron’s synapses. Then, when you switch to the simulated binding neuron, this mapping system for the simulated pixels is no longer necessary, as the simulated binding neuron takes 1s directly. Since half of the pixels are still real, they are dumping glutamate, which now needs to be mapped forward to the 1s that the simulated binding neuron requires. This extreme amount of change, performed in one atomic switching action, completely removes any knowledge from the system that could inform it of the significant internal qualitative functional change behind the seemingly consistent output. This is the sleight-of-hand.

The provided binding mechanism must also enable the “strongest form of effing the ineffable” (see the referenced material in “Representational Qualia Theory”). In other words, you could use an Avatar-like neural ponytail to connect to the neurally substituted system. A neural ponytail would enable someone to directly experience all of the experiences, not just half. If the necessary binding system wasn’t enabling you to detect whether the target neurally substituted system was using redness, greenness, or 1s to represent red things, it wouldn’t be functioning correctly.

Canonizer always stresses how falsifiability significantly improves the quality of camps. Towards this end, we admit that if anyone can provide a description of a theory that meets the above necessary definitions of consciousness, and then describe a neural substitution on that system in a way that there is no such sleight-of-hand (without doing violence to Occam’s Razor), the supporters of this camp would consider this camp to be falsified and abandon it. We would then leave it up to the experimentalists to determine which of these theories, if any, is THE ONE true theory that can’t be falsified.

Edit summary : Update to current terminology and appropriate references to stuff that has changed, significantly.
Submitted on :
Submitter Nick Name : Brent_Allsop
Go live Time :
Statement :

Neural Substitution Fallacy


David Chalmers talks about 'fading' or 'dancing' qualia and considers the possibility that some kind of 'dancing' or 'fading' qualia will occur as the neurons are replaced. He also argues that any such 'dancing' or 'fading' qualia are very unlikely. The Idealized Effing Theory World provides one idealized theory making precise predictions of just what kind of 'fading' and 'dancing' qualia will occur, what it will subjectively be like as you replace various neurons and systems in the brain, and why.
This idealized theory illustrates how the typical description of the transmigration process completely leaves out any system capable of 'binding' our conscious knowledge painted with phenomenal properties, so we can know, absolutely, what redness is like and how this is different from greenness. Our assumption is that leaving out this binding process is the 'fallacy' leading one to mistaken assumptions, such as that a phenomenal property can 'arise' from anything that is functioning properly, whether it be neurons or silicon. Our prediction is that Chalmers' "Principle of Organizational Invariance" is leading people astray in their effort to understand and discover phenomenal properties and the fundamental qualities of consciousness.
Within the Idealized Effing Theory World there is the all-important binding neuron that unifies the palette of neurotransmitters a brain is using to represent our phenomenal conscious knowledge of a strawberry. Obviously, if you replace any one voxel neuron, and the particular neurotransmitter it is firing to paint a red point in our conscious knowledge of the strawberry, the prediction is that the binding neuron, or process, will easily know that nothing but glutamate has the right quality. In other words, there won't be any kind of 'fading'; you simply won't be able to fool this binding system with any single simulated version of glutamate, at any single location representing the strawberry with redness.
The prediction is also that you will be able to replace the entire binding neuron, and all the neurons 'painting' our phenomenally colored knowledge of the strawberry by firing with the appropriate neurotransmitter, in such a way that whatever is modeling the entire system can be interpreted so as to enable it to 'pass' the Turing test, or claim that the simulated glutamate has a redness phenomenal property. But, since we will effably know otherwise, we will know why and how the zombie system using only abstracted knowledge is lying.
Our assumption is that reality will turn out to be much more like this idealized world, and that Chalmers is wrong when he argues that 'fading' or 'dancing' qualia are not very likely to turn out to be true.



Edit summary : Improve Name.
Submitted on :
Submitter Nick Name : Brent_Allsop
Go live Time :
Statement :

Transmigration Fallacy


David Chalmers talks about 'fading' or 'dancing' qualia and considers the possibility that some kind of 'dancing' or 'fading' qualia will occur as the neurons are replaced. He also argues that any such 'dancing' or 'fading' qualia are very unlikely. The Idealized Effing Theory World provides one idealized theory making precise predictions of just what kind of 'fading' and 'dancing' qualia will occur, what it will subjectively be like as you replace various neurons and systems in the brain, and why.
This idealized theory illustrates how the typical description of the transmigration process completely leaves out any system capable of 'binding' our conscious knowledge painted with phenomenal properties, so we can know, absolutely, what redness is like and how this is different from greenness. Our assumption is that leaving out this binding process is the 'fallacy' leading one to mistaken assumptions, such as that a phenomenal property can 'arise' from anything that is functioning properly, whether it be neurons or silicon. Our prediction is that Chalmers' "Principle of Organizational Invariance" is leading people astray in their effort to understand and discover phenomenal properties and the fundamental qualities of consciousness.
Within the Idealized Effing Theory World there is the all-important binding neuron that unifies the palette of neurotransmitters a brain is using to represent our phenomenal conscious knowledge of a strawberry. Obviously, if you replace any one voxel neuron, and the particular neurotransmitter it is firing to paint a red point in our conscious knowledge of the strawberry, the prediction is that the binding neuron, or process, will easily know that nothing but glutamate has the right quality. In other words, there won't be any kind of 'fading'; you simply won't be able to fool this binding system with any single simulated version of glutamate, at any single location representing the strawberry with redness.
The prediction is also that you will be able to replace the entire binding neuron, and all the neurons 'painting' our phenomenally colored knowledge of the strawberry by firing with the appropriate neurotransmitter, in such a way that whatever is modeling the entire system can be interpreted so as to enable it to 'pass' the Turing test, or claim that the simulated glutamate has a redness phenomenal property. But, since we will effably know otherwise, we will know why and how the zombie system using only abstracted knowledge is lying.
Our assumption is that reality will turn out to be much more like this idealized world, and that Chalmers is wrong when he argues that 'fading' or 'dancing' qualia are not very likely to turn out to be true.



Edit summary : Update this very old statement.
Submitted on :
Submitter Nick Name : Brent_Allsop
Go live Time :
Statement : The central part of the transmigration argument focuses on a small set of one or possibly more neurons, or some other small set of brain matter, being replaced with something like a silicon isomorph that behaves the same, perhaps through transducers, in stimulating any downstream motor neurons leading to the picking of the strawberry. It is usually proposed that the original isn't completely replaced, but rather simply configured with the silicon isomorph in such a way that it is easy to repeatedly swap which one is engaged in the system.
The conclusion drawn is that since, by definition, the silicon isomorph is stimulating any causally downstream neurons in an indistinguishable way, it must also result in the same phenomenal awareness somehow necessarily 'arising'. But there are several types of possible theories, including the Nature Has Phenomenal Properties theory, for which this is not necessarily true.
To see how this theory could work, consider that the set of neurons, or whatever, being replaced has a red phenomenal quality - in addition to any behavioral properties that cause-and-effect observation can detect. This particular spot of phenomenal red is being used by our brain as a voxel (3D pixel element) to represent a corresponding spot on the surface of the strawberry. This set of neurons is configured in the rest of our brain in such a way that this voxel of phenomenal red is, due to whatever phenomenal laws of nature there might be, unified together with all the other similar sets of neurons representing the rest of the voxels in a unified 3D space representing a strawberry patch we are looking at.
This theory predicts that it is this unified 3D representation, and the difference between the phenomenal red representing the strawberry at one spatial location and the phenomenal green representing the leaves at another, that is our awareness of the location of the strawberry amongst the leaves. And it is this conscious phenomenal knowledge that enables the downstream mechanism to choose to pick it and initiate the actions to move the arm to do just that.
The theory predicts that any intended switchable substitution set of circuits, composed of silicon or anything else, that would result in identical functionality of the entire system must then have, and be able to present, a similar phenomenal property for this same voxel into this unified phenomenal model of the strawberry patch that is our knowledge, before any equivalent awareness of such would be possible, and before any downstream functioning neurons could function similarly in the picking of the strawberry.
It could be that only a particular set of a specific kind of brain matter, in a particular active state, has a red phenomenal property, while a different set has a green phenomenal property representing the leaves. It could also be that neither silicon, nor anything else that is drastically different in its fundamental nature, has anything like these same phenomenal properties. Because of this, it could not recreate and present the same phenomenal spot of red into the unified system of conscious knowledge, making it impossible for any downstream behavior to be identical.
We should point out that these phenomenal properties are properties of the same brain matter that has the behavioral properties traditional scientific cause-and-effect observation can see. Behaviorally, we can see everything these neurons are doing, including what downstream neurons they are stimulating, and so on. But just as knowing that the surface of the strawberry has a causal behavioral property such that it reflects 700 nm (red) light tells us nothing of any phenomenal quality the same surface may or may not have, the same is true of brain matter. In other words, if we cut open the brain of whoever is experiencing this red, and shined a light on whatever gray matter has this red phenomenal property being used to represent conscious knowledge, it would surely be more likely to reflect gray light, or something other than 700 nm light. Phenomenal properties are in this way blind to mere traditional cause-and-effect observation and require effing of the ineffable to be demonstrably understood.
When we use our cause-and-effect based detectors, it is true that we can identically model all such behavior, including equivalent stimulation of causally downstream neurons. But you must also include the fact that a theoretical model that only includes discrete stimulation of downstream neurons through synapses does not, alone, resolve the 'binding problem' of how everything is unified into one world of conscious awareness. There must be some additional physical behavior that must be modeled, which achieves this unification of awareness. The Smythies-Carr Hypothesis is one of many possible theories that includes a natural behavior which could accomplish such binding.
The nature of our conscious knowledge includes introspective subjective knowledge of the phenomenal nature of what is doing the representation. When we are tasting salt, we can imagine throwing a switch in someone else's mind, enabling them to experience the same thing, and thereby effing what salt tastes like to us, to the other person. Any mechanical isomorph system must include similar knowledge, if it is indeed expected to behave the same, especially when it is asked such a question as: "What is red like for you?".
We can easily make a 'zombie' robot that knows about the strawberries and the leaves, by representing red with an abstract 1, and green with an abstract 0. Though such abstract knowledge could enable the system to be intelligent enough to behave the same as far as picking the strawberry, it, alone, would certainly not act the same if we asked it what its red was like.
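The 'zombie' robot described here is easy to sketch. With abstract 1/0 knowledge, correct picking behavior falls out immediately, but nothing in the representation answers the quality question beyond its own abstract token. The names and the one-dimensional "patch" are illustrative assumptions, not the statement's own formulation:

```python
# Abstract "zombie" representation of a strawberry patch:
# 1 = strawberry (red), 0 = leaf (green).
patch = [0, 1, 0, 0, 1, 1, 0]

def pick_strawberries(patch):
    """Purely abstract knowledge suffices for correct picking behavior."""
    return [i for i, cell in enumerate(patch) if cell == 1]

def what_is_red_like():
    """The quality question: all this system has to offer is its abstract token."""
    return "1"

# Behaviorally competent at picking:
picked = pick_strawberries(patch)   # -> [1, 4, 5]
```

The contrast the statement draws is between these two functions: the first behaves the same as a phenomenally conscious picker, while the second exposes that nothing in the abstract representation is like anything.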
Another possibility is that you might be able to include some kind of introspective abstract knowledge about the abstract knowledge in the design. In other words, the silicon isomorph could include knowledge of the fact that it represents red with one and green with zero. When you asked it what red was like, it could reproduce the behavior by mapping this awareness from one and zero to words and attempted descriptions of red and green. In other words, you could program such a system to lie about 'what it was consciously like'. But of course, if you were able to eff the ineffable, and fully understood how its mind worked, you would easily know its similar behavior was nothing but a lie.
So, in conclusion, we believe it is taking things way too far to assume that, just because a silicon isomorph of a neuron behaves similarly by simply stimulating its causally downstream neurons, subjective consciousness somehow arises from any functionally equivalent isomorph - especially when the substitution does not include some additional mechanism to achieve the binding of conscious awareness. And we see that the 'hard problem' likely isn't as 'hard' as initially thought.
We believe the transmigration argument can be seen as a fallacious one.


Edit summary : Complete rewrite after discussion with Chalmers.
Submitted on :
Submitter Nick Name : Brent_Allsop
Go live Time :
Statement : The way this transmigration argument is used is a fallacy, and simply reveals ignorance of the way a unified world of consciousness works, or of how we could be consciously aware of things. The "Nature Has Ineffable Phenomenal Properties" camp describes one possible theory which, if true, would clearly invalidate this argument.
The bottom line is, this theory predicts that certain neural conscious correlates predictably and reliably always have the same phenomenal property. If you use some other matter, in some other state, though it may be considered to be behaving like the original, it is fundamentally and phenomenally not like the original. Nature is such that these phenomenal neural correlates can be bound together into a unified world of awareness where we are aware of all of them, and their phenomenal differences, together.
Evolution has simply used these facts to implement our unified conscious knowledge. This is very different from abstract computer systems, for which, by design, what the representations are like doesn't matter. So nothing other than what is phenomenally like 'red' could be used to present the same phenomenal likeness into the conscious world. And certainly, nothing for which it doesn't matter what the representations are like could enable the system to say of any such merely properly behaving correlate that it is the same as real phenomenal red.
So, this theory predicts that once you remove the correlate that represents a spot of red on the strawberry, if whatever you replace it with is not fundamentally and phenomenally the same, and able to present something that is phenomenally like 'red' to the system, transmigration will simply fail at this point, and the subject might simply say: yes, I still know that there is something representing a red strawberry there, and I can possibly behave the same, but it isn't phenomenally like the original at all.
If we are looking at a strawberry patch when this transmigration operation is performed, there is something in our brain that has green phenomenal properties representing the leaves and something with red phenomenal properties representing the strawberries. This red and green knowledge in our conscious world of awareness is what enables us to pick out the strawberries from amongst the leaves.
There are things in nature that have the phenomenal properties of red and green. There is a notion of 'effing' the ineffable, where you can configure something with a different phenomenal property (let's assume something no human has experienced and call it "gred"), and unify it into a world that is already aware of red and green. When you throw the configuration switch in this now properly augmented and unified mind, the conscious entity may reply: "Oh, THAT is what 'gred' is like".
The entire conscious system that unifies all this red and green into one phenomenal 3D world obviously uses the difference between red and green to represent and distinguish between the red strawberries and the green leaves. The fact that we know the difference between red and green, and are aware of them at the same time in a unified, effable way, is what gives us the ability to pick out the strawberries from amongst the leaves.
Typical virtual simulations can accomplish representations using abstracted numbers, say 1 to represent red and 0 to represent green, but there is no known way to unify any such distinguishable representations so that the entire system is aware of an entire scene at the same time, as we are. This well-known problem is sometimes referred to as the binding problem.
Of course, neurons have many properties that individual bits, or even registers of bits, in computers lack, which could possibly accomplish the unification of all this stuff into one world of awareness. The fact that when a neuron fires it communicates near-simultaneously to many downstream neurons, and the way neurons fire in synchronous wave patterns, likely has something to do with the way things are 'bound' to produce our effable, unified 3D world of red and green.
Any virtual system resulting from transmigration will have to accomplish this binding in some yet to be discovered way, so that the resulting 'mind' can be aware of the leaves and the strawberries, at the same time, in a unified or 'bound together' world that is indistinguishable, behaviorally at least, from the phenomenally conscious worlds we know so phenomenally well.
If there is a set of neurons responsible for one spot or voxel (3D volume pixel) of red on the surface of our knowledge of the strawberry, in our unified and effable conscious 3D world that is our awareness, let us imagine what could happen if we transmigrate this set of neurons.
When this set of neurons is simulated, the simulation must communicate into the remaining effably unified world of awareness so that the world can continue to distinguish this red voxel from the green voxels representing the leaves in the unified conscious space.
But of course, if there is some unique state of matter for which only it has this 'red' phenomenal property, as would be true in this theory, then no abstract 1 or 0, whether represented in silicon or any other abstract medium, will be able to accomplish this. Any such abstract simulation could in no way fill the hole where the phenomenally red spot once was.
It might be theoretically possible to create an abstracted world of unified awareness using something like ones and zeros, enabling a similarly unified and aware system to pick out the 1 colored strawberries from amongst the 0 colored leaves. Theoretically such a system could behave like us as it picked the strawberries from amongst the leaves.
To enable transmigration, perhaps a binding method could be found to unify the beginnings of a simulated world of ones and zeros (or whatever is used in an inverted-qualia or similar scenario) with an existing unified world of red and green. But even with this, when the 'effing' switch is thrown between the two, the ones and zeros in this effable unified world would clearly and obviously be different from the red and green.
One might be tempted to argue that you could skip this one at a time unification problem, and jump, entirely, to a complete system representing a strawberry patch with a unified world of ones and zeros (or any other different representation), avoiding the need for the system to distinguish the ones and zeros from the red and green. One could wire such a system to simply answer a question like: "What is red like for you?" with an answer like 'red'.
But of course, any such answer would be a lie. And given such 'effing' abilities, and awareness of what it is that reliably does and does not have red phenomenal properties, and how they are different from ones and zeros, we would be able to prove and effably demonstrate that such was a lie.
Here we have shown one possible theory, which if true, indicates there will be clear problems when attempting any transmigration of a unified and phenomenally conscious mind. This clearly demonstrates the fallacy of using this transmigration argument to claim abstracted simulations would have the same phenomenal properties that real unified phenomenal minds have.


Edit summary : Attempt a summary of the argument in the first paragraphs.
Submitted on :
Submitter Nick Name : Brent_Allsop
Go live Time :
Statement : The way this transmigration argument is used is a fallacy, and simply reveals ignorance of the way a unified world of consciousness works, or of how we could be consciously aware of things. The "Nature Has Ineffable Phenomenal Properties" camp describes one possible theory which, if true, would clearly invalidate this argument, as we will describe.
If we are looking at a strawberry patch when this transmigration operation is performed, there is something in our brain that has green phenomenal properties representing the leaves and something with red phenomenal properties representing the strawberries. This red and green knowledge in our conscious world of awareness is what enables us to pick out the strawberries from amongst the leaves.
There are things in nature that have the phenomenal properties of red and green. There is a notion of 'effing' the ineffable, where you can configure something with a different phenomenal property (let's assume something no human has experienced and call it "gred"), and unify it into a world that is already aware of red and green. When you throw the configuration switch in this now properly augmented and unified mind, the conscious entity may reply: "Oh, THAT is what 'gred' is like".
The entire conscious system that unifies all this red and green into one phenomenal 3D world obviously uses the difference between red and green to represent and distinguish the red strawberries from the green leaves. The fact that we know the difference between red and green, and the fact that the system is aware of both at the same time in a unified effing way, is what gives us the ability to pick out the strawberries from amongst the leaves.
Typical virtual simulations can accomplish representation using abstracted numbers, say 1 to represent red and 0 to represent green, but there is no known way to unify any such distinguishable representations so that the entire system is aware of an entire scene at the same time, as we are. This well-known problem is sometimes referred to as the binding problem.
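The abstract 1/0 representation described above can be made concrete with a minimal toy sketch (all names and the scene layout are invented for illustration, not taken from the statement). The numbers alone suffice for the discrimination behavior; nothing in the program addresses how, or whether, such distinguishable states could be bound into a unified awareness, which is exactly the gap the binding problem names.

```python
# Toy sketch: a strawberry-patch scene represented purely with abstract
# numbers, 1 for a "red" strawberry pixel and 0 for a "green" leaf pixel.
scene = [
    [0, 0, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]

def pick_strawberries(scene):
    """Return the coordinates of every pixel represented as red (1)."""
    return [(row, col)
            for row, pixels in enumerate(scene)
            for col, value in enumerate(pixels)
            if value == 1]

# The abstract system reproduces the picking behavior without any pixel
# having a red phenomenal property.
print(pick_strawberries(scene))  # [(0, 2), (1, 1), (1, 2)]
```

Each comparison here is an isolated operation on one number at a time; the sketch deliberately contains nothing playing the role of a system that is aware of the whole scene at once.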
Of course, neurons have many properties that individual bits, or even registers of bits, in computers lack, which could possibly accomplish the unification of all this stuff into one world of awareness. The fact that a firing neuron communicates near simultaneously with many downstream neurons, and the way neurons fire in synchronous wave patterns, likely has something to do with the way things are 'bound' to produce our effable, unified 3D world of red and green.
Any virtual system resulting from transmigration will have to accomplish this binding in some yet to be discovered way, so that the resulting 'mind' can be aware of the leaves and the strawberries, at the same time, in a unified or 'bound together' world that is indistinguishable, behaviorally at least, from the phenomenally conscious worlds we know so phenomenally well.
If there is a set of neurons responsible for one spot, or voxel (3D volume pixel), of red on the surface of our knowledge of the strawberry in the unified and effable conscious 3D world that is our awareness, let us imagine what could happen if we transmigrate this set of neurons.
When this set of neurons is simulated, the simulation must communicate into the remaining effably unified world of awareness so that the world can continue to distinguish this red voxel from the green voxels representing the leaves in the unified conscious space.
But of course, if there is some unique state of matter which alone has this 'red' phenomenal property, as would be true under this theory, no abstract 1 or 0, whether represented in silicon or in any other abstract medium, will be able to accomplish this. Any such abstract simulation could in no way fill the hole where the phenomenally red spot once was.
It might be theoretically possible to create an abstracted world of unified awareness using something like ones and zeros, enabling a similarly unified and aware system to pick out the 1-colored strawberries from amongst the 0-colored leaves. Theoretically, such a system could behave just as we do as it picked the strawberries from amongst the leaves.
To enable transmigration, perhaps a binding method could be found to unify the beginnings of a simulated world of ones and zeros (or whatever representation is used, as in an inverted-qualia scenario) with an existing unified world of red and green. But even then, when the 'effing' switch is thrown between the two, the ones and zeros in this effable unified world would clearly and obviously be different from the red and green.
One might be tempted to argue that you could skip this one-at-a-time unification problem and jump, entirely, to a complete system representing a strawberry patch with a unified world of ones and zeros (or any other different representation), avoiding the need for the system to distinguish the ones and zeros from the red and green. One could wire such a system to simply answer a question like "What is red like for you?" with the answer 'red'.
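The wired-in report just described can also be sketched as toy code (the class and its internals are invented for illustration). The point the sketch makes explicit is that the verbal report is fixed by wiring, so it carries no information about whether anything inside the system actually has a red phenomenal property:

```python
# Toy sketch: a system wired so the question "What is red like for you?"
# always yields the verbal report "red", regardless of what, if anything,
# inside it has a red phenomenal property.
class AbstractReporter:
    """Represents 'red' internally as the abstract number 1."""
    internal_state = 1  # an abstract stand-in, not a phenomenal quality

    def answer(self, question):
        if question == "What is red like for you?":
            return "red"  # the wired-in answer described above
        return "unknown"

print(AbstractReporter().answer("What is red like for you?"))  # prints: red
```

The report is behaviorally indistinguishable from ours, which is precisely why the statement goes on to argue that effing abilities, not verbal reports, would be needed to expose the difference.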
But of course, any such answer would be a lie. And given such 'effing' abilities, and an awareness of what reliably does and does not have red phenomenal properties, and of how those properties differ from ones and zeros, we would be able to prove and effably demonstrate that the answer was a lie.
Here we have shown one possible theory which, if true, indicates there will be clear problems in any attempted transmigration of a unified, phenomenally conscious mind. This clearly demonstrates the fallacy of using the transmigration argument to claim that abstracted simulations would have the same phenomenal properties that real unified phenomenal minds have.


Edit summary : "the ignorance" to just "ignorance"
Submitted on :
Submitter Nick Name : Brent_Allsop
Go live Time :
Statement : The way this transmigration argument is used is a fallacy, and simply reveals the ignorance of how a unified world of consciousness works and how we could be consciously aware of things. The "Nature Has Ineffable Phenomenal Properties" camp describes one possible theory which, if true, would clearly invalidate this argument, as we will describe.
If we are looking at a strawberry patch when this transmigration operation is performed, there is something in our brain that has green phenomenal properties representing the leaves and something with red phenomenal properties representing the strawberries. This red and green knowledge in our conscious world of awareness is what enables us to pick out the strawberries from amongst the leaves.
There are things in nature that have the phenomenal properties of red and green. There is a notion of 'effing' the ineffable, whereby you can configure something with a different phenomenal property (let's assume one no human has experienced and call it 'gred') and unify it into a world that is already aware of red and green. When you throw the configuration switch in this now properly augmented and unified mind, the conscious entity may reply: "Oh, THAT is what 'gred' is like."
The entire conscious system that unifies all this red and green into one phenomenal 3D world obviously uses the difference between red and green to represent and distinguish the red strawberries from the green leaves. The fact that we know the difference between red and green, and the fact that the system is aware of both at the same time in a unified effing way, is what gives us the ability to pick out the strawberries from amongst the leaves.
Typical virtual simulations can accomplish representation using abstracted numbers, say 1 to represent red and 0 to represent green, but there is no known way to unify any such distinguishable representations so that the entire system is aware of an entire scene at the same time, as we are. This well-known problem is sometimes referred to as the binding problem.
Of course, neurons have many properties that individual bits, or even registers of bits, in computers lack, which could possibly accomplish the unification of all this stuff into one world of awareness. The fact that a firing neuron communicates near simultaneously with many downstream neurons, and the way neurons fire in synchronous wave patterns, likely has something to do with the way things are 'bound' to produce our effable, unified 3D world of red and green.
Any virtual system resulting from transmigration will have to accomplish this binding in some yet to be discovered way, so that the resulting 'mind' can be aware of the leaves and the strawberries, at the same time, in a unified or 'bound together' world that is indistinguishable, behaviorally at least, from the phenomenally conscious worlds we know so phenomenally well.
If there is a set of neurons responsible for one spot, or voxel (3D volume pixel), of red on the surface of our knowledge of the strawberry in the unified and effable conscious 3D world that is our awareness, let us imagine what could happen if we transmigrate this set of neurons.
When this set of neurons is simulated, the simulation must communicate into the remaining effably unified world of awareness so that the world can continue to distinguish this red voxel from the green voxels representing the leaves in the unified conscious space.
But of course, if there is some unique state of matter which alone has this 'red' phenomenal property, as would be true under this theory, no abstract 1 or 0, whether represented in silicon or in any other abstract medium, will be able to accomplish this. Any such abstract simulation could in no way fill the hole where the phenomenally red spot once was.
It might be theoretically possible to create an abstracted world of unified awareness using something like ones and zeros, enabling a similarly unified and aware system to pick out the 1-colored strawberries from amongst the 0-colored leaves. Theoretically, such a system could behave just as we do as it picked the strawberries from amongst the leaves.
To enable transmigration, perhaps a binding method could be found to unify the beginnings of a simulated world of ones and zeros (or whatever representation is used, as in an inverted-qualia scenario) with an existing unified world of red and green. But even then, when the 'effing' switch is thrown between the two, the ones and zeros in this effable unified world would clearly and obviously be different from the red and green.
One might be tempted to argue that you could skip this one-at-a-time unification problem and jump, entirely, to a complete system representing a strawberry patch with a unified world of ones and zeros (or any other different representation), avoiding the need for the system to distinguish the ones and zeros from the red and green. One could wire such a system to simply answer a question like "What is red like for you?" with the answer 'red'.
But of course, any such answer would be a lie. And given such 'effing' abilities, and an awareness of what reliably does and does not have red phenomenal properties, and of how those properties differ from ones and zeros, we would be able to prove and effably demonstrate that the answer was a lie.
Here we have shown one possible theory which, if true, indicates there will be clear problems in any attempted transmigration of a unified, phenomenally conscious mind. This clearly demonstrates the fallacy of using the transmigration argument to claim that abstracted simulations would have the same phenomenal properties that real unified phenomenal minds have.


Edit summary : First Version
Submitted on :
Submitter Nick Name : Brent_Allsop
Go live Time :