The use of the term "organism" is not intended to rule out the possibility of creating conscious artificial machines. Consciousness might be possible in machines capable of interacting with their environment with a degree of organized complexity and differentiation comparable to that displayed by conscious animals or humans. Thus, as used here, "organism" does not necessarily imply living; rather, it implies the sort of interactive complexity, organization, and integration that is typical of (but not necessarily confined to) living things.
On this "interactivist" view, consciousness is a property of organism-environment systems, not of organisms considered in isolation. The complexity and differentiation of the contents of consciousness is a function of the organized complexity and differentiation of the organism's interaction with its environment, and that in turn depends largely on the power of the organism's brain to sustain that level of organized complexity of interaction (although other parts of the organism, especially its endowment of sense organs and motor apparatus, also play a role). Organismic (including human) consciousness therefore depends heavily upon the brain. From this perspective, however, consciousness is not in the brain, conscious states are not brain states, and disembodied brains could not, even in principle, be conscious.
It is no more paradoxical to suggest that consciousness is a property of an organism-environment system than it is to suggest (as many people do) that consciousness is a property of a brain. After all, the brain (or even each single neuron, or anything short of an elementary particle or Leibnizian monad) is itself a system. The components of the brain (or, indeed, of the body) are, in general, more tightly coupled to one another than the organism is coupled to its environment, but the difference is one of degree, not of kind.
The advantage of ascribing consciousness to the organism-environment system is that the relation of consciousness to its intentional objects becomes an internal relation (the objects are part of the conscious system) rather than the external relation it must be if consciousness is considered a property of the brain (where the presence of an object is merely correlated with, and perhaps causative of, some brain state that is thereby taken to represent it). In the external case, there is nothing intrinsic to the supposedly conscious brain state that makes it a representation or a consciousness of that object, which is why, as Chalmers says, it seems perfectly conceivable that the brain state might occur "in the dark," i.e., without giving rise to consciousness of that object at all.
This sort of view does not account for conscious experience of things or situations that are not present in the current environment (imagination, memory, representation, error, etc.) in the same way as do traditional theories, which equate conscious (and unconscious) mental representations with inner states such as brain states. This fact does not, however, furnish a knockdown refutation of the organism-environment interactivist theory of consciousness (as is sometimes claimed). It is probably true that if the traditional "inner representation" theory can account for perceptual consciousness at all, it can incorporate these more complex cases rather easily. It does not follow, however, that no other view of consciousness can account for them; in fact, quite detailed proposals for how an interactivist theory of the sort being considered here might account for these phenomena can be found in the cognitive science literature. On the other hand, it is very questionable whether the "inner representation" theory can account for perceptual consciousness at all. Indeed, understanding how it possibly could do so is, to all intents and purposes, the hard problem.