Challenges and Issues Confronting the Design of Artificial Minds
Having considered the broad consensus on how robots might be designed as phenomenal machines, and which cognitive architectures could address the still-unresolved problems AI designers face, we can certainly pursue the question of how, or to what degree, machines can be conscious. Before doing so, however, we should keep in mind that the matter is not merely one of design or architecture; there must be something more to it than abstract simulations of reality in smart machines. Stanley P. Franklin's (1997) work "Artificial Minds" points to precisely such issues confronting the design of systems of mind capable of thinking and acting in ways similar to our own. This is not simple emulation, but something more than the imitation of mankind. When we speak of a silicon-based synthetic nervous system embodying an artificial mind, it is pertinent to note that such a mind is a composite of mindless elements. This is what Marvin Minsky (1988), in his "Society of Mind", meant by building a mind from mindless elements. Indeed, if we regard human beings as natural machines, the components of our conscious awareness form a layered superstructure built upon the integration of mindless elements working in harmony. This is the functionalist approach that Franklin described as fitting the definition of an artificial living system (ALS). But, as one might agree, it would not by itself meet the epistemological objectives of machine consciousness. After all, what is the value of intelligence without the knowledge of judgment?
Nevertheless, the architecture of a machine mind should rest on an underlying conception of human cognitive systems, since the former is modeled after the latter. In this respect, AI systems can be conceived either through a design- and program-based approach or through training and evolution. Franklin's analysis calls for an integration of top-down and bottom-up approaches to AI systems development. It should be kept in mind that consciousness here means a physical phenomenon with practical effects on behavior. The top-down approach, comprising cognitive psychology and artificial intelligence, looks at the problem from the outside in, while the bottom-up approach, comprising cognitive neuroscience and the mechanisms of mind, looks at it the other way around. Psychology deals with problems of behavior as external manifestations, much as AI considers layered architectures of propositional (rule-based) systems and heuristics. Neuroscience, by contrast, takes a bottom-up view (looking from the inside) of the structures and functions of neurons and neural groups, and of the chemical and biophysical correlates of conscious awareness. The last element, which takes account of the theories and philosophy associated with the functions of the mind, comprises the "mechanism of mind", as Franklin describes it. Putting these two approaches together, one might derive a full functional model of a mind, whether artificial or natural.
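The integration of the two approaches can be made concrete with a minimal sketch. The following Python fragment is purely illustrative, and all class names, the temperature threshold, and the rule table are assumptions of this example, not part of Franklin's or any published architecture: a bottom-up layer turns raw signals into symbolic percepts, and a top-down propositional (rule-based) layer maps those percepts to actions.

```python
# Hypothetical sketch of a hybrid top-down / bottom-up agent.
# All names and rules here are illustrative assumptions.

class BottomUpLayer:
    """Maps raw sensor readings to symbolic percepts (inside-out view)."""
    def perceive(self, temperature: float) -> str:
        # Stand-in for neural / biophysical processing.
        return "hot" if temperature > 30.0 else "comfortable"

class TopDownLayer:
    """Applies propositional (rule-based) knowledge to percepts (outside-in view)."""
    def __init__(self):
        self.rules = {"hot": "seek_shade", "comfortable": "continue_task"}

    def decide(self, percept: str) -> str:
        return self.rules.get(percept, "do_nothing")

class HybridAgent:
    """Combines both layers into a single perception-action loop."""
    def __init__(self):
        self.bottom_up = BottomUpLayer()
        self.top_down = TopDownLayer()

    def act(self, temperature: float) -> str:
        return self.top_down.decide(self.bottom_up.perceive(temperature))

agent = HybridAgent()
print(agent.act(35.0))  # the "hot" percept triggers the shade-seeking rule
```

The point of the sketch is only structural: neither layer alone constitutes the agent; behavior emerges from their composition, echoing the claim that a full functional model needs both directions of analysis.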
Epistemological Issues: Inclusion of Culture
But these alone do not fully define the epistemological issues (McCarthy, 1977) surrounding such an endeavor. Ned Block considers "The Harder Problem" to be even more epistemological than the Hard Problem (Chalmers, 1995) itself. The harder problem is to grasp the real problem first: why we have phenomenal experience at all, and how consciousness can be comprehended in terms of physical properties, so that we might understand what phenomenal experiences are, which subjective properties could plausibly be emulated in machines, and then how. The second problem is that of representation, together with the frame problem. The third epistemological paradox concerns understanding human intelligence itself. It is now generally assumed that symbolic representation is necessary for general intelligence; this also bears on how information about the real world is to be processed. After all, what would be the true value of such a grand endeavor if the 'marvel mind' so conceived were left at best to make a few moves on a chessboard, or to follow human commands blindly without knowing 'what' it is doing and 'why' it should do so? This mandates the incorporation of other elemental conceptions: of evolution, of what a society means, and of what it means to be a socially responsive agent. Equally relevant is the incorporation of the intentional stance into robotics designs.
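The frame problem mentioned above can be illustrated with a small, hypothetical symbolic world model. The state facts and the action below are invented for this example only; they are not drawn from any particular AI system. The difficulty the sketch exposes is that an action axiom must state not only what changes, but also, somehow, that everything else stays the same.

```python
# Illustrative sketch of the frame problem in a propositional world model.
# The facts and the action are hypothetical examples.

state = {
    "robot_in_kitchen": True,
    "lamp_on": False,
    "door_open": True,
}

def move_to_hall(s):
    """Action axiom: moving changes only the robot's location fact."""
    new_s = dict(s)  # copying the state acts as a 'frame axiom' in miniature
    new_s["robot_in_kitchen"] = False
    # The system must also assert, explicitly or implicitly, that every
    # *other* fact (lamp_on, door_open, ...) is unchanged by the move.
    # Enumerating these non-effects for every action is the frame problem.
    return new_s

after = move_to_hall(state)
print(after["robot_in_kitchen"], after["lamp_on"], after["door_open"])
```

In a toy world the copy-and-update trick suffices, but with thousands of facts and actions the non-effects cannot be enumerated by hand, which is why representation remains an epistemological problem rather than a mere engineering one.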
The inclusion of culture, sociology and anthropology is therefore likely to resolve the epistemological conflicts arising from the ethical and cultural paradoxes that a truly intelligent robot would someday face. Here culture is of primary interest, since a robot does not evolve as we have. We have a long history, and the memoirs of past cultures and societies, built up as collective memories, show how we have evolved through time; these moments of our history matter as much as knowing how we should move ahead in the future. This is learning from the past, appreciating others' values, embracing cross-cultural exchange and building collective memories (Renfrew, 1988). It is perhaps what religion was built around in times that lacked codified moral norms, when we had only the faintest glimpse of the emerging natural sciences and technology. Whether robots would be spiritual (Geraci, 2006) or would have a religion someday is an altogether different question, but the fact that they need to know what religion is and why people follow it is important. A robot may well be secular; yet for the perfect interweaving of humans, robots and morality, the exclusion of culture could prove disastrous.
If we look forward to a futuristic hybrid society (Kurzweil, 1999), one populated by men, women and intelligent machines that make a difference, then those "i-machines" must be guided by cultural norms and social standards, as is evident from Isaac Asimov's classic "I, Robot" (1950). A conscious robot in this sense must be aware of human actions, activities and purposes, as well as cultures, customs and beliefs. Complete knowledge and cognition of all human activities and purposes will never be fully attainable, but whatever lies in our own custody should be incorporated into the futuristic design architectures intended for an autonomous artificial mind.