The new minds of AI, as we might call them, are confronted with a question as intriguing as it is fascinating: how can machines think the way we humans do? Perhaps the future of machine thinking involves first understanding a great deal about how human beings think, and then emulating those thought processes in machines, giving intelligent thought a physical form. Human thought is often abstract, and quite different from massively parallel processors crunching enormous amounts of information to reach decisions; after all, an entity must think before it can make a rational decision. Moreover, the power of introspection is what enables us to review our own thoughts. How artificial entities can be empowered first to think (something beyond mere heuristics) and then to introspect is perhaps the more demanding question. So my question, to some extent, is this: when computers make decisions automatically, are they "thinking" as well? The difference is that our brains do not work like simple heuristic machines.
A theory proposed by Graham Wallas in his 1926 book "The Art of Thought", the stage decomposition of creative thought, has recently been extended to accommodate more disparate theories of how specific processes in the human brain move through stages of incubation, imagination, and insight. One such recent theory is the Explicit-Implicit Interaction (EII) theory of Ron Sun and Sébastien Hélie, which unifies fragmentary elements of earlier theories of creativity and problem solving. On this view, creativity is the result of abstract thinking in the human mind, and simulating such creative elements in machines would require something more than heuristics, reasoning, and logic. Sun and Hélie therefore added a connectionist component to their EII theory, implemented in the CLARION cognitive architecture; in a similar spirit, programs such as AARON are able to draw 'works of art'. Is the computer then "thinking", or is it again merely following instructions, executing a program, to draw or paint? And what is "art" itself: a program with procedures, backed by abstract thought? If perception is just a means of delivering the content of reality to be perceived, then it is a physical process that can, in principle, be modeled completely. Sensation, the other element, is in turn a process of integrating and analyzing what is perceived. Experience, then, is an integration of the two, perception and sensation, constructing the imagined reality that gives rise to the fundamental elements of "qualia". Are qualia, then, not the product of a few integrated biological and neural processes? And if they really are, why could they not be replicated in machines?
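To make the explicit-implicit split concrete, here is a minimal toy sketch (my own illustration, not the actual CLARION architecture or the Sun-Hélie model): an explicit module scores candidates against stated rules, an implicit module scores them by graded similarity to past examples, and the two outputs are blended by a weighted integration. The rule table, memory contents, and weighting are all invented for the example.

```python
# Toy sketch of EII-style explicit/implicit integration.
# NOT the real CLARION model; all data here is illustrative.

def explicit_score(candidate: str, rules: dict) -> float:
    """Explicit processing: look the candidate up in stated rules."""
    return rules.get(candidate, 0.0)

def implicit_score(candidate: str, memory: list) -> float:
    """Implicit processing: graded association with remembered examples,
    crudely modeled here as character-set overlap similarity."""
    def overlap(a: str, b: str) -> float:
        return len(set(a) & set(b)) / max(len(set(a) | set(b)), 1)
    return max((overlap(candidate, m) for m in memory), default=0.0)

def integrate(candidate: str, rules: dict, memory: list,
              w_explicit: float = 0.6) -> float:
    """Integration step: blend the explicit and implicit outputs."""
    return (w_explicit * explicit_score(candidate, rules)
            + (1 - w_explicit) * implicit_score(candidate, memory))

rules = {"hammer": 1.0}           # explicit knowledge: a hammer drives nails
memory = ["hummer", "spanner"]    # implicit memory of similar past items
best = max(["hammer", "wrench"],
           key=lambda c: integrate(c, rules, memory))
print(best)  # "hammer": strong explicit rule reinforced by implicit similarity
```

The point of the sketch is only the shape of the theory: neither module alone decides; the decision emerges from combining a symbolic, rule-like channel with a sub-symbolic, associative one.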
We already have an excellent conceptual understanding of computers: the programs that drive the hardware, and the interrelationships among computer memory, processing units, and programs. What we still lack is a proper understanding of what constitutes human abstract thinking, of the science of thought, and of the contents of our own perceptual abstractions as they take shape into ideas or imagination. Modeling thought in physical form thus remains difficult unless and until we know what constitutes a human mind and how the mind enables us to think. From the neuroscience point of view, we still lack the knowledge to demystify the nature of interneuronal connections and their workings (the neural groups). We know how sensory data spreads across the cortex, and how the associative areas of the brain link multiple sensory modalities. We may hypothesize about "gatekeeping" steps in the attention system that control the flow of information across the biological network, but we can still only speculate about how all this information is integrated in the brain. Machine learning may be fast, but it still does not match the accuracy of human perception. Hence the processes of perception in machines are, I suppose, still in their infancy: without proper perceptual mechanisms in place, the sensory modalities of machines may never be as accurate as those of humans.