
Camp Statement

Go live time: 23 January 2009, 02:41 PM
Many of the mind's components can be reduced to symbolic processing under the systems approach. A few, however, require what I call a 'visualization register', for which the two closest computational analogs are a restricted virtual reality or an extensive 3D bit array. Capacity issues are critical, but so are timing issues: unless we give the prototype direct access to robotic sensors, as well as filtered access, and ensure those signals are conveyed at least as fast as in human systems (such as our nervous system), the prototype is unlikely to be conscious.

Core to the current model, and the major obstacle to implementing this system at present, is what I call the 'motivation subsystem'. In a sense, it is a computational analog to our feelings, without the 'baggage' associated with them. To avoid catastrophic failures, I purposely avoid 'designing in' emotional capacity (although I believe it is possible to quasi-computationally model and implement it). The result is a need for some 'motivation' or 'guiding force', basically to keep the system away from catastrophic failure while not leaving it rigidly focused on the goal list. Suggested approaches have come from the 'GP/GA camp', but I am still considering them. This is a critical design issue.
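As a rough illustration of the 'extensive 3D bit array' reading of the visualization register, here is a minimal sketch. The class name, dimensions, and methods are my own assumptions for illustration, not part of the camp's model; a real register would also need the capacity and timing guarantees discussed above.

```python
class VisualizationRegister:
    """Sketch of a 3D bit array: one bit per voxel, packed eight to a byte.
    All names and sizes here are illustrative assumptions."""

    def __init__(self, x, y, z):
        self.x, self.y, self.z = x, y, z
        # Allocate enough bytes to hold x*y*z bits.
        self._bits = bytearray((x * y * z + 7) // 8)

    def _index(self, i, j, k):
        # Flatten a 3D coordinate into a single bit offset.
        if not (0 <= i < self.x and 0 <= j < self.y and 0 <= k < self.z):
            raise IndexError("voxel out of range")
        return (i * self.y + j) * self.z + k

    def set(self, i, j, k):
        n = self._index(i, j, k)
        self._bits[n // 8] |= 1 << (n % 8)

    def clear(self, i, j, k):
        n = self._index(i, j, k)
        self._bits[n // 8] &= ~(1 << (n % 8))

    def get(self, i, j, k):
        n = self._index(i, j, k)
        return bool(self._bits[n // 8] >> (n % 8) & 1)
```

Packing one bit per voxel keeps the capacity cost of a large register manageable; a 1024-cube register of this kind would fit in 128 MB, which is why capacity (and the speed of filling it from sensors) is the binding constraint rather than raw storage.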

Later, much later, we can address issues such as intuition, insight, and inspiration, which I agree are almost mystical capacities in humans. In a sense, I am 'scared' by that endeavor, for if we could make machines with those capacities (for example, if there were a machine-Mozart), we would be making ourselves somewhat obsolete.

Support Tree for "quasi-functionalism" Camp
