DRAFT IN PROGRESS

Exemplar theory assumes that people categorize a novel object by comparing its similarity to the memory representations of all previously encountered exemplars from each relevant category. It has been the most prominent cognitive theory of categorization for more than 30 years and remains so today.
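
To make that comparison process concrete, here is a minimal sketch in the spirit of exemplar models such as the Generalized Context Model; the exponential similarity function, the c parameter, and the toy exemplars are illustrative assumptions, not a fit to any data.

```python
import numpy as np

# Minimal exemplar-model sketch: a novel item is categorized by summing its
# similarity to every stored exemplar of each category. The exponential
# similarity function and the c parameter are illustrative assumptions.

def similarity(x, exemplar, c=2.0):
    return np.exp(-c * np.linalg.norm(np.asarray(x) - np.asarray(exemplar)))

def categorize(x, categories):
    """Return each category's share of total summed similarity to the novel item."""
    summed = {label: sum(similarity(x, e) for e in exemplars)
              for label, exemplars in categories.items()}
    total = sum(summed.values())
    return {label: s / total for label, s in summed.items()}

categories = {
    "bird":  [(0.9, 0.1), (0.8, 0.2)],   # stored exemplars (illustrative features)
    "plane": [(0.1, 0.9), (0.2, 0.8)],
}
print(categorize((0.7, 0.3), categories))  # higher probability for "bird"
```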


Exemplar theory has had considerable success in providing good quantitative fits to a wide variety of accuracy data.


If we study Jeff Hawkins, we realize that his theory (especially his coffee-cup example) is based on placing these exemplars into a Friston-esque (free-energy-principle) optimization of predictive coding (a theory of brain function in which the brain is constantly generating and updating a mental model of the environment).
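
A minimal sketch of that predictive-coding loop is below; the vector "model", the learning rate, and the error-driven update are illustrative assumptions rather than Hawkins' or Friston's actual formulations. The system predicts its sensory input, measures the prediction error, and updates its model to suppress that error.

```python
import numpy as np

# Minimal predictive-coding loop (illustrative assumptions only): the internal
# "model" is a vector of predicted sensory values, updated toward the actual
# input in proportion to the prediction error.

def predictive_coding_step(model, sensory_input, learning_rate=0.1):
    prediction = model                       # top-down prediction of the input
    error = sensory_input - prediction       # bottom-up prediction error
    model = model + learning_rate * error    # update the model to suppress the error
    return model, error

model = np.zeros(4)                          # internal mental model (assumed shape)
for t in range(50):
    sensory_input = np.array([1.0, 0.5, -0.2, 0.0])  # a stable outside world
    model, error = predictive_coding_step(model, sensory_input)

print(np.round(model, 3))   # the model converges toward the regularities of the input
print(np.round(error, 3))   # the remaining prediction error shrinks toward zero
```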



Should we care whether the AI has a similar set of sensory problems to predict?

What if LM489's sensory system were not as complex as ours? Could that prevent it from becoming as intelligent as we are? (Most people assume that intelligence came through a slow adaptive evolutionary process, and therefore that an AI needs to be under the same sort of conditions in order to evolve intelligence.) What would be hilarious, though, is to find out that whatever allowed that evolution is no longer there, no matter how many sensory inputs are available! I know that doesn't make sense for me to say, but think about it for a moment: suppose there was a particular force within the world that specifically began the evolution of intelligence. Now contrast this idea with what we actually believe: that a set of slow evolutionary upgrades led from protozoa onward to mankind, and that as creatures got more complex bodies they also got more complex minds. This means there is a paper trail leading from where the computer is now to where we want the computer to be.

That paper trail is not something we could easily follow, because, first off, we are not even sure how the mind of a protozoan works (or the mind of a wasp, a fish, a beaver, a dog, or a human). What I am saying is that the evolutionary roadmap we believe in is certainly not something we could begin to follow or hope to recreate. So effectively, no matter how complex the senses of this AI are, they are not going to help it along some evolutionary path. In other words, solving the tractability problem or replicating the density of the senses may not be important at all. All it would do is prove to some people that the system was capable of doing something hopefully amazing. But becoming intelligent? That is something altogether different.

This viewpoint does not at all disparage the importance of predictive coding. In fact, predictive coding recognizes the most important factor of all: the ability to turn that mental model of the world into a grounded conceptualization. What a well-grounded conceptualization does is create a world of exactly the size the organism is able to mentally grok.

One new idea that has come out of this research is a predictive-coding/exemplar comparator. With it we can compare an imagined organism performing the activities of an intelligent being against the active mental state of our AI. Say someone is dancing in what might be an intelligent manner (as opposed to moving completely at random): our robot could begin to compare its own movements with the movements of the intelligent dancer. Imagine for a moment that we had a short video clip of someone dancing; the robot could compare its own movement to that video. Even if the robot did not have all the joints of the ballerina and merely had a swivel that could turn left to right, it could still copy that motion. A good question is why the robot would even bother. Our theory answers that question, and the answer comes from our evolutionary hypothesis: there is a paper trail from protozoa to human, that is, a set of pre-visualizations called behavioral instincts that move along those upgrades. Our predictive-coding comparator (found in logicmoo) is used to compare the creature's current grounded conceptualization with these behavioral pre-visualizations.

During our research we found that we need at least 6 to 10 pre-visualizations running at all times, so that the creature can key into at least one of them at any given moment. All of them have to be "running" even when only one is being used at a given time, because the creature is very likely to be doing at least three things narratively at once for any single task. For example, if the creature is collecting food outside, it must walk around searching for things yet never stop the simplistic narrative of "exemplar collecting food." So the six stories could be "exemplar trying to return home," "exemplar collecting food," "exemplar existing and being conscious of its surroundings," "exemplar trying to find a difficult object," "exemplar not being eaten by a bird," and so on. The "exemplar trying to return home" story would be awaiting permission to take over the main narrative; in fact, all of them are trying to take over the main narrative. Each type of story has its own grounded conceptualization, and all six stories can be played at once because of the separation of arenas in logicmoo. A sketch of this arrangement follows.
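
Here is a minimal sketch of how such a comparator loop could be organized; it is not logicmoo code, and the narrative names, the feature sets, and the overlap-based match score are illustrative assumptions.

```python
from dataclasses import dataclass

# Illustrative sketch: several narrative pre-visualizations run "at once";
# a comparator scores how well each one matches the creature's current
# grounded conceptualization, and the best match holds the main narrative.

@dataclass
class PreVisualization:
    name: str
    expected_features: set          # what this story expects to be observable

    def match(self, conceptualization: set) -> float:
        """Fraction of expected features present in the current conceptualization."""
        if not self.expected_features:
            return 0.0
        return len(self.expected_features & conceptualization) / len(self.expected_features)

narratives = [
    PreVisualization("exemplar trying to return home", {"path_home", "walking"}),
    PreVisualization("exemplar collecting food", {"food_nearby", "walking", "searching"}),
    PreVisualization("exemplar existing and being conscious of its surroundings", {"observing", "walking"}),
    PreVisualization("exemplar trying to find a difficult object", {"searching", "obstacle"}),
    PreVisualization("exemplar not being eaten by a bird", {"bird_shadow"}),
]

def main_narrative(conceptualization: set) -> PreVisualization:
    # All narratives are "running"; the one that best explains the moment wins.
    return max(narratives, key=lambda n: n.match(conceptualization))

current = {"walking", "searching", "food_nearby"}   # grounded conceptualization this tick
print(main_narrative(current).name)                  # -> "exemplar collecting food"
```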

Now imagine a system that is only able to process while it is using a grounded conceptualization available from the narrative arena. Such a system would be completely different from a system like CYC or from the systems used in machine learning. Both CYC and machine-learning systems can process things without a grounded conceptualization of a narrative. Logicmoo (and likely humans) only processes things it can mentally conceptualize, which means conceptualizing through the predictive-coding world and through at least one other world to which it is being applied. The reason for the six different narratives is that, at any given time, hopefully at least one will be applicable to the sub-details of the conceptualization. (Not everything within those narratives will be part of the conceptualization understood by the creature; this is fine, since not everything was designed to be applicable to the given situation.) So on this forced "only process while conceptualizing a narrative" platform, we must always have at least one narrative applicable to any given situation. Even when the narratives give the creature nothing to do, there is still a narrative to explain what the creature is observing. A sketch of this gating constraint follows.
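
A minimal sketch of that gating constraint, under similarly illustrative assumptions (toy narratives and feature overlap standing in for a real grounded conceptualization):

```python
# Illustrative sketch (not logicmoo code): processing is gated on whether any
# running narrative can conceptualize the observation at all.

RUNNING_NARRATIVES = {
    "exemplar collecting food": {"food_nearby", "walking", "searching"},
    "exemplar not being eaten by a bird": {"bird_shadow"},
    "exemplar existing and being conscious of its surroundings": {"walking", "resting"},
}

def process(observation: set):
    # Only process observations that at least one running narrative can conceptualize.
    applicable = {name: feats & observation
                  for name, feats in RUNNING_NARRATIVES.items() if feats & observation}
    if not applicable:
        return None                      # no grounded conceptualization: nothing is processed
    best = max(applicable, key=lambda name: len(applicable[name]))
    return f"observed within '{best}'"

print(process({"bird_shadow"}))          # explained by "exemplar not being eaten by a bird"
print(process({"unknown_blob"}))         # None: no narrative applies, so it is not processed
```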

This situates the creature within a context in which such an observation would be possible: the process of situating puts the creature into a story that allows it to observe its surroundings. There often is not enough information within the creature's stories to fully describe even simple situations, so several neutral, open-ended narratives are provided. During this entire process a single narrative is recorded that explains all the state transitions of the multiple stories as one story. That single story is added back to the library of narratives and will likely become one of the six stories activated at a later time. A sketch of this recording step follows.
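
Below is a minimal sketch of that recording step, again with illustrative assumptions rather than logicmoo's actual representation: the state transitions of the several running stories are merged, in time order, into one narrative that is appended to the library.

```python
from typing import List, NamedTuple

# Illustrative sketch (not logicmoo code): transitions from several concurrently
# running stories are recorded as one merged narrative and added to the library.

class Transition(NamedTuple):
    time: int
    story: str
    event: str

def record_single_narrative(transitions: List[Transition]) -> List[str]:
    """Merge transitions from all stories, in time order, into one narrative."""
    merged = sorted(transitions, key=lambda t: t.time)
    return [f"t={t.time}: [{t.story}] {t.event}" for t in merged]

library = []                              # the creature's library of narratives

observed = [
    Transition(1, "exemplar collecting food", "starts searching"),
    Transition(2, "exemplar not being eaten by a bird", "notices a shadow"),
    Transition(3, "exemplar trying to return home", "takes over the main narrative"),
]

single_story = record_single_narrative(observed)
library.append(single_story)              # may later become one of the six active stories

print("\n".join(single_story))
```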

This means there are several very generic narratives that really don't say or do all that much other than supply a homeostasis the system can follow. As it is, the universe is never so kind as to let this last very long, especially not the body with all of its demands. The mind itself has very little reason to stay silent and every reason not to, because for every given situation there will seem to be an explanation that does not involve the creature as a main participant.

Constrained by the statistical regularities of the outside world (and certain evolutionarily prepared predictions), the brain encodes top-down generative models at various temporal and spatial scales in order to predict and effectively suppress sensory inputs rising up from lower levels. 



 

Tags:
     
Copyright © 2020 LOGICMOO (Unless otherwise credited in page)