What problems are solved by LOGICMOO?

The main problems most AI/AGI systems face are “real world” scaling problems such as:

Degrees of Freedom Problem - The world contains too many ways of doing things.

Optics Problem - The world contains too many ways of seeing things.

Egg Cracking Problem - The world contains too many ways of representing things.

These problems are often thought to be solvable only by an Embodiment solution, but the Embodiment solution usually runs into the Centipede Problem: the mind becomes overloaded trying to think about 100 legs.

LOGICMOO tackles these by creating not one physical or virtual embodiment, but several mental bodies, each only having to scale to its respective story environment.

The story environments are only as detailed as the main character in the story has to attend to “in order to make the story make sense”. That is often even less detail than it sounds, because the average mind only has to be detailed enough to communicate with the rest of the mind about it (since the other minds each have specific features or details that they are looking for within one another's substories).

The actual body of a spider brings the world flooding in at a much higher level of detail than any one of these minds will understand. But the mind only understands what it already understands [so far]. It expects every experience to tie in somehow to something it already should know; when it doesn't, it must find a placeholder to accommodate it. Still, this is only some reasonable placeholder that will still fit into the story in which it is living.

Such accommodation is done in the several stories the mind is living in. That is, each part of the mind has its own type of placeholder. Another part of the mind might not even be conscious of another part's placeholder(s).

Admittedly this puts quite a bit of emphasis on what it means for a placeholder to “fit into a story”. For example, a spider has a story in which:

  1. it successfully bites ?unparalyzed-thing?, which becomes ?paralyzed-thing?
  2. ?paralyzed-thing? won't shake the web
  3. ?unparalyzed-thing? shakes the web

The spider can construct a Story#1: ?unparalyzed-thing? shakes the web, then the spider has a successful bite, then ?paralyzed-thing? is not shaking the web.

In the real world an edible bug flies into its web and makes the web shake.  The spider follows the Story#1 narrative and it happens as it expects it to happen. 

As simplified as this biting story is, it is still too complex to “encode” as a system in which a behavioral decision tree would operate in the real world. So instead of predefining a decision tree, we used a story generator to predefine the script. The running script then assimilates the world as a set of newly bound “imagens” (e.g. some-fly#2) into the spider's existing Story#1 narrative.
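To make the idea concrete, here is a minimal sketch, in Python rather than LOGICMOO's own representation, of binding a perceived item to a pre-generated story script. The names StoryStep, bind_imagen, and the role strings are illustrative assumptions, not LOGICMOO identifiers.

```python
# Minimal sketch: assimilating a newly perceived "imagen" into a pre-generated
# story script. All names here are hypothetical, for illustration only.

from dataclasses import dataclass, field

@dataclass
class StoryStep:
    description: str                                 # step template with ?role? placeholders
    roles: dict = field(default_factory=dict)        # role name -> bound imagen (or None)

# Story#1, generated ahead of time: the roles are unbound placeholders.
story_1 = [
    StoryStep("?unparalyzed-thing? shakes the web", {"unparalyzed-thing": None}),
    StoryStep("spider successfully bites ?unparalyzed-thing?", {"unparalyzed-thing": None}),
    StoryStep("?paralyzed-thing? is not shaking the web", {"paralyzed-thing": None}),
]

def bind_imagen(story, role, imagen):
    """Bind a perceived imagen into every step that mentions the given role."""
    for step in story:
        if role in step.roles:
            step.roles[role] = imagen

# At runtime a real percept arrives and is bound into the existing narrative.
bind_imagen(story_1, "unparalyzed-thing", "some-fly#2")
bind_imagen(story_1, "paralyzed-thing", "some-fly#2")    # same individual, later state

for step in story_1:
    print(step.description, step.roles)
```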

Problem with Pure Embodiment

If we decide to get dressed in the morning, that actually entails quite a few intense calculations in order to do things like straightening out the socks and pulling them over our feet.

What we know, as humans doing what we think of as logic, is that we are constantly generalizing and keeping a high-level, simplistic view of what we are doing. “Merely put on my socks” may be as high-level as we make it, but that still does not give us a free pass to skip over any of the low-level details!

It is understood that the amount of logic it takes to do something like putting on pants would be overwhelming processing-wise, so how could animatory logic still fit in as a possibility? Regardless, we still assume that logic plays a big role in all of this.
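To illustrate the gap between the high-level plan and the low-level detail it still entails, here is a small hypothetical expansion table in Python; the task names and sub-steps are invented for the example and are not part of any LOGICMOO component.

```python
# Illustrative only: a high-level step such as "put on socks" still expands into
# many low-level steps that must actually be executed. Names are hypothetical.

EXPANSIONS = {
    "get dressed":  ["put on socks", "put on pants", "put on shirt"],
    "put on socks": ["locate socks", "straighten left sock", "pull left sock over foot",
                     "straighten right sock", "pull right sock over foot"],
    "put on pants": ["locate pants", "orient pants", "insert left leg", "insert right leg",
                     "pull pants up", "fasten pants"],
}

def expand(task):
    """Recursively expand a task into the primitive steps it really entails."""
    subtasks = EXPANSIONS.get(task)
    if subtasks is None:
        return [task]                      # primitive: no further expansion
    steps = []
    for sub in subtasks:
        steps.extend(expand(sub))
    return steps

print(expand("get dressed"))   # the "simple" plan is already a long primitive sequence
```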

Procedural Reasoning System (PRS)

In artificial intelligence, a procedural reasoning system (PRS) is a framework for constructing real-time scripted solutions for completing tasks.

A user application, when defined, provides the PRS system with a set of knowledge areas. Each knowledge area is a piece of procedural knowledge that specifies how to do something, e.g., how to navigate down a corridor or how to plan a path (in contrast with robotic architectures where the programmer just provides a model of what the states of the world are and how the agent's primitive actions affect them). Such a program, together with a PRS interpreter, is used to control an agent.

 An interpreter is responsible for maintaining current beliefs about the world state, choosing which goals to attempt to achieve next, and choosing which knowledge area to apply in the current situation. How exactly these operations are performed might depend on domain-specific meta-level knowledge areas. Unlike traditional AI planning systems that generate a complete plan at the beginning, and replan if unexpected things happen, PRS interweaves planning and doing actions in the world. At any point, the system might only have a partially specified plan for the future.

PRS is based on the State, Goal, Action framework for intelligent agents. State consists of what the agent believes to be true about the current state of the world, Goals consist of the agent's goals, and Actions consist of the agent's current plans for achieving those goals. Furthermore, each of these three components is typically explicitly represented somewhere within the memory of the PRS agent at runtime, which is in contrast to purely reactive systems, such as the Subsumption Architecture.
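The following is a generic sketch, in Python, of how such an interpreter cycle can be organized around beliefs, goals, intentions, and knowledge areas. It follows the PRS/BDI pattern described above; the class names, the selection policy, and the corridor example are assumptions made for illustration, not LOGICMOO or PRS source code.

```python
# A generic PRS-style interpreter loop (beliefs / goals / intentions), sketched
# for illustration. Knowledge areas are procedures guarded by an applicability test.

class KnowledgeArea:
    def __init__(self, goal, is_applicable, body):
        self.goal = goal                    # the goal this KA can achieve
        self.is_applicable = is_applicable  # beliefs -> bool
        self.body = body                    # beliefs -> list of primitive actions

class PRSInterpreter:
    def __init__(self, knowledge_areas):
        self.beliefs = {}        # current beliefs about the world state
        self.goals = []          # goals still to achieve
        self.intentions = []     # partially specified plan (queued actions)
        self.knowledge_areas = knowledge_areas

    def perceive(self, observations):
        self.beliefs.update(observations)

    def step(self, execute):
        """One interpreter cycle: pick a goal, pick an applicable KA, act."""
        if not self.intentions and self.goals:
            goal = self.goals.pop(0)
            for ka in self.knowledge_areas:
                if ka.goal == goal and ka.is_applicable(self.beliefs):
                    self.intentions.extend(ka.body(self.beliefs))
                    break
        if self.intentions:
            action = self.intentions.pop(0)
            execute(action)      # act in the world; new percepts arrive next cycle

# Example: a knowledge area for navigating down a corridor (names are illustrative).
walk_corridor = KnowledgeArea(
    goal="at_end_of_corridor",
    is_applicable=lambda b: b.get("in_corridor", False),
    body=lambda b: ["step_forward"] * b.get("corridor_length", 3),
)

agent = PRSInterpreter([walk_corridor])
agent.perceive({"in_corridor": True, "corridor_length": 3})
agent.goals.append("at_end_of_corridor")
for _ in range(4):
    agent.step(execute=print)
```

Note how planning and acting are interleaved: the interpreter only ever holds a partially specified plan, and each cycle can revise it against the latest beliefs.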

Our PRS system is in the business of organizing these:

Actions: compound and simple actions done by agents.
  Pseudonyms: Intentions, Plans, Action Primitives
  Example: Taking a bite of food, Chewing food

Exemplars: types of Objects and Structures.
  Pseudonyms: Objects, Agents, Smells, Tastes
  Example: Joe. Food. Myself. You.

States: properties of Exemplars.
  Pseudonyms: World State
  Example: Joe has some food.

Percepts: used to detect the above.
  Pseudonyms: Observations
  Example: Joe takes a bite of food.

Goals: held by agents.
  Pseudonyms: Desires
  Example: Joe wants to be full.

Beliefs: about how the world is presently arranged.
  Pseudonyms: Imaginary World
  Example: Joe is hungry, Joe took a bite of food.

Event Frames: narratives that may contain any or all of the above.
  Pseudonyms: Frames, Events, Memories
  Example: (All of the above) + Joe is a person who was hungry and then he took a bite of food.

Explanation Narratives: rulify the “why”: turn the procedural nature and nuances of the above into constraints and sequences.
  Pseudonyms: Text, PPLLL, English, PDDL
  Example: (All of the above) + Joe is a person who was hungry, so that is why he took a bite of food.
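One hypothetical way to arrange the categories listed above as data structures, using the running “Joe” example, might look like the following Python sketch. The class names and fields are assumptions for illustration, not LOGICMOO's actual schema.

```python
# A hypothetical data model for the categories above. Not LOGICMOO's schema.

from dataclasses import dataclass
from typing import List

@dataclass
class Exemplar:          # objects, agents, smells, tastes
    name: str

@dataclass
class State:             # properties of exemplars (world state)
    fact: str

@dataclass
class Percept:           # observations used to detect states and actions
    observation: str

@dataclass
class Goal:              # goals/desires held by agents
    desire: str

@dataclass
class Belief:            # how the world is presently (or imaginarily) arranged
    proposition: str

@dataclass
class Action:            # compound and simple actions done by agents
    name: str

@dataclass
class EventFrame:        # a narrative that may contain any or all of the above
    exemplars: List[Exemplar]
    states: List[State]
    percepts: List[Percept]
    goals: List[Goal]
    beliefs: List[Belief]
    actions: List[Action]
    narrative: str = ""

frame = EventFrame(
    exemplars=[Exemplar("Joe"), Exemplar("Food")],
    states=[State("Joe has some food")],
    percepts=[Percept("Joe takes a bite of food")],
    goals=[Goal("Joe wants to be full")],
    beliefs=[Belief("Joe is hungry"), Belief("Joe took a bite of food")],
    actions=[Action("take a bite of food"), Action("chew food")],
    narrative="Joe is a person who was hungry and then he took a bite of food",
)
print(frame.narrative)
```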

LOGICMOO encoding Theory

(We are using Dual Coding’s terminology of Sequegens and Imagens here)

  • Words/speech are not Sequegens in LOGICMOO; they are Imagens
  • Imagined sequences of words/speech are not Sequegens (We see them as Imagens)
  • Sequegens may contain both Imagens/Sequegens 
  • Thinking is Procedural planning using only narrative Sequegens (constructed over "Mentalese")
  • Speech is procedural planning using only narrative Imagens (constructed over EventCalc)
  • Imagining is procedural planning using only narrative Imagens (constructed over EventCalc)
  • Consciousness is a combo of Imagining and Thinking 
    • Multiple Stories are assessed at once
    • There is a Prime consciousness that receives information from 4 MiniPrime-Consciousnesses.
    • Each MP-Consciousness picks and chooses from 6 M-Consciousnesses, each of which is living out a single story-narrative
    • Each Consciousness picks and chooses what stories they are consuming Sequegens from and where they visualize the Imagens
  • Memories are accessed as Sequegens then experienced as Imagens (separately)
  • Bootstrap Memories are Sequegens Memories (without Imagens) 
  • Sequegen Parsimony is used in order to Scale Thinking 
  • Imagens Parsimony is used in order to Scale Imagining 

https://lh5.googleusercontent.com/akpjsKaLuGQ0Inwgq1cOYzf6MJBpTsXXUl8li9CesD1VKTc0FdLtlZbsaiCqQWlnTUsitY2JF4s29KbW37jGMwyd9-gtNX7UUxStIN6pTMCX0NEVZDoXsXMiHM6pOOpVO6aGo8rn
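A structural sketch of that hierarchy, assuming nothing beyond the counts given above (one Prime consciousness, 4 MiniPrime consciousnesses, 6 M-Consciousnesses each), might be written as follows. The class names and the trivial selection policy are placeholders, not LOGICMOO code.

```python
# Structural sketch of the consciousness hierarchy described above.
# Class names and the selection rule are illustrative assumptions.

class MConsciousness:
    def __init__(self, story_name):
        self.story_name = story_name       # each M-Consciousness lives out one story

    def current_sequegen(self):
        # In a real system this would be the next narrative step of this story.
        return f"sequegen from {self.story_name}"

class MiniPrimeConsciousness:
    def __init__(self, index):
        self.members = [MConsciousness(f"story-{index}-{i}") for i in range(6)]

    def pick_and_choose(self):
        # Placeholder selection policy: attend to the first story's output.
        return self.members[0].current_sequegen()

class PrimeConsciousness:
    def __init__(self):
        self.mini_primes = [MiniPrimeConsciousness(i) for i in range(4)]

    def assess(self):
        # Multiple stories are assessed at once; here we simply gather one
        # sequegen from each MiniPrime consciousness.
        return [mp.pick_and_choose() for mp in self.mini_primes]

print(PrimeConsciousness().assess())
```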

The restriction of using limited resources

The restriction of using limited resources is part of LOGICMOO's AGI implementation (AIKR - the Assumption of Insufficient Knowledge and Resources). It not only brings the theory and implementation of the system closer together, but also brings artificial and biological systems closer together.

Firstly, it is the restriction of having limited physical resources (in terms of time and memory), and secondly, the limitation of the information (in amount and truthfulness) the system can perceive. AIKR doesn't really allow using a conventional (axiomatic) logic to build such a system, at least not at the conceptual level. Regarding the inverse optics problem (which could be generalized to any type of perception, physical measurement, or perceived information), there is no way to track the information received back to just one source by means of any logical transformation on it. (To put it more simply: there is a huge number of stimuli that could add up to the image we are seeing, and it is impossible to express the domain of those images we may have seen without an expression of what it was decided to mean.)
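A toy sketch of that many-to-one situation: several distinct world states project to the same percept, so the percept alone cannot be inverted to a single source. The “sensor” and the candidate worlds below are invented purely for the illustration.

```python
# Toy illustration of the inverse optics point: many distinct world states
# project to the same percept, so no logical transformation of the percept
# alone can recover which state produced it.

def project(world_state):
    """A deliberately lossy 'sensor': only the silhouette width is observed."""
    return world_state["width"]

candidate_worlds = [
    {"object": "small bug, near", "width": 5},
    {"object": "large bug, far",  "width": 5},
    {"object": "leaf fragment",   "width": 5},
]

percept = 5
consistent = [w for w in candidate_worlds if project(w) == percept]
print(len(consistent), "distinct world states explain the same percept")
```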

There are many other phenomena, such as the decision problem (the Entscheidungsproblem), where a first-order logic statement is not always (universally) provable from an axiomatic logic (a finite set of axioms), or the implication paradox, where using irrelevant data might be intuitively problematic although it gives us correct results.

 

Tags: LOGICMOO
     
Copyright © 2020 LOGICMOO (Unless otherwise credited in page)