What is LOGICMOO?

LOGICMOO is a project intended to create artificial general intelligence (AGI).

In the simplest description, there is a virtual game world in which the bot LM489 and the player interact. (Almost all other AGI theories do not include an inner and an outer world in their systems, and so already cannot fully model the human experience of living as a private mind/body within a larger world!) The real work of the system happens within the LM489 bot, just as the real work of being a thinking human happens inside our minds.

The LOGICMOO software also consists of several non-AGI modules (conventional AI) at various levels of completion:

  • A simple dynamic MUD (generated from common-sense rules) that lets a user roam around and manipulate objects
  • A module that does Q&A about structured data inside and outside of the MUD
  • A module that converts the physical layers of the MUD into a non-world-like set of structured data
  • NPCs that use praxis rules and behaviour trees
  • A module that reads existing Prolog code and documents it in English
  • A chatbot that will act as Dungeon Master for your D&D group

What are the most basic parts of the LOGICMOO system? 

How is LOGICMOO different from other theories?

Other theories of AGI assume that by solving common everyday problems of the type humans solve, the "hidden features" of the mind, such as consciousness, will emerge; that by combining answers to small problems, the larger problem of consciousness and human-like intelligence will be solved. Despite four billion dollars spent per year on this bet, the evidence so far points the other way!

Instead, LOGICMOO starts with a particular theory of conscious awareness.

What are the theoretical bases from which LOGICMOO draws?

Our approach to understanding uses collaborative argumentation between multiple attention systems.

What problems does logicmoo solve?

The main problem that most AI/AGI has is the "real world" scaling problem. LOGICMOO uses a theory that can actually handle a good deal of it, because the system redescribes things at all levels and only ever experiences its own descriptions, not raw real-world sensory data.

What is a simple description of one of its processes?

I am using the above Script system, which is an "Event Calculus" MUD that is especially easy for a robot, since it uses an implementation identical to the memory structures. Ideally, if everything went correctly, I am supposed to see the same structures in both places.

> I am afraid I do not know/understand what you are referring to here. Concretely, do you use graphical tools  and/or 2D modules with pointing devices to interact with your simulated robot? 

The event calculus is a logical language for representing and reasoning about events and their effects, first presented by Robert Kowalski and Marek Sergot in 1986.[1] It was extended by Murray Shanahan and Rob Miller in the 1990s.[2] Similar to other languages for reasoning about change, the event calculus represents the effects of actions on fluents.

A MUD (originally "multi-user dungeon") is a multiplayer real-time virtual world, usually text-based. MUDs combine elements of role-playing games. Players can read or view descriptions of rooms, objects, other players, non-player characters, and actions performed in the virtual world. Players typically interact with each other and the world by typing commands that resemble natural language.

The system receives textual events from MUD simulation #1, which was written in Prolog by a human (me):

  • 0: In this room: Joe and Floyd (event_1)
  • 0: Floyd is standing here observing (event_2)
  • 0: Joe is standing here observing (event_3)

  • 1: In this room: Joe and Floyd (event_4)
  • 1: Floyd is standing here observing (event_5)
  • 1: Joe walks through a door to the north leaving the room (event_6)

  • 2: In this room: Floyd (event_7)
  • 2: Floyd is standing here observing (event_8)
  • 2: Joe walks through a door to the north entering the room (event_9)

  • 3: In this room: Joe and Floyd (event_10)
  • 3: Floyd is standing here observing (event_11)
  • 3: Joe is standing here observing (event_12)

The AI system is tasked with automatically programming a simulator #2 (its imagination) that will emulate simulator #1.

To do so, it must track what events have happened in #1 and emulate them in #2. It determines which event_#s cause other event_#s to happen. For example:

  • 1: Joe walks through a door to the north leaving the room (event_6)
    • which makes Joe become missing:
  • 2: In this room: Floyd (event_7)
    • which allows:
  • 2: Joe walks through a door to the north entering the room (event_9)
    • which makes Joe become present:
  • 3: In this room: Joe and Floyd (event_10)
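The causal tracking above can be sketched as a simple co-occurrence count over the timestamped log: events at tick t are candidate causes of events at tick t+1. This Python fragment is a hypothetical sketch (the real system works over Prolog terms, and the event names here are abbreviations made up for illustration):

```python
# Hypothetical sketch: scan a timestamped event log and count which events
# are followed one tick later by which others, as candidate cause -> effect
# rules for building simulator #2.
from collections import Counter

log = [
    (1, "joe_leaves_north"),
    (2, "room_contains(floyd)"),
    (2, "joe_enters_north"),
    (3, "room_contains(joe, floyd)"),
]

def candidate_rules(log):
    pairs = Counter()
    for (t1, e1) in log:
        for (t2, e2) in log:
            if t2 == t1 + 1:  # effect shows up at the next tick
                pairs[(e1, e2)] += 1
    return pairs

for (cause, effect), n in candidate_rules(log).items():
    print(f"{cause} -> {effect}  (seen {n}x)")
```

With many runs of simulator #1, the counts separate reliable rules (Joe leaving makes him missing) from coincidences.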

After the system has run long enough in simulator #1, it should have made enough observations to create a #2 that accurately predicts what happens in #1.

We should be able to go into simulator #2 and type these two events:

  • In this room:  Joe and Floyd
  • Joe walks through a door to the north leaving the room

It should be able to predict that the next event is that Joe is no longer here:

  • In this room: Floyd
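The prediction step can be sketched as a lookup over learned rules. A hypothetical Python fragment follows; the one-entry rule table stands in for whatever simulator #2 has actually learned from its observations:

```python
# Sketch of simulator #2's prediction step: given the last two typed events,
# apply a learned rule to predict the next observation.
# The rule table is a hypothetical stand-in for learned experience.
learned_rules = {
    ("In this room: Joe and Floyd",
     "Joe walks through a door to the north leaving the room"):
        "In this room: Floyd",
}

def predict(history):
    """Return the predicted next event, or None if no rule applies."""
    return learned_rules.get(tuple(history[-2:]))

print(predict(["In this room: Joe and Floyd",
               "Joe walks through a door to the north leaving the room"]))
# -> In this room: Floyd
```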
     

Now we type:

  • In this room:  Joe and Floyd

With the experience from previous runs, it should fill in the missing narrative and reorder its memory by inserting:

  • Joe walks through a door to the north entering the room

     

So the sensory experience of 

  • In this room:  Joe and Floyd
  • Joe walks through a door to the north leaving the room
  • In this room: Floyd 
  • In this room:  Joe and Floyd

Confabulates a memory that says:  

  • In this room:  Joe and Floyd
  • Joe walks through a door to the north leaving the room
  • In this room: Floyd 
  • Joe walks through a door to the north entering the room
  • In this room:  Joe and Floyd

The actual events might have been that Joe came in from the south and the system didn't see it. But experience says that Joe usually comes from the north.
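The gap-filling step above can be sketched as inserting the most common bridging event between two observations that never occur adjacently. This is a hypothetical Python sketch; the bridge table stands in for accumulated experience:

```python
# Sketch of the confabulation step: when the observed sequence jumps from
# "In this room: Floyd" straight to "In this room: Joe and Floyd", insert
# the bridging event that experience says is most common.
# The bridge table is a hypothetical stand-in for learned statistics.
bridges = {
    ("In this room: Floyd", "In this room: Joe and Floyd"):
        "Joe walks through a door to the north entering the room",
}

def confabulate(observed):
    """Rebuild memory, inserting the most likely missing events."""
    memory = []
    for prev, nxt in zip(observed, observed[1:]):
        memory.append(prev)
        if (prev, nxt) in bridges:
            memory.append(bridges[(prev, nxt)])
    memory.append(observed[-1])
    return memory

observed = [
    "In this room: Joe and Floyd",
    "Joe walks through a door to the north leaving the room",
    "In this room: Floyd",
    "In this room: Joe and Floyd",
]
for line in confabulate(observed):
    print(line)
```

Running this prints the five-event confabulated memory shown above, with "Joe walks through a door to the north entering the room" inserted between the last two observations.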

The virtual circuits can implement this level of inference very easily by running the system in a forward and backward manner, treating its reactions as events.

We can even feed/train the system by sending events backwards from the order in which they were experienced.
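Running the learned rules backwards amounts to abduction: given an observed effect, look up which event could have caused it. A hypothetical Python fragment (the rule table and event names are illustrative):

```python
# Sketch of backward inference (abduction): the same cause -> effect rules
# used for prediction are scanned in reverse to explain an observation.
# The rule table is a hypothetical stand-in for learned rules.
rules = {"joe_leaves_north": "room_contains(floyd)"}

def abduce(effect):
    """Return events whose learned effect matches the observation."""
    return [cause for cause, eff in rules.items() if eff == effect]

print(abduce("room_contains(floyd)"))  # ['joe_leaves_north']
```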

Why do we even want AGI? 

Why are you looking for funding?

To match Douglas's contributions would require a team of 8-12 highly specialized, qualified experts from multiple fields (including cognitive psychology, discursive-logic AI, user-interface design, etc.) at normal salaries of $120-300k per year, working full time. With the purchase of equipment, office space, upgrades, and outside consulting as necessary, the yearly project budget comes to just under $4MM. This brings the 5-year total to $20MM.

How else can I support LOGICMOO?

 

→ timeline, expense, expertise (TODO)

 →What deals are you willing to accept for full funding? (TODO)

→ mention patent pending and retention of proprietary knowledge

Tags: LOGICMOO
     
Copyright © 2020 LOGICMOO (unless otherwise credited in page)