Coding theories describe how sensory stimuli and mental states are stored and processed inside the mind. LOGICMOO follows the function performed by the mind, not the form of the brain, but LM489 must still encode its experiences and learn from them.

It becomes easier to understand LOGICMOO's "Multiple" Coding Theory (MCT) by first understanding other coding theories not used by LOGICMOO.

Dual Coding Theory - 1960s Cognitive Linguists
     (Two, One, Zero, Non-LOGICMOO Coding Theory Examples)

Single Coding Theory - Used by most Neural Nets

Like many systems, a single-coding system can:

  • Perform Predictive coding  on Logogen sequences.
  • Perform Conceptual blending on Imagens


If a system can correlate connections between Imagens and Logogens, then it can:

  • Predict Imagens!
  • Blend Logogens!
Effectively, "Dual" Coding Theory takes two mostly incompatible types of encodings and provides a system that makes connections between them.

Imagens <--> Logogens, which enables Imagen sequences.
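The cross-modal prediction described above can be sketched in a few lines. This is a hypothetical illustration, not LOGICMOO code; the `DualCoder` class and all method names are invented for the example:

```python
# Hypothetical sketch of Dual Coding: a simple predictor over logogen
# sequences, plus a cross-modal association table that lets a verbal
# prediction yield an imagen ("Predict Imagens!"). Illustrative only.

from collections import defaultdict

class DualCoder:
    def __init__(self):
        self.next_logogen = defaultdict(set)   # predictive coding: word -> likely next words
        self.logogen_to_imagen = {}            # cross-modal links
        self.imagen_to_logogen = {}

    def observe_sequence(self, logogens):
        # learn which logogen tends to follow which
        for a, b in zip(logogens, logogens[1:]):
            self.next_logogen[a].add(b)

    def associate(self, logogen, imagen):
        self.logogen_to_imagen[logogen] = imagen
        self.imagen_to_logogen[imagen] = logogen

    def predict_imagens(self, logogen):
        # follow the verbal prediction, then cross over to the image side
        return {self.logogen_to_imagen[l]
                for l in self.next_logogen[logogen]
                if l in self.logogen_to_imagen}

coder = DualCoder()
coder.observe_sequence(["dog", "chases", "cat"])
coder.associate("cat", "<image-of-cat>")
print(coder.predict_imagens("chases"))   # {'<image-of-cat>'}
```

The same table can be walked in the other direction to "blend Logogens" from imagen input.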

Schank's Natural Language Understanding technique, Conceptual Dependency Theory, is similar to Dual Coding between English and animation scripts.

LOGICMOO Multiple Coding

MCT defines abstract Term datatype instances that group/connect/unify/compare via:

  • Sequentially as "lists"
  • Non-sequentially as "sets/bags"
  • Two-way reflexive equality
  • One-way transitive subsumption

One-way transitive subsumption

Provides LOGICMOO the ability to store actual analogue codes by maintaining the digital "boundary" (Sub-Imagens → Imagens), since the system can perform subsumptive unification on Imagens.
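As a rough illustration (not the actual LOGICMOO implementation), one-way transitive subsumption over Sub-Imagen → Imagen links can be modelled as reachability in a directed graph; the link names below are invented for the example:

```python
# Illustrative sketch: sub-imagen -> imagen links form a directed graph,
# and "subsumes" is the reflexive-transitive closure of those links.

def subsumes(links, general, specific):
    """True if `general` subsumes `specific` via zero or more links."""
    if general == specific:            # reflexive (two-way equality) case
        return True
    stack, seen = [specific], set()
    while stack:
        node = stack.pop()
        if node == general:
            return True
        if node in seen:
            continue
        seen.add(node)
        stack.extend(links.get(node, ()))   # follow sub-imagen -> imagen edges
    return False

# hypothetical boundary: an "edge" is subsumed by "shape", which is subsumed by "imagen"
links = {"edge": ["shape"], "shape": ["imagen"]}
print(subsumes(links, "imagen", "edge"))   # True  (transitive)
print(subsumes(links, "edge", "imagen"))   # False (one-way only)
```

The asymmetry of the two calls is what makes the subsumption one-way rather than an equivalence.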

Encoding Type Usages

  • Termified MUD 
  • Mesh/3D-Plot (Visual) 
  • English description
  • Sequences of Primitive Animation Operations
  • Sequences of Animation Names
  • Narrative of Intent




QUESTIONER: I understand the info about "Chambers". Would you describe what a TOLM Chamber is?

A Chamber is like a small-scale ant, though a real ant would still use 100 chambers.


QUESTIONER: Are these, 1 and 2, similar to Kahneman's system 1 and 2?

Almost, yes... my numbers 2 and 1 would be reversed from Kahneman's 1 and 2.

QUESTIONER: Is it like Numenta's 1000 brain hypothesis?

Close, but Numenta seems to be stuck in Hebbian weights.

QUESTIONER: I like the idea about the importance of narratives in TOLM, though. Not sure about the implementation; are you using a NN?

No, the narrative focus made NNs a bit unlikely... at best, NNs are designed toward logogen/imagen encoding.

Our implementation was designed for sequegen/imagen encoding.

QUESTIONER: These are messages in the form of sequences? A sequegen?                                                        


QUESTIONER: Can it be represented as text, so I can understand it? A short example perhaps? What are the components of such a sequence? 

Here is a bit deeper about the information encoded in sequegens: 

This goes into a PrologMUD, which is a chamber.


In the current test environment I have them at this high level.

QUESTIONER: And a sequegen is a logical expression, similar to first-order logic that is sent to a chamber ?

I'm doing it at a high level, as I don't think keeping it at a crazy subsensory level will make things go faster for me.

Correct, but the hard brittleness of logic is circumvented via circumscription logic.
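As a toy illustration of how circumscription-style reasoning softens that brittleness, a default rule can be made to apply unless the case is provably abnormal. This is not LOGICMOO's logic; the predicates (`bird`, `flies`, `ab_flight`) are invented for the example:

```python
# Minimal default reasoner in the spirit of circumscription: each default
# rule (premise -> conclusion) fires unless an abnormality predicate
# holds for the entity. Abnormality is minimized by assuming it false
# unless explicitly asserted.

def holds(conclusion, entity, facts, defaults):
    if (conclusion, entity) in facts:              # explicitly known
        return True
    for premise, concl, abnormal in defaults:
        if concl == conclusion \
                and (premise, entity) in facts \
                and (abnormal, entity) not in facts:   # not circumscribed out
            return True
    return False

facts = {("bird", "tweety"), ("bird", "opus"), ("penguin", "opus"),
         ("ab_flight", "opus")}                 # opus is abnormal w.r.t. flying
defaults = [("bird", "flies", "ab_flight")]     # birds fly unless ab_flight

print(holds("flies", "tweety", facts, defaults))  # True  (default applies)
print(holds("flies", "opus", facts, defaults))    # False (exception wins)
```

Adding a new exception never breaks the old rule; the rule just quietly stops applying to the abnormal case.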

QUESTIONER: Circumscriptions were invented as a means to solve the grounding problem 

yep.. each chamber has a miniature ungrounded view 

though in a way that is grounding.. hehe 

at least the kind of grounding that we do in thinking 

QUESTIONER: Yeah, that has been the problem with logic-based AIs 

yeah i think people threw the baby out with the bath water rather than going ahead and committing to solving it 

QUESTIONER: Let's look at the real world. It is virtually endlessly complex. 

yeah, whatever system tries to interact with it would need to create a model exactly the size it can handle,
AND a system that can envision it at that subjective size.

QUESTIONER: First of all from its mere size and all the interactions that occur 

I was concerned about the egg cracking problem... which so far the chamber system corrects 

the egg cracking problem is that any logical world we create is going to be too thin and too unrealistic,

since there are millions of details that go into a robot trying to crack an egg to make an omelet in real life 

the robot/humans still need at least one level of detail they can work with 

QUESTIONER: What happens inside a chamber once a sequegen enters? 

When a sequegen enters, it creates a world state that gets built up

until you'd say a mini-world is created

(since the goal is to test whether accommodation/assimilation works right... and we see it end up still simulating the stochastics we see in nature).
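A minimal sketch of that build-up, assuming a sequegen can be treated as a sequence of ground terms (the `Chamber` class and the example terms are illustrative, not the PrologMUD code):

```python
# Sketch of a chamber accumulating a world state: each incoming sequegen
# (here just a list of term tuples) is asserted into the chamber's
# mini-world until enough structure exists to query.

class Chamber:
    def __init__(self, name):
        self.name = name
        self.world = set()            # the built-up mini-world state

    def enter(self, sequegen):
        # each term in the sequence becomes part of the world state
        for term in sequegen:
            self.world.add(term)

    def query(self, term):
        return term in self.world

room = Chamber("kitchen-scene")
room.enter([("in", "egg", "bowl"), ("holds", "robot", "whisk")])
room.enter([("cracked", "egg")])
print(room.query(("cracked", "egg")))   # True — the mini-world now contains it
```

Each successive sequegen refines the same mini-world rather than starting a new one.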

QUESTIONER: Looks like a kind of logical structure. So is this related to a MUD? What is a MUD, then? 

Multi-User Dungeon... like a text adventure. Each chamber has a small view of a scene, but in a MUD.

Well, a few years back I wanted a MUD that could provide an infinitely complex or stupidly simple world and have both representations be equally useful, so I wrote it. Then I reused the MUD code to help the TOLM implementation.

QUESTIONER: TOLM seems to be a theory explaining the human mind, if I understand correctly, and the MUD is just an environment for the AI so that you can interact with it.

Yes... there is one outer MUD that we can interact from. I wanted to see if the TOLM MUDs would model the outer MUD at all.

I realized that if I failed at transferring the model, despite them being close to identical representations, then I must've really done a poor job, versus a person trying to model the real world this way, who will never know if they did a poor job.

QUESTIONER: Ordinary MUDs are programmed in ordinary programming languages, so Turing complete 

yeah... that was why I wrote a new kind of MUD.

This kind of MUD has unbound callable code;

that is, Prolog tolerates ungrounded future variables.

QUESTIONER: So anything that can be modelled in an ordinary MUD must be possible with TOLM 


QUESTIONER: That's a good idea. 

Surprisingly, everyone but us seems to think the secret to consciousness is the modeling of the real world... I don't think this is so.

QUESTIONER: Then, who knows how ever complex physics is going deep into its structure (atoms, subatomics, etc) 
QUESTIONER: So, in all practicality, we can say that reality is infinitely complex 


QUESTIONER: But a computer simulation cannot ever be 

yet can be moved forward and back in complexity "as needed" 

QUESTIONER: What does that mean? 

In PrologMUD a person doesn't have fingers until they try to put on a glove that requires exactly five fingers... afterwards, the MUD frees the fingers from memory.
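That "as needed" instantiation can be sketched as lazy attribute creation. All names here are illustrative; the real PrologMUD does this in Prolog:

```python
# Sketch of lazy detail: body parts don't exist until an action demands
# them, and are freed again afterwards.

class LazyPerson:
    def __init__(self):
        self.details = {}                      # nothing instantiated yet

    def require(self, part, spec):
        # instantiate the detail only when an interaction demands it
        return self.details.setdefault(part, spec)

    def release(self, part):
        self.details.pop(part, None)           # free it from memory afterwards

    def put_on_glove(self):
        fingers = self.require("fingers", ["f%d" % i for i in range(5)])
        fits = len(fingers) == 5               # the glove needs exactly five
        self.release("fingers")                # fingers vanish again
        return fits

p = LazyPerson()
print("fingers" in p.details)   # False — not modelled yet
print(p.put_on_glove())         # True  — fingers existed just long enough
print("fingers" in p.details)   # False — freed from memory
```

The model never pays for detail the current interaction doesn't touch.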

QUESTIONER: Ah, clever, that reminds me of Minecraft world generation. 

The level of detail of the MUD varies moment to moment... sort of how, when we imagine things, we fill in details only as much as we care to.

QUESTIONER: Perhaps even reality is that way, according to some theories of quantum physics (wave function collapsing and all...) It's not decided before it is needed 


hehe yeah 

In a PrologMUD chamber I expect some levels of detail never get past a single animated idea,

but those chambers get combined with other chambers, etc.

QUESTIONER: Wait wait - there's a problem I think. If the world in a chamber is underspecified where does that additional information come from when needed? 

The "Outer MUD" (for code reuse) is a PrologMUD that is mostly always instantiated... it doesn't disappear the fingers "as needed".

it acts like a conventional MUD .. but the inner chambered MUDs do what we are talking about 

QUESTIONER: But the idea of fingers is instantiated in the original chamber? 

So for interactions we have a non-chambered PrologMUD (chamber-0!).

And then 1st/2nd chambers use instances of the MUD that fingers disappear in. But yeah... currently I leave the fingers in the original MUD,

but at least I make them appear whenever first looked at; then they don't disappear,

like the Minecraft generation thing.

QUESTIONER: Yeah but where does the definition of the fingers come from in the first place? 

but in Chambered PrologMUDs they disappear the moment the entity stops thinking about them 

They start out in a special text file like... 

but there are definitely far fewer helpers than the library size... instead of the numbers growing exponentially, they shrink exponentially.

QUESTIONER: And they're maybe not active unless needed? 

active or not they'd fall on deaf ears 

Here is the kicker about this theory... if you look at the language size, it is not very big.

QUESTIONER: So, if I understand correctly. Visual imagination is using some, but not all, chambers. But seeing the world is using all (in congruence)? 

(Well, heck, I am really not confident I understand the size, and your summary is more important.)

Right, the vision system mixes down into a handful of chambers rather than an army of 1000s.

the V5 language is a product only of the imagination which has an initial set of plenty of things it can see 


the visual imagination is pretty important 

but the same process can happen with any sensory system and has similar X1-X5s 

QUESTIONER: Makes sense 

oh and X5 and V5 speak the same languages 

V1 and X1 .. i dunno 

QUESTIONER: In X5 and V5 "ambulance" is the same   (Like  for hearing and vision)

Yes, you get it. Even if this theory sounds convoluted, it at least bootstraps with what we know we have and doesn't pretend things will magically have emerged.

QUESTIONER: Nah I think you have a quite developed theory..  I prefer theories like this without magic 

Secretly I think it is the theory everyone has a sense of having thought up... it just contains too many anti-anthropocentric values.

QUESTIONER: People in general don't like reduction 

Maybe gotta make it inflate human egos to let us create this for humanity?

QUESTIONER: There "has to be qualia" and such 

in this theory the consciousness was the easiest part 

the harder part is things like ensuring conditioning still seems to appear in the ways that we test for 

but I think so far it still works, in that it appears to act statistically on the outside.

QUESTIONER: You can probably connect ordinary classifying neural networks to X1, and use that as ground truth 

one use for NNs i found was just the order in which libraries are accessed 

another is i'd love to let a NN spy on sequegens and see if i can replace a chamber or two 

The underlying problem with NNs is that what we've been talking about would need them to somehow create this bass-ackward system of processing... that is more likely to never score very well.

QUESTIONER: Yeah, I'm not a believer in NNs myself 
QUESTIONER: It's become the default go-to toolbox for quick and easy function approximations 

As simple as a pocket calculator is... an NN can't ever replicate such processing.

QUESTIONER: I wouldn't bother. Would be cool to see MUDs use some of this though 

just PrologMUD is pretty cool without the secondary work towards AI 

i haven't made it user friendly but i'd like to dedicate a few engineers to doing that 

It's so hard to find people that are good at Prolog... to use it so hardcore.




This wiki is licensed under a Creative Commons 2.0 license