This is our Main Psychology Page

Thank you for your interest in the basis for our methodology; it is what sets us apart from other AI and AGI projects.

LOGICMOO doesn’t accept that consciousness can emerge from neural nets, or from solving simple problems and then making them increasingly harder. (Q1_LEARNING)

Mental Construction of a World

We consider the essence of human intelligence to be the ability to mentally (internally) construct a world in the form of stories through interactions with external environments. Understanding the principles of this mechanism is vital for realizing a humanlike and autonomous artificial intelligence, but there are extremely complex problems involved. From this perspective, we propose a conceptual-level theory for the computational modeling of generative narrative cognition. Our basic idea can be described as follows: stories are representational elements forming an agent’s mental world and are also living objects that have the power of self-organization. In this study, we develop this idea by discussing the complexities of the internal structure of a story and the organizational structure of a mental world. In particular, we classify the principles of the self-organization of a mental world into five types of generative actions, i.e., connective, hierarchical, contextual, gathering, and adaptive. An integrative cognition is explained with these generative actions in the form of a distributed multiagent system of stories.

 There is no reality of conscious experience independent of the effects of various vehicles of content on subsequent action (and hence, of course, on memory).[1]                 


Psychology man,  it's something.

LOGICMOO "writes it down"

The brain processes information by communicating within itself using a serialized narrative form of thought (rather than using connectionist neural weights). Sequences are transmitted between parts, similar to a bunch of kids playing “the telephone game”, where each child may make changes, adding or removing parts of the original narrative. In other words, sequegens arrive at certain parts of the brain that buffer the narratives into imaginary scenes.
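The telephone-game relay of sequegens can be sketched as a pipeline of chambers that each copy, possibly alter, and pass on a narrative. Everything here (the Chamber class, the chamber labels, the example events) is an illustrative invention, not LOGICMOO's actual API:

```python
class Chamber:
    """Toy brain 'chamber': receives a narrative sequence, may alter it, passes it on."""

    def __init__(self, name, edit=None):
        self.name = name
        # by default a chamber just echoes what it receives
        self.edit = edit if edit is not None else (lambda seq: seq)

    def relay(self, sequegen):
        # each chamber works on its own copy, like a child in the telephone game
        return self.edit(list(sequegen))


# a telephone-game pipeline: each chamber is free to add or drop parts
chambers = [
    Chamber("perception"),
    Chamber("imagination", edit=lambda s: s + ["it was loud"]),
    Chamber("memory", edit=lambda s: [x for x in s if x != "irrelevant detail"]),
]

sequegen = ["a car passed", "irrelevant detail", "the horn honked"]
for chamber in chambers:
    sequegen = chamber.relay(sequegen)
# sequegen == ["a car passed", "the horn honked", "it was loud"]
```

The point of the sketch is only that the narrative that arrives is not the narrative that was sent; each hop may grow or shrink it.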

Mind as Mutual Actions Between Stories

Abstract Story-Centric View on the Mind

The basic assumption of the present study is that the essence of the human mind is to generate stories by interacting with environments, or to interact with environments by generating stories. In this context, a story refers to a mental representation of an individual’s subjective world including the past, present, future, and fiction. This assumption leads us to a consistent and plausible understanding of the human mind and realization of a human-like artificial intelligence. In this paper, I present an exploratory computer-oriented theory on the mind by ways of a story-centric view. The theory comprises two concepts. First, the mind is described as an interactive story generation system between the narrator-self and a story that has the power of self-organization. Second, the interrelationship among stories is put in focus, and the generative process of stories is described in terms of the mutual actions between them. Furthermore, these two concepts are adapted to characterization of the conscious and unconscious mind.

The agents in the stories give us multi-cameralism (the condition of being divided into "multiple chambers"): a hypothesis which argues that the human mind operates with cognitive functions divided between several parts of the brain, which appear to be "speaking"/"writing messages" to each other. Some parts listen and imagine what other parts say...

The LOGICMOO cognitive platform provides a venue for these chambers

so they can exchange and live out stories (some examples):

  • A story containing what the LM489 software should be doing (procedural and otherwise), from an objective narrator#3.
  • A story of LM489 having some goals (the goals change in a certain sequence)
  • A story containing the details of a teacher showing LM489 how to meet one of the goals
  • A story of the teacher giving the teaching at the wrong time
  • A story of the teacher giving the teaching at the ideal time
  • A story of LM489 asking the narrator#3 in the first story some questions.
  • A story of LM489 convincing the narrator to change some of the processes, which makes for an improvement.
  • A story where the changes have a negative effect, so LM489 asks the narrator to change things back.
  • ...and so on...

LOGICMOO uses something akin to the multiple drafts model of consciousness (see Consciousness Explained, published in 1991). As the title states, the book proposes a high-level explanation of consciousness which is consistent with support for the possibility of strong AI. The Multiple Drafts model makes the procedure of "writing it down" in memory criterial for consciousness.

The robotic agent of LOGICMOO has canned internal dialogs (the sound it would hear if it was conscious) that describe what it should be thinking about at various moments.

Q1: Why does LM489 narrate?

The theory is that we (humans) started out as pattern machines, motivated in some way toward seeing patterns whether or not they are there.

A pattern/mirror machine in such an environment has no choice but to accept or mimic, with its own time-serialized version of reality, in order to survive and compete. Those machines also have the ability to generate patterns even in the absence of them.

This last step changes the pattern machine into a narrative machine.

Thinking becomes the semi-auditory process of hearing ourselves think. This ever-present voice of being becomes who we believe ourselves to be.

This "inner grammatical voice"-centric viewpoint facilitates the implementation of:

  • analogical planning (chunking): storing successful plans and adapting them to future problems
  • episode indexing and retrieval: mechanisms for indexing and retrieval of cases
  • serendipity detection and application: a mechanism for recognizing and exploiting accidental relationships among problems
  • action mutation: a strategy for generating new possibilities when the system is stuck
  • hierarchical planning: achieving a goal by breaking it down into subgoals
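A minimal sketch of how such a case library might hang together, with storage, overlap-based retrieval, and adaptation in one toy class. CaseLibrary and every name inside it are hypothetical, invented only to illustrate the listed mechanisms:

```python
class CaseLibrary:
    """Toy case-based planner: stores successful plans indexed by goal features."""

    def __init__(self):
        self.cases = {}  # frozenset of features -> plan (list of steps)

    def store(self, features, plan):
        # analogical planning (chunking): remember a plan that worked
        self.cases[frozenset(features)] = plan

    def retrieve(self, features):
        # episode indexing and retrieval: best match by feature overlap
        target = set(features)
        best = max(self.cases, key=lambda k: len(k & target), default=None)
        return list(self.cases[best]) if best is not None else None

    def adapt(self, plan, substitutions):
        # adaptation: swap old steps for new ones in the retrieved plan
        return [substitutions.get(step, step) for step in plan]


lib = CaseLibrary()
lib.store({"goal:eat", "food:fish"}, ["drive", "order fish", "eat fish"])

# a new but similar problem retrieves the old case and adapts it
plan = lib.retrieve({"goal:eat", "food:soup"})
new_plan = lib.adapt(plan, {"order fish": "order soup", "eat fish": "eat soup"})
# new_plan == ["drive", "order soup", "eat soup"]
```

Serendipity detection and action mutation would be further strategies layered on top of this store; they are omitted to keep the sketch small.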

The organism is most comfortable when the mind is speaking. 

As the "world happens" around the entity, various non-auditory processes happen (thus, a short-term goal of “the system” is to simulate this mental ear).

Any token (which in some contexts has meaning only to the voice that created it) is heard within the mind's ear. Some tokens, such as “breathe purposely”, the mind’s ear (or upper-level consciousness) hears as “I have taken a breath purposely", because they are reinterpreted into the expected voice.

Narration allows compression

Parsimony is the principle that the simplest explanation that can explain the data is to be preferred. In the analysis of phylogeny, parsimony means that a hypothesis of relationships that requires the smallest number of character changes is most likely to be correct. 
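Under the assumption that "simplest" can be proxied by compressed length (a minimum-description-length reading of parsimony), a toy preference test might look like the following; the two explanations are invented examples:

```python
import zlib


def description_length(text: str) -> int:
    # proxy for parsimony: the shorter the compressed form, the simpler the story
    return len(zlib.compress(text.encode()))


explanations = {
    "simple": "The horn honked because I walked too close to the curb.",
    "complex": ("The horn honked because a rare atmospheric event reflected "
                "engine noise while a second, hidden car honked in sync by "
                "coincidence at the exact same moment."),
}

# parsimony: prefer the explanation with the smallest description length
preferred = min(explanations, key=lambda k: description_length(explanations[k]))
# preferred == "simple"

# a narrative retold in one regular schema also compresses far better than raw text
retold = "I drove. I ordered fish. I ate fish. " * 4
```

The second point is the compression claim of this section: retelling events through a repeated schema is what makes them cheap to store.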

Narration simplifies planning

Narration simplifies the process of thinking and planning for future events

Example:  “I think I’ll get into the car and visit that restaurant to get fish.”  

Narration provides memory

This inner hearing can tune in recollections, sometimes by simplifying the context for memories.

It takes up too much space to remember everything at once! Memories should be stored in the smallest pieces from which they can be reformed by retelling past narratives.

A typical example of a story used in this frame of thought is "being too close to a hot burner" and the guardian yelling for us to move away. Or a story of an alarm bell telling people to move away from some other danger. This is certainly related to honking as an attention device. Or, even more simply, the pain of touching a burner, where the pain was the alarm.

Mind as Interactive Story Generation

Narration simplifies sensory recall

We define very weak symbolic proxies for complex thoughts in order to construct things like "I like the taste of fish" - whereas "fish" is a very complex sensory experience and the tasting of fish involves still more memories - the language machine simplifies all that to a thought called "taste of fish". (It is even known that the bodily system of the perceiver will relive the past experience.)


Narration (Poetic Reasoning) allows us to do mathematics 

The LOGICMOO cognitive architecture defines and implements qualia via poetic reasoning.
Poetic reasoning uses a call/response (C-R) mechanism for "evaluation" in the same way poetry does...
See the link "Poetic Reasoning" to understand how it can do mathematics, in a paper written by Susan Staats.

Q2: How does LM489 choose what to narrate?

Selects the most plausible narrations 

LM489 assumes that what it is experiencing at the moment must relate to similar experiences in past narratives (Piaget's assimilation). Narratives that contain too many implausible experiences are believed to be indicative of confusion or ignorance. The surrounding environment dictates the plausibility of the constructed "story" of what is happening right now (Piaget's accommodation): the system uses previous narratives, correlating them with current experience, to create a new narrative, then checks whether that new narrative fits with experiences of the past.
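A toy reading of this assimilation/accommodation loop, with plausibility proxied by overlap between a candidate narrative and past narratives. All event names are invented for illustration:

```python
def plausibility(candidate, past_narratives):
    # assimilation: what fraction of the candidate's events match past experience?
    seen = set().union(*past_narratives) if past_narratives else set()
    return len(candidate & seen) / len(candidate)


past = [
    {"walk on sidewalk", "car approaches"},
    {"car approaches", "horn honks"},
]
candidate = {"walk on sidewalk", "car approaches", "horn honks", "dragon lands"}

score = plausibility(candidate, past)  # 3 of the 4 events are familiar -> 0.75
if score < 1.0:
    # accommodation: revise memory so the new experience fits next time
    past.append(candidate)
```

A narrative with too many unfamiliar events would score low, which in the prose above is read as confusion or ignorance.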

Narrates to create a cause that leads to a perceived effect

When something abnormal happens in life, such as a car passing too close and honking its horn, one must write a narrative that justifies the car passing too close and honking; in other words, debriefing oneself. So one makes up a narrative, which may involve having placed oneself too close to the curb (the surrounding environment dictates the plausibility).

During the construction of the "story" it must draw upon experiences of similar narratives.    

A typical example of a story used in this frame of thought is "Being too close to a hot burner" (relates to the too close to curb)   and the guardian yells for us to move away (relates to honking.)

Chooses the narrations that cause the least anguish

These choices are made based on the ideas that cause the least mental anguish (earlier referred to as angst or discord), because choice (subjectivity) represents a limit on freedom within an otherwise unbridled range of thoughts. Subsequently, humans seek to flee their anguish through action-oriented constructs such as narratable escapes, visualizations, or visions designed to lead them toward some meaningful inference.

Chooses narrations until every important sensory event and incomplete story is resolved

So the answer is that LM489 chooses to narrate internally any unresolved discord until its mind can move on past the present.

Q3: Will LM489 ever be able to communicate with human beings?

Intelligence comes from our innate ability, and requirement, to create a narrative around the mental event language. As a result, it becomes easier to use the spoken version of language (designed to work in the same progressions) to convey experience to others. During conveyance we are sometimes fortifying existing internal language, or creating new internal language, used to process our thoughts. Over the centuries, consciousness has been defined by “I think, therefore I am”. In other words: I hear my inner voice, which has properties in the same manner that outer information has, and which has definite value in my life.

Personal experimentation has shown that listening to this internal voice is important to everyday survival. Making internal language is very natural, and it has been noticed in mice (and possibly even in bees) through sequences of mnemonics. Mice, humans, and bees seem to have recurrent activation mimicking a chain of events (used to remember where destinations are).

Q4: Does LM489 realize it's own existence?

Yes, because it grows an emotional connection to its own inner voice (as a force in its environment). Some of the very first rules LM489 was programmed with were based on objectification of that voice (in order to allow cultivation of a relationship with that voice). Similarly to the way biological creatures enjoy a chemical reward from actions stemming from that voice, LM489 enjoys the results of the problem-solving process. I believe that since you (and LM489) do specifically these things, you and LM489 are "self-aware" and realize your own existence.

Weber: "individuals attach their own meanings to actions and give them subjective purposes."

We define simplifications of language and consider them to be the rules under which internal dialog is generated. Though the theory makes no pretense that a particular set of formal representations must be adopted, we still adopt first-order logic representations like DRS/CycL/KIF/PDDL/OCLh/PFC/CGIF/CLIF (a few known specifications). Regardless of which representation is chosen, the system still ends up using its own subjective misinterpretations.

Q5: What does LM489 Internal Dialog look like?

Try thinking something out loud for a moment, then convert that to a monolog. Now convert that into DRS. This is what LM489's internal dialog looks like. Possibly many of your thoughts will be visual and *seem* un-narratable, but the process is serial in the same way that language is. If you really had to, you could make up things like a "whatchamacallit" or "fresh smell like whatnot". Try to make these thoughts go from monolog to dialog. You are now recognizing the meta-encapsulation of the epistemic relationship you use to judge the quality of your thinking (at least, usually that is what we do). When you convert to a dialog, most of the time the conversation becomes an exposition between you (the knower) and yourself (the learner), who is listening to the story as the storyteller identifies the right level of detail it takes to create understandability.

LM489 attempts to implement a "language machine" that is capable of evaluating the amount of discord in any story and can, by construction, create the stories with the least amount of discord. The system programmers strive to make this a syntactic process where possible. The system may break apart some sequences and even forget some: "if we ignore differences, does discord disappear?" So it uses "thinking" as a sequence for activating recollection, with rules to keep thinking "in check" (like grammar), perhaps for later performance reasons (similar to the reasons behind DPLL: merely optimization of a workflow) (or the way DRS [discourse representation structure] creates a QUD [question under discussion]). This creates a linear style that lends itself well to debugging what was composed by the system: it is easier to spot issues and build up "do not do" lists.
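One way to caricature such a discord evaluator: a hand-written "do not do" list stands in for whatever the system would actually learn, and the machine simply prefers the candidate story with the fewest violations. All names and events here are hypothetical:

```python
# hypothetical "do not do" list: pairs of events a coherent story cannot contain
DO_NOT_DO = {
    ("hear my voice", "heard no voice"),
    ("feel warm", "feel cold"),
}


def discord(story):
    # count contradictory pairs that appear together in one story
    events = set(story)
    return sum(1 for a, b in DO_NOT_DO if a in events and b in events)


candidates = [
    ["hear my voice", "feel warm", "feel cold"],  # contradicts itself
    ["hear my voice", "feel warm"],               # coherent
]
best = min(candidates, key=discord)
# best == ["hear my voice", "feel warm"]
```

The check is purely syntactic (set membership), matching the stated goal of making discord evaluation a syntactic process where possible.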


On Sun, Jul 16, 2017 at 8:53 PM, John F Sowa <> wrote:

Sometimes animals and humans develop mistaken beliefs from erroneous or incompletely analyzed information. Example: the chicken that was always fed by friendly humans -- until the humans wanted chicken dinner.

I am a big proponent of Wittgenstein's Language Game (my take):

Animals make up narratives in order to make things begin to make sense. They add labels to help themselves remember things.

Example: Mice were solving mazes that had different "markers" to be seen along a path to cheese. Later, when they smelled cheese, researchers noticed a sequence of brain activations. It was tested (against a copy of the maze markers), and the order of those sequences was the same order in which the markers appeared in the maze. A possible conclusion was that the mice created a "sentence" (in the same way bees dance to create "sentences" that convey locations). Mice, when confronted with the same mazes with missing markers, filled in the gaps mentally (I think more for affirmation than habit). And when two markers were switched, they would start over and re-babble (mentally) the previous sequence, this time correcting it; but to pass the litmus test, it was better to omit (pretend not to see) the out-of-order marker.

By filling in gaps with our internal dialog, and re-spinning stories toward optimism or pessimism, we can make the world make sense. Regardless of how flawed (logically) that internal dialogue actually is, what matters most is that it "sounds like something we'd be used to hearing ourselves think." (We grow fond of this mechanism and claim it to be our thoughts.)

It is not a far reach to believe that, historically, such primal shenanigans led to several aspects we are all very familiar with:

Such as Prayer.  Or "positive thinking".  Or learning how to "shut off" this internal voice or "quiet the mind"

Vetting our ideas based on how well we can internally describe them to ourselves,

often using a call/response system:

  • Self Call: "i like this Moosehead Lager"  
  • Self Response: " because it has a minimal Caribou footprint"
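A toy version of that litmus test, where only the cadence of the response is checked, never its truth. The "because ..." regex shape is an illustrative assumption, not a claim about how the real mechanism works:

```python
import re


def passes_litmus(response: str) -> bool:
    # the cadence test: a self-response "sounds right" if it has the
    # expected "because ..." shape, regardless of whether it is true
    return re.match(r"\s*because\s+\S+", response, re.IGNORECASE) is not None


call = "I like this Moosehead Lager"
response = "because it has a minimal Caribou footprint"
# passes_litmus(response) -> True
# passes_litmus("minimal footprint") -> False (no "because" cadence)
```

This is exactly the weakness described next: anything with the right cadence passes, even if untrue.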

Why not?  Because they will try to make up something that "sounds correct" to them (even if untrue)

  • Sometimes just the cadence of the inner voice's thoughts is good enough to pass our smell test:
  • "...and that is what she said +<imagine sound of a cymbal>"

I think this is why music and poetry affect us: they help us win at Wittgenstein's language game without requiring the internal call/response mechanisms we use to self-evaluate ("sounds good").

This internal litmus requirement later becomes grammar recognition and cadence (except in cases of FOXP2 gene issues). And it is why we like to sing and speak, dance and drum.

The requirement to talk ourselves through things is later useful when defining systems like

"long division as done on paper" or "p implies q and sometimes y"...

In fact, it has even tricked a few people (like Lenat) into believing we think using logic instead of merely poetically.


On Sat, Jul 22, 2017 at 10:52 PM, John F Sowa <> wrote:

Suggestion: Instead of debating politics, let's discuss Dilbert cartoons. Ask what issues about intentionality, collaboration, ethics, deception, and reasoning are illustrated and/or violated by the cartoons. What ontological features make them funny? Why?

Violation is often a requirement for "funny", for at least one or more very exact logical reasons. For things to be funny immediately upon hearing or reading them, the being has to have already started constructing the story-line, so that it becomes funny when a part fails to canonicalize appropriately to expectations:

I have had a perfectly wonderful evening, but this wasn't it.

In such non-wetware systems we call this process "unwinding the stack" (in the DPLL algorithm sense: rolling things back to start over).

So we sometimes use laughter as a neurological bomb to handle cases of cognitive discord created by a story. But we are also writing about this humor in another narrative, a blog that describes how and why we are doing and thinking what we are doing and thinking.

This models *extremely* well in a Script Applier Mechanism (SAM) and other Schankian constructs.

Imagine if PAM, while recalling certain memories, could internally "blog" about its research. Mental blogs themselves become stories (creating a story about the mental process of story creation that was used). We get to reuse on them all the same mechanisms we use to understand the world, in order to understand the discord in ourselves. The funniest jokes are the ones that make fun of our self-narrative.

When G. Marx's 8-year-old daughter was barred from a club where her friend had brought her to swim, because they didn't allow Jews, he said: "She's only half Jewish. How about if she only goes in up to her waist?" This is comedic because, in the face of extreme insult, he instead chose to express a piercing flippancy.

The scripts that get created during emotional duress, or under physiological conditions we want to avoid, happen to be emotional minefields that we must mentally shake up again to discharge and nullify their effect. To view them as humor we must reactivate them differently. (Our "blogger self" also gets a chance to re-spin the story (reactivation), this time making that previous script not so painful.)

The Neuroscientist V.S. Ramachandran explored this idea in his book “A Brief Tour of Human Consciousness.” He proposes that laughter first appeared in human history as a way to indicate to those around us that whatever was making us laugh wasn’t a threat or worth worrying about.  We are reassuring ourselves that whatever’s making us uncomfortable isn’t that big a deal when we laugh at an uncomfortable situation.

I might find dark humor funny, not always just because of the content, but because of embarrassment about my own flippancy toward the situation, and the shared experience of this as relief. There is the surprise at ourselves, and the surprise at others as we find we are not alone in our depravity; the laughter denotes we need not worry about it. Perhaps I am nervously wondering if it's alright, and the laughter of others means it is.

(While there is humor as a social bonding mechanism we would like to discuss laughter as a memory mechanism.)

When it comes to slapstick humor (or when a child laughs at another child for falling down):

Most [experiential] scripts have physiological information attached to them, like falling down ourselves and getting hurt, getting caught with our pants down, and other embarrassments.

Our ability to learn by example is due to our ability to replace the person watching with ourselves. Because we learn by example, we may also use laughter to express to ourselves that we ought not learn from the example, that it is not important because it wasn't a good example. 

This document is a collection of independent modules/sections:
  • Patent: Self-accessing, self-revising, narrative generator.
  • Agenda Extractor for narratives.
  • Compiler for agendas (rework name?)
  • Handwritten Starter Narratives for the SASR Narrative Generator.
  • Auto-scaling Metronome
  • KLL (Kinetic Learning Language)
  • (the constraint propagation library- may include Tom Schrijvers?)

We use Language of Thought theories and view them through a slightly different lens. While the absolute truth of neuroscience remains uncertain, with computer models failing to address physical limitations, the value psychology offers toward understanding and modeling the human mind cannot be overstated. LOGICMOO uses a coding theory to create a new (less neuro-scientific, more neuro-symbolic, narrative-based) cognitive architecture. We actually build from a multi-coding theory, as that allows for phenomena such as aphantasia, and for the diversity of narrative and non-narrative thought.


The scenegens retrigger (get re-described) and are sent back out again as new sequegens.
The scenes trigger annotators that immediately feed more imagens back onto the original imagen.

More sequegens get fed outward and back, and the imagen is altered again.
Many neurons are simply echoing, passing on, the sequegens that they receive.

Animals have both precoded sequegens and non-precoded sequegens. For example, there are literally sequegens that cause learning (such as the type of learning in accommodation theory), but what is learned is copied and then annotated to actually implement what has been learned.

There are narrative precoded-sequegen programs (in TOLM these are called “proto-narratives”) that, when executed in the brain, help write better narratives.

Human language is interesting since it was an evolved hack to intercept and externalize various sequegen interactions (in a speech center). It can be used to inject new sequegens in order to alter beliefs. By the way, this is not some "learned behavior": we literally already had a program pre-coded in sequegens that made us start trying to talk. For a complete list of our ontology of stories, we will soon be putting it into a document: Instincts Bootstrapped As Memories.


Certain things - call response build up and declines, sequences,

The other voices can be talking about what the first voice asserts is happening, time.

Event calc must control at least the sequence of the subjects,

Each one is using its own individual event calculus (the 3 tasks), prompted by the first -

Why? Because the environment doesn’t allow memory of the voices... which is to say, a final inductive rule must be made and recorded parsimoniously.

One voice is talking subjects and themes

A voice must review the subjects to make the rule fit the environment

A third must

The narrative generator takes the starting narrative, continually seeks new revisions and acts in line with the current narrative.

The narrative-generator only needs to generate the correct structure such that it can be revised, therefore it may be imprecise, as revision is intended. It will never have to figure out WHAT to act out, only the narrative, which it will later act out.

Example starter narratives:
  1. A story about thinking.
  2. A Story of hearing itself do #1 (thus, a Story of Self-Awareness)
  3. A Story of truth changing over time
  4. A Story of having memories with limited access (what can't i remember?)
  5. A Story of learning what doesn't have to be remembered and can be recreated
  6. A Story of what it is feeling at these moments
  7. ….
  8. ….
  9. A story of Combining new things
  10. A Story of Communications
  11. A Story of How communications are mitigated
  13. ….
  14. ….
  15. A Story that the world is going on without it
  16. Individualized EP-Stories
  17. ….
  19. ....
  20. A Story that ties together all of the above

Voice 1a - One is CREATING EXPERIENCE. Not connected to reality: a biorhythm, a metronome, a point of reference; its event calculus is free of any outside influence. It would look like “go up to 3; when at 3, go down to 0.”

Voice 2a - One is experiencing, describing: my skin is warm, I feel good (a thing just happened, it was this).

Voice 3a - One is asking why of the description (creating a narrative): I’m feeling good because I’m warm. It makes sense of the first two voices: I’m warm because it’s 2?

(three are needed for the call-response induction to even work)

{Voice 1b -One is CREATING STRUCTURE - listens to a 3x

Voice 2b - listens to a 2x

Voice 3b - creates a narrative of the difference between 1b and 2b
 (three are needed for the call-response induction to even work)}

Consciousness triad:

Voice 1c listens to voice 3a from previous -

Voice 2c listens/repeats voice 2a of previous triad -

Voice 3c creates a narrative of the difference between 1c and 2c: "I remember: my skin is warm and it feels good. I remember it." *It tries to predict what it should experience from what it remembers experiencing.*
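The three voices of a triad might be caricatured as chained functions, with voice 1's free-running counter feeding voices 2 and 3. The thresholds and the wording of the narrative are invented; only the division of labor follows the description above:

```python
def voice_1(t):
    # CREATING EXPERIENCE: a free-running counter, "go up to 3, then back to 0"
    return t % 4


def voice_2(signal):
    # EXPERIENCING/DESCRIBING: label what just happened
    return "warm" if signal >= 2 else "cool"


def voice_3(structure, description):
    # NARRATING: explain the description in terms of the structure,
    # making sense of the first two voices ("I'm warm because it's 2")
    return f"I feel {description} because the count is {structure}"


t = 2
narrative = voice_3(voice_1(t), voice_2(voice_1(t)))
# narrative == "I feel warm because the count is 2"
```

A consciousness triad would then run the same three roles over the outputs of a previous triad instead of over raw signals.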

Poetic reasoning (a name Douglas made up) (event calculus; the poetry is used to realize when it doesn’t work)

(The egg cracking problem at first seems to be an issue of impossible magnitude, but the difficulty rather lies in decoding one's own thought patterns. To duplicate a human-like consciousness in AI would require a human-like logic and a similar narrative framework.)

Humans operate within an imagined world that we are always checking against our reality, modifying this internal world model as we go. We do this using constant self-talk in multiple voices, often storing the simplest possible information about an event, with all other details available only upon further inquiry. The first essential tool is parsimonious language.

As humans we run this constant process of creating explanations so that afterwards we will not have to think about anything more deeply than the explanation we created. So we are forever trying to simplify things to an explainable level. We don't aim to simplify them any further, but if we leave them too complex we can't think. So we built a language mentally, a mnemonic language, so that we can think about these complex things and recall them.

What I’ve done is create it in a way that gives you an internal dialogue that keeps things simple. No matter how complex your world is, you can always come up with a small something you can tell yourself that makes the world simple again.

Example: (arrangeable is actually an acquired or reactionary idea) (a ritual is always sequential, because it's a ritual) (imaginedResult is the new canon)

RitualRoteScript and ArrangableRoteScript can both be the product of another triad of thinking. ImaginedResult ends up being the product of one RitualRoteScript and one ArrangableRoteScript.

  1. RitualRoteScript: "I expect to hear myself think"
  2. ArrangableRoteScript: "I heard myself thinking"
  3. ImaginedResult: "I expect to hear myself think" + "I heard myself thinking"
  1. RitualRoteScript: "I sounded like myself"
  2. ArrangableRoteScript: "I expect to hear myself think" + "I heard myself thinking" (from 3)
  3. ImaginedResult: "I expect to hear myself thinking" + "sounding like myself"
  1. RitualRoteScript: "I’m thinking_about my thinking"
  2. ArrangableRoteScript: "I expect to hear myself thinking" + "sounding like myself" (from 6)
  3. ImaginedResult: "I am thinking_about expecting to sound like myself when I’m thinking"
  1. RitualRoteScript: "I remember what I am thinking_about"
  2. ArrangableRoteScript: "I am thinking_about expecting to sound like myself when I’m thinking"
  3. ImaginedResult: "I remember thinking_about expecting to sound like myself when I’m thinking"
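The chaining above, where each ImaginedResult becomes the ArrangableRoteScript of the next triad, can be sketched in a few lines. The `triad` helper and the "+" joining convention are illustrative stand-ins for whatever combination the real system performs:

```python
def triad(ritual, arrangeable):
    # one thinking step: an ImaginedResult is the product of one
    # RitualRoteScript and one ArrangableRoteScript
    return f"{ritual} + {arrangeable}"


# each ImaginedResult feeds forward as the next triad's ArrangableRoteScript
r1 = triad("I expect to hear myself think", "I heard myself thinking")
r2 = triad("I sounded like myself", r1)
r3 = triad("I'm thinking_about my thinking", r2)
r4 = triad("I remember what I am thinking_about", r3)
# r4 nests all four scripts, ending in a memory of the whole chain
```

The point of the sketch is the recursion: every round wraps the previous round's result, so the final script is a memory of thinking about thinking.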

Breadcrumbs: the sequence of what happened -

  1. "we expect thought,"
  2. "sounding like ourselves."
  3. "we thought_about thinking,"
  4. "we remember thinking."

The event calculus remembers.

It makes an inductive rule: I think like me.

  1. "it was us,"
  2. "we thought,"
  3. "we heard our thought,"
  4. "it sounded like us,"

The internal players that make up the human:

moved to

outline: introduce people to the problem, and to the idea that there may be a solution to this problem.

This is apparently to introduce the problem of egg cracking and explain why this method is best.

Problem is egg cracking problem.

The solution is apparently the invention of parsimonious language: using the event calculus (the method), talking to itself in parsimonious language using rules as they would appear in the event calculus (the formatting). How we talk to ourselves is governed by the rules of the event calculus.

Parsimonious language is controlled by the rules of event calculus.

Egg problem issues: We don’t know how to store in a way the robot would understand it.

Requires a High and low level description of the problem,

It would need to know How to break this into small enough problems.

How to break into steps using event calculus, constant processing of creating explanations.

Increase the velocity until egg cracks in half

Event calculus might infer:

Egg breaks around ½ of its circumference when the right amount of force is applied + when a certain amount of force is applied, the egg cracks ⅛ of its circumference = infers “the egg needs that force times 4”

And talks back to itself like “ is that right? What did I do and what happened?”

Maybe it tests an egg and uses what it learns to run back through, using and modifying its previous inferences again: “Is that better? What did I do and what did I get?” In the same way humans do it!

* The event calculus is a logical mechanism that infers “what’s true when” from being given “what happens when” (a narrative) and “what actions do” (a description of the effects of actions).

(e.g., cookies make me happy; I ate cookies at 12 noon; it infers I was happy at 12:02)
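That cookie inference is the classic HoldsAt query of the event calculus. A minimal sketch with no terminating events follows; the fluent and event names come from the example, while the table layout and function names are invented:

```python
# "what actions do": which fluent each event initiates
initiates = {"eat_cookies": "happy"}

# "what happens when": the narrative of events, in minutes since midnight
happens = [("eat_cookies", 720)]  # 12:00 noon


def holds_at(fluent, t):
    # "what's true when": a fluent holds at t if an earlier event initiated
    # it (this toy version has no Terminates, so fluents persist forever)
    return any(initiates.get(e) == fluent and te < t for e, te in happens)


# holds_at("happy", 722) -> True  (12:02, after the cookies)
# holds_at("happy", 700) -> False (11:40, before the cookies)
```

The real event calculus adds termination, clipping, and default persistence axioms; this sketch keeps only the initiation half to show the shape of the inference.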

Three types of event calculus tasks:

deductive tasks:

“what happens when” and “what actions do” are given and “what’s true when” is required.

Deductive tasks include temporal projection or prediction, where the outcome of a known sequence of actions is sought.

abductive tasks:

“what actions do” and “what’s true when” are supplied, and “what happens when” is required. In other words, a sequence of actions is sought that leads to a given outcome. Examples of such tasks include temporal explanation or postdiction, certain kinds of diagnosis, and planning.

inductive tasks:

“what’s true when” and “what happens when” are supplied, but “what actions do” is required. In this case, we’re seeking a set of general rules, a theory of the effects of actions, that accounts for observed data. Inductive tasks include certain kinds of learning, scientific discovery, and theory formation.
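The three task types differ only in which two ingredients are given and which one is sought. The toy domain below (all names invented for illustration, with actions modeled simply as functions on a state set) shows one tiny instance of each mode.

```python
# What actions do, as state-transforming functions on a set of fluents.
effects = {"eat_cookies": lambda s: s | {"happy"},
           "stub_toe":    lambda s: s - {"happy"}}

def deduce(state, actions):
    """Deduction: 'what happens when' + 'what actions do' -> 'what's true when'."""
    for a in actions:
        state = effects[a](state)
    return state

def abduce(state, goal, names):
    """Abduction: 'what actions do' + 'what's true when' -> 'what happens when'.
    Brute-force search for a short action sequence reaching the goal."""
    from itertools import product
    for n in (1, 2):
        for plan in product(names, repeat=n):
            if goal <= deduce(state, plan):
                return list(plan)
    return None

def induce(before, action, after):
    """Induction: observed states before and after -> a theory of the action."""
    return {"initiates": after - before, "terminates": before - after}

print(deduce(set(), ["eat_cookies"]))           # -> {'happy'}
print(abduce(set(), {"happy"}, list(effects)))  # -> ['eat_cookies']
print(induce(set(), "eat_cookies", {"happy"}))  # learned: initiates 'happy'
```

Real event calculus reasoners use logical inference rather than exhaustive search or set differencing, but the division of labor among the three modes is the same.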

How science currently thinks a human being sees and senses things in the world:

Sensory data hits a readable physical medium in the eyes.

Nerves are excited, and the excitation propagates additively to create a series of nodes.

A raster scanner relays the data.

Write an arg-logic auto-debater in Prolog (it sees clauses and debates whether they are good or bad), with an auto-elaborator as a second part (it creates story problems and riddles).

Individualized EP-Stories are

 "Stories of …"


  • what should I be thinking if I was me? (A story about thinking)

  • what should I be perceiving right now? (A story about the world situation and importance)
  • what should I be feeling if I was me?
  • when the explanation is not given it will prevent some event. 
  • the world is going on without me
  • the explanation by being given allowed some sort of event.  
  • wanting to perform a task. 
  • recalling some story
  • remembering something difficult 
  • writing instead of saying something to someone
  • writing a thank you letter to someone.
  • visiting a friend or family member.
  • truth changing over time
  • thinking the previous six things through.  
  • There is a person in that story who can afterwards continue on with the task. 
  • the person that accomplished their task now the task is accomplished 
  • telling someone you love them.
  • starting up a conversation with a stranger.
  • wanting to perform a task in order to have some event take place.  
  • wanting those two events to occur one after the other. 
  • wanting that question answered.  
  • drawing boxes on paper.
  • beginning to do the things in which they recalled they needed to do. 
  • showing someone a cute dog video.
  • showing someone a cute cat video.
  • of a person getting stuck unable to do things.
  • listening to a story from someone's life.
  • learning what doesn't have to be remembered and can be recreated.
  • kissing someone on the cheek.
  • How communications are mediated.
  • high-fiving someone.
  • hearing itself do #1 (thus, a Story of Self-Awareness)
  • having memories with limited access (what can't I remember?)
  • giving someone Reddit Gold.
  • giving someone a pleasant surprise.
  • giving someone a hug.
  • filling in the boxes or blanks of a form.  
  • donating money to a charity.
  • doing a favor for someone.
  • cracking a joke and making someone laugh.
  • Communications (with at least one example of fail correction)
  • comforting someone who is feeling down.
  • A story of Combining new things
  • catching up with someone you haven't talked to in a while.
  • buying a gift for someone.
  • an event taking place and another event takes place following the first.  
  • a summary story of addition that explains that, by taking two different numbers and adding them together using a prescribed method, you will get an answer to the question.
  • a person not wanting that. 
  • a person figuring out how to get unstuck.
  • which accomplishing that task was a requirement.
  • why someone does or doesn't want something.
  • learning how to do what someone else is doing by watching
  • explaining the wanting or not wanting of something to another person.
  • A Story that ties together all of the above

A Game to Map the Brain | Pearltrees

Internal Links from here were: 

LM489PrologMUD,  Pipeline,

training videos /Bootstrap Narratives,

Event Calculus.

External: Nomic, BINA 48

Other References

When you are experiencing the awareness that you are in pain, the very best trick is, of course, to get out of pain. We don't like the narratives that say we are in pain, nor do we like narratives that would lead to an outcome of pain. Most animals, even ones with 8k neurons, tend to act this way. Some think, then, that the line between humans and beasts is that we are capable of changing our "internal" narratives to become busy doing things that allow us not to focus on the pain? "Conscious thought"... Was that an innate part of our behavior, there to avoid continued future pain?

OK, it's not human-like awareness, you might claim? So how would "human-like" self-awareness be different? The ability to judge the level of discord in those thoughts? To realize we spent too little or too much time "thinking" about a particular thing? To realize some thoughts make us "feel" better or not? Some might say that, depending on how much they like a thought, they might try to change what they are thinking about. They even have tricks to help them refocus their thoughts.

Copyright © 2020 LOGICMOO (Unless otherwise credited in page)