In the simplest description, there is a Nomic-MU (Game World) in which the LM489 (AGI Bot) and the player interact. It was developed from the Franken-SWISH notebook.

DESIGN MOTIVATIONS

  • Permanent autonomous learning, including the ability to learn "from bootstrap"
  • Generalized decision making that is dependent on particular system narratives
  • Newly formed narratives dependent only on congenital narratives
  • Introspection of the system knowledge using a human-readable knowledge representation
  • Injection of experience and skills extracted from another compatible system or provided by a human, as an alternative form of learning
  • Ability to explain the decisions made by the system in a human-readable form
  • Acts using congenital scripts to explore the world and itself
  • Uses generalization to rationalize the environment
  • Storing successful narratives and adapting them to future problems
  • Daydreaming strategies for what to think about
  • Hierarchical planning to achieve a goal by breaking it down into sub-projects
  • Episode indexing in F-Logic, with mechanisms for indexing and retrieving cases
  • Serendipity detection and application: a mechanism for recognizing and exploiting accidental relationships among problems
  • Action mutation: a strategy for generating new possibilities when the system is stuck

PROTOTYPE

The LOGICMOO system has a robotic prototype called LM489 which humans can hang out with in a Nomic game called PrologMUD.

In the LOGICMOO system, our LM489 bot experiences events, actions, and internal dialog in the MUD through a special memory Narrative-Pipeline.   These are channeled through the pipeline to allow LM489 to configure the level of detail it will need to remember in order to learn how to do things.    The real world is full of complexity and is not conducive (quite the opposite) to making it easy for an agent to learn the language that will teach it to manipulate an environment.  Yet the simplified environment of a typical MUD would not be complex enough for the agent's imagination palace to learn language.  So we invented a new kind of NomicMU, built from higher-order logic (still with the syntax of FOL), that can scale to the complexity of human imagination and even a cybernetic lifeform's imagination.  In other words, anything we can talk about (and much beyond) can be represented in "living form."   So by conversing with humans and other robots, LM489 can learn new behaviors.

The LM489 system bootstraps with built-in narratives that, when played, constitute a set of innate hallucinations.  These hallucinations allow LM489 to know "just in time" how to do something by watching the agents contained in the hallucinations.  LM489 can store vast amounts of knowledge by recording such knowledge as "training videos", written only in the Event Calculus, that "play" in the PrologMUD (aka NomicMU) theatre.
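
As a rough illustration, here is a minimal sketch in SWI-Prolog of how such a "training video" might be written as an Event Calculus narrative that plays back in the MUD theatre; the fluent and event names, and the tiny playback rules, are invented for this sketch rather than taken from the actual LM489 code:

    %% A hypothetical "training video": an agent opens a door and walks through it.
    %% Fluents/events (door_open/1, located/2, open/2, walk_through/2) are invented;
    %% the playback rules are a minimal Event Calculus interpreter, not the full system.

    initiates(open(Agent, Door), door_open(Door), _) :- agent(Agent), door(Door).
    initiates(walk_through(Agent, _Door), located(Agent, hallway), _).
    terminates(walk_through(Agent, _Door), located(Agent, room1), _).

    % The scripted sequence of events (the "frames" of the video).
    happens(open(teacher, door1), 1).
    happens(walk_through(teacher, door1), 2).

    % Initial state of the theatre.
    initially(located(teacher, room1)).
    agent(teacher).
    door(door1).

    % Minimal playback: a fluent holds if it held initially and was never clipped,
    % or it was initiated earlier and has not been clipped since.
    holds_at(F, T) :- initially(F), \+ clipped(0, F, T).
    holds_at(F, T) :- happens(E, T1), T1 < T, initiates(E, F, T1), \+ clipped(T1, F, T).
    clipped(T1, F, T2) :- happens(E, T), T1 =< T, T < T2, terminates(E, F, T).

    % ?- holds_at(door_open(door1), 3).          % true: LM489 "watched" it happen
    % ?- holds_at(located(teacher, hallway), 3). % true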

---------------------------------------------------------------

Kittens, even with no adult cats present (such as bottle-fed kittens), already know how to play various games,
are knowledgeable in the rules, and know how to switch between sides...

  - Keep-away/Guarding an object 

  - Hide-n-seek (chase and predation games)

  - King-of-the-mountain (territory contest games) 

  - Follow-the-leader (watch each other to make sure they each tried all the same risky paths)

Environmental factors, or feline body design, would NOT account for the complicated behaviors.

Given the complications of the mind, wouldn't feline body evolution contain more of the resulting adaptations to having such behaviours, rather than the behaviour conforming to the feline's body?

It would seem very plausible for these games to be the product of a narrative [en]coding.  To date, no one has posed a viable alternative.

It is likely that whatever behaviours kittens later learn from watching others will be encoded in the same narrative form.

---------------------------------------------------------------

We've got diagrams of its modules on the way. Hold tight.  (our TODO List)

Logicmoo AGI

Artificial General Intelligence Libraries for Prolog LM489

LM489 is constantly reprogramming itself, attempting to splice together several built-in internal dialogs in order to make the external world and its internal anticipated world congruent.  LM489 expects whatever imagens are produced by the external world to be bound to the logogens of the internal dialogs.

(The real work of the system happens within the LM489 bot, just as the real work of being a thinking human happens inside of our minds.  We believe we can model the human experience of living in a private mind/body within a larger world!)

  • Uses several PrologMUDs, each with a selected level of detail; together these constitute its imagination space.
  • Trains in PrologMUD, which is a highly adaptable world, or a very cool game.
  • Uses NomicMU as a world in which to interact with humans.
  • Starts with several built-in internal dialogs which LM489 attempts to connect together in ways that are supported by evidence from the world.

    "LM489's understanding uses Script Theory to create cognitively plausible Internal Dialogs"


Programmed with stories

Aspires to completely understand a controlled subset of English during its bootstrapping phase

 "In LOGICMOO's representation language, we develop new cognitively informed plan-based models of narrative action and we attempt to demonstrate that these models can be used both to control a virtual environment and to make effective predictions about the results of users’ mental models of the stories that they characterize. Motivated by psychological models of plans and plan reasoning, the team builds on prior work in plan generation and plan-related communication to develop an architecture for creating understandable interaction in narrative-oriented virtual environments."

TESTING AGI NARRATIVE

A test narrative is based on the virtual embodiment and environment. In the discussed test, the virtual world and a set of virtual MUD-commands were simple enough for analysis yet sufficiently complex for testing the most significant aspects of the proposed approach. After analyzing a few variants, the following one was selected:

A virtual creature starts with zero knowledge about the world and autonomously learns to live in a mutable environment. Monitoring the process was implemented using a web interface that displays the current map of the world, the information about the current decision and expected consequences, and statistical data about the process, such as the total number of generated narratives, the histogram of the length of discovered patterns, the percentage of each feeling for the last N steps, and so on.

The experimentation has confirmed the ability to achieve autonomous learning from zero using the described approach. The most effective autonomous learning was achieved by using the curious points-of-view in the initial stage and then switching to venture points-of-view after the system had accumulated a sufficient amount of knowledge about the world.

ARCHITECTURE

AGI fundamentally expands the number of remembered past events: it remembers a much longer event sequence in a dedicated store. The event sequence is very similar to the stack referred to above, but it is intended for long-term data accumulation. The event sequence includes channel data, the internal state at the corresponding time, and the actions executed by the system. The longer the event sequence, the more intelligent the narrative that uses this information can be.
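
For concreteness, a minimal sketch of such an event-sequence store in SWI-Prolog (the event_seq/4 schema and predicate names are invented here, not the actual LM489 format):

    :- dynamic event_seq/4.

    % event_seq(Time, ChannelData, InternalState, Action): one remembered step.
    remember(ChannelData, InternalState, Action) :-
        get_time(Now),
        assertz(event_seq(Now, ChannelData, InternalState, Action)).

    % Recall the remembered steps in chronological order.
    recall_history(Ordered) :-
        findall(T-event(C, S, A), event_seq(T, C, S, A), Pairs),
        keysort(Pairs, Ordered).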

The AGI system consists of the core unit that is independent of the system narrative, the narrative unit that is dependent on the narrative and actually defines it, and a set of MUD-commands (physical or virtual) that are also dependent on the narrative. MUD-commands fall into two categories: CD-Scripts, which are intended either to change the system state or to affect the external world, and MUD-percepts, which are intended to obtain information about either the system state or the external world.

MUD-commands are mediators between the system core and the external world. Any information about the environment (including the embodiment, if any) obtained by the core, and any action performed by the system, is the result of a MUD-command request. Upon receiving the request, the MUD-command generates a response that contains either the requested data (in the case of a percept) or details about the performed action (in the case of a CD-Script). Both the request and the response are sequences of abstract metalanguage tokens.
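
A minimal sketch of that mediator interface in SWI-Prolog, with invented command names, token lists and world_state/2 facts:

    :- dynamic world_state/2.
    world_state(light, [on]).
    world_state(apple, [red, edible]).

    % A MUD-percept: asks the world for information, returned as metalanguage tokens.
    mud_percept([look, at, Object], Response) :-
        world_state(Object, Properties),
        Response = [Object, has | Properties].

    % A CD-Script: changes the world and reports what was done, also as tokens.
    cd_script([turn, light, off], Response) :-
        retract(world_state(light, [on])),
        assertz(world_state(light, [off])),
        Response = [light, is, now, off].

    % ?- mud_percept([look, at, apple], R).   % R = [apple, has, red, edible]
    % ?- cd_script([turn, light, off], R).    % R = [light, is, now, off]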

E2C is used for translation between internal and external representations of logical entities. The external representation is human-readable. The narrative representation provides the ability to introspect the system using an interface that presents the knowledge in human-readable form. The introspection channel can also be used for modification of the knowledge set.

Collected knowledge includes the historic sequence that describes what happened in the past (mainly, what requests were sent and what responses were received). The approach uses logical simplification of the historic sequence. If some subsequence occurs twice or more, then a new narrative is created to denote it. Next, the new narrative replaces all occurrences of the subsequence in the experience. Such ‘logical simplification’ is used recursively and produces hierarchies of narratives that describe all known subsequences in the whole collected experience. The tail of the historic sequence represents the current state. Since a logical metalanguage token may represent a sequence of atomic events, such a state is a sort of context for decision making.

Knowledge can be divided into exemplar knowledge (X is Y) and temporal knowledge (X occurs, and then Y occurs). Temporal knowledge is the starting point for the detection of cause-and-effect relationships. Our approach is distinct from many other approaches (such as artificial neural networks) that do not provide explicit ways of representing and processing temporal knowledge. Our AGI architecture is based on the tight integration of the exemplar knowledge represented by an Internal Dialog (described in [1], section 2) and the explicitly represented temporal knowledge stored in an Internal Narrative. Such integration is a foundation for forecasting, learning and decision-making (Fig. 1).
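
As a toy illustration (predicate names invented for this sketch), the two kinds of knowledge can be written down side by side, and the temporal side read directly as candidate cause-and-effect relationships:

    %% Exemplar knowledge ("X is Y") -- the Internal Dialog side.
    isa(sparrow, bird).
    isa(bird, animal).

    %% Temporal knowledge ("X occurs, and then Y occurs") -- the Internal Narrative side.
    then(switch_pressed, light_on).
    then(light_on, room_visible).

    % A candidate cause-and-effect relationship read off the temporal knowledge.
    candidate_cause(X, Y) :- then(X, Y).

    % Exemplar knowledge supports generalizing over the temporal knowledge.
    isa_trans(X, Y) :- isa(X, Y).
    isa_trans(X, Z) :- isa(X, Y), isa_trans(Y, Z).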

Embodiment: MUD-percepts and CD-Primitives

  • The command, which is directly mapped to an equivalent request to some MUD-command (such as ‘turn light off’) and is part of the introspection mechanism. The set of commands includes the red button command that turns the system off.
  • The task, which is similar to the goal of a narrow AI and is represented by a sequence of commands and sub-tasks (for example ‘go home’). In the case of a previously unknown task, the solution can be found both by using traditional approaches and with help from a trusted source (‘master’) that can suggest how to reduce the unknown task to a sequence of actions and known sub-tasks.
  • The activity mode, issued by the trusted source, which sets system narrative parameters (for example ‘move North-West’).

Note that the lack of external directives does not result in deactivation of the autonomous system. In such a case, the system acts on its own.
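
A hedged sketch of how the three kinds of external directives above might be dispatched (the directive terms, task reductions and helper predicates such as cd_request/1, ask_master/2 and set_narrative_param/2 are all invented for illustration):

    :- dynamic task/2.

    % A task reduces to a sequence of commands and known sub-tasks.
    task(go_home, [command(stand_up), task(walk_to_door), task(walk_home)]).

    % 1. A command maps directly to a MUD-command request.
    handle(command(turn_light_off)) :- cd_request([turn, light, off]).
    handle(command(red_button))     :- shutdown_system.

    % 2. A known task uses its stored reduction; an unknown task asks the
    %    trusted source ('master') for one and remembers it.
    handle(task(Name)) :-
        task(Name, Steps), !,
        maplist(handle, Steps).
    handle(task(Name)) :-
        ask_master(Name, Steps),
        assertz(task(Name, Steps)),
        maplist(handle, Steps).

    % 3. An activity mode only sets system narrative parameters.
    handle(mode(Parameter, Value)) :- set_narrative_param(Parameter, Value).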


System architecture (physical or virtual). A channel's output must be represented in narrative form as a sequence of metalanguage tokens. A metalanguage token corresponds to an entity, i.e., a metalanguage token is a reference to a node of the Internal Dialog. Information must be requested from a channel by sending it a proper sequence of metalanguage tokens that describes the appropriate details.

A CD-Script has the same interface: it gets a request for an action as a sequence of metalanguage tokens and responds with a sequence of metalanguage tokens that provides information about the performed action.

Internal Dialog and Internal Narrative

The AGI design is based on using the Internal Dialog for storing exemplar knowledge, as described in [1], section 2. This approach incorporates the abilities that are provided by semantic nets, ontologies, rule-based systems and predicate-based systems.

Past history (experience) is the backbone for discovering cause-and-effect relationships and for the ability to learn as a whole. Our design is based on collecting experience in an Internal Narrative that is permanently extended by pushing information about the current system state, performed actions and received channel data. The event sequence collects all the available information as a temporal sequence of metalanguage tokens. A reference to a narrative that represents a set of entities can also be used as an element of the history sequence (for example, to represent a set of simultaneous events).

The longer the past history stored in an Internal Narrative, the smarter the system can be, but the resources of a real system are limited. An obvious way to increase the size of the stored experience is to use some kind of data simplification. A common way to compress data is to find a repeated subsequence (aka pattern) in the sequence, then store such patterns in a separate place (the Internal Dialog) and replace all matched subsequences with references to the corresponding patterns. On the other hand, discovery of cause-and-effect relationships is based on searching for repeated cause-effect subsequences, so the detection of repeated patterns kills two birds with one stone: it increases the system's experience capacity and provides a way to discover cause-and-effect relationships. The Internal Dialog is used to store discovered patterns (among other knowledge).

Any newly discovered temporal pattern is represented as a combination of previously known entities. A hierarchy of temporal patterns is represented by a binary tree (a kind of pattern ontology) where each of the two child elements (head and tail respectively) can be another pattern or a single metalanguage token.  A pattern which consists of more than two elements can be represented by more than one tree, so the Internal Dialog keeps all the actual representations of a particular pattern that can be composed using fragments of the Internal Narrative stored in the Internal Dialog. In the Internal Dialog each pattern is bound to a set of pattern representations, where each element of this set is the root of a binary tree (which in turn is a sub-narrative of the Internal Dialog).

The history sequence is updated permanently, which is different from "classic" data simplification. When the next metalanguage token is pushed into the sequence, the last two added metalanguage tokens are checked to see if they are a representation of some known pattern. If so, then this pair is replaced by a reference to the known pattern, and the new pair of last metalanguage tokens is checked again (so cascading updates are possible). In the case when such convolution is not possible, the whole Internal Narrative is checked for two consecutive elements that are identical to the pair of last-pushed metalanguage tokens; if such a pair is discovered, a new pattern is composed and both pairs are replaced by references to the newly created pattern.
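
A hedged, minimal sketch of this incremental pair-folding in SWI-Prolog (the predicate names, the newest-first list convention, and pattern/3 storage are all invented for illustration; the step resembles byte-pair encoding over the token stream):

    :- dynamic pattern/3.      % pattern(Id, First, Second): First occurs, then Second

    % The narrative is kept newest-first.  Push a token, then either fold the
    % newest pair into a known pattern (cascading), or try to mint a new
    % pattern when the same pair already occurs earlier in the narrative.
    push_token(Tok, Narr0, Narr) :-
        Narr1 = [Tok | Narr0],
        (   fold(Narr1, Narr2), Narr2 \== Narr1
        ->  Narr = Narr2                      % convolution happened
        ;   maybe_new_pattern(Narr1, Narr)    % otherwise look for a repeated pair
        ).

    % Newest pair [Later, Earlier | _] matches pattern(Id, Earlier, Later).
    fold([A, B | Rest], Narr) :-
        pattern(Id, B, A), !,
        fold([Id | Rest], Narr).              % cascade: the new pair may fold again
    fold(Narr, Narr).

    % If the newest pair also occurs (adjacent) somewhere older, create a new
    % pattern and replace the occurrences by a reference to it.
    maybe_new_pattern([A, B | Rest], [Id | Compressed]) :-
        adjacent_pair(B, A, Rest), !,
        gensym(p, Id),
        assertz(pattern(Id, B, A)),
        replace_pair(Rest, B, A, Id, Compressed).
    maybe_new_pattern(Narr, Narr).

    adjacent_pair(B, A, [A, B | _]).
    adjacent_pair(B, A, [_ | Rest]) :- adjacent_pair(B, A, Rest).

    replace_pair([A, B | Rest], B, A, Id, [Id | Out]) :- !,
        replace_pair(Rest, B, A, Id, Out).
    replace_pair([X | Rest], B, A, Id, [X | Out]) :-
        replace_pair(Rest, B, A, Id, Out).
    replace_pair([], _, _, _, []).

Here fold/2 implements the cascading replacement of the newest pair, and maybe_new_pattern/2 covers the case where no known pattern matched but the same pair already occurs in the narrative.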

Despite the simplification, the Internal Narrative will grow permanently. To conform to the memory limitation, the Internal Narrative can simply be truncated on the oldest side, so the most outdated information becomes forgotten. Another way to maintain the size of memory used by the remembered history is to create a new pattern that is a generalization of a few existing patterns, and then make the related substitutions in the sequence. This kind of compaction means losing some details, i.e., such replacement should be applied first to the oldest part of the sequence. Pattern generalization can be done in many ways, for example (see the sketch after this list):

  • A metalanguage token in a pattern that refers to discretized data can be replaced by another metalanguage token that refers to the same value but on a coarser scale; this creates a new narrative of a coarse pattern that generalizes a few fine patterns
  • A set of patterns which represent different history fragments that finish with the same final effect can be combined into a new generalized narrative
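
As a toy illustration of the first bullet (all names invented for this sketch), a token for a finely discretized value can be re-coded on a coarser scale:

    %% Hypothetical generalization by coarsening a discretized sensor value.
    %% temp_fine/1 tokens name 1-degree bins; temp_coarse/1 tokens name 10-degree bins.

    coarsen(temp_fine(Deg), temp_coarse(Band)) :-
        Band is (Deg // 10) * 10.

    % Generalize a pattern (a list of tokens) by coarsening where possible.
    generalize_pattern([], []).
    generalize_pattern([Tok | Ts], [G | Gs]) :-
        ( coarsen(Tok, G) -> true ; G = Tok ),
        generalize_pattern(Ts, Gs).

    % ?- generalize_pattern([move(north), temp_fine(23)], P).
    %    P = [move(north), temp_coarse(20)].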

Prior Works

Probably the most successful program to use Roger Schank’s Conceptual Dependency was "Daydreamer", a computer model of the stream of thought developed at UCLA by Erik T. Mueller from 1983 to 1988.  It implemented the following:

  • daydreaming goals: strategies for what to think about
  • emotional control of thought: triggering and direction of processing by emotions
  • hierarchical planning: achieving a goal by breaking it down into subgoals
  • analogical planning (chunking): storing successful plans and adapting them to future problems
  • episode indexing and retrieval: mechanisms for indexing and retrieval of cases
  • serendipity detection and application: a mechanism for recognizing and exploiting accidental relationships among problems
  • action mutation: a strategy for generating new possibilities when the system is stuck


Another previous script-generating system is the SAM system (Cullingford, 1978; Schank and Riesbeck, 1981). LOGICMOO improves upon SAM in several respects. While SAM only handles several invented restaurant stories and several edited newspaper articles, LOGICMOO can handle an unlimited number of new and real texts. LOGICMOO is able to produce deeper models of the script events in time and space. For example, for the restaurant script LOGICMOO represents the waiter walking into the kitchen, picking up the food, walking back into the dining room, and placing the food on the table. SAM represents that the waiter performed an abstract transfer of possession (ATRANS) of the meal to the customer.

LOGICMOO and SAM have similar architectures. The functions of the ELI parser and the PP-Memory coreference resolution module of SAM are performed in LOGICMOO by the information extraction system. The functions of the script applier of SAM are performed in LOGICMOO by the reasoning problem builder and the commonsense reasoner.

Several other deep narrative generating systems are able to handle new stories (but not real-world text).  GPT2/3 System (2016-).  The Meta-AQUA system (Cox and Ram, 1999), an extension of AQUA (Ram, 1989), handles new stories automatically generated by the Tale-Spin story generator (Meehan, 1976). The distributed situation space model of Frank, Koppen, Noordman, and Vonk (2003) handles new stories occurring in a microworld consisting of two children playing soccer, hide-and-seek, and computer games. Story generating systems are reviewed by Ram and Moorman (1999) and Mueller (2002).   

The notion that generating consists of building models derives from past research on mental models (Craik, 1943; Johnson-Laird, 1983). Cognitive psychologists have argued that the reader of a narrative creates a situation or mental model of the narrative including the goals and personalities of the characters and the physical setting (van Dijk and Kintsch, 1983; Bower, 1989; Rickheit and Habel, 1999). Proponents of the deictic shift theory of narrative comprehension (Segal, 1990; Duchan, Bruder, and Hewitt, 1995) have argued that the reader constructs a mental model and keeps track of the shifting here and now as the narrative unfolds. The ThoughtTreasure system (Mueller, 1998) builds models of a story consisting of a sequence of time slices, where each time slice is a snapshot of (a) the physical world and (b) the mental world of each story character. The physical world is represented using spatial occupancy arrays, and mental states are represented using finite automata.

In order to perform commonsense reasoning, LOGICMOO uses the classical logic event calculus (Shanahan, 1995; Shanahan, 1997; Shanahan, 1999), which derives from the original event calculus of Kowalski and Sergot (1986).  (LOGICMOO collaborates with Bob Kowalski; his system, now called LPS, is our current inference engine.)  The classical logic event calculus is based on many-sorted predicate calculus with equality (Walther, 1987; Enderton, 2001). The event calculus includes sorts (or types) for fluents, events, time points, and domain objects. A fluent (McCarthy and Hayes, 1969) is a time-varying proposition, such as the fact that a particular object is in a particular room or that a particular character is hungry.

We use a classical logic axiomatization of the event calculus called the discrete event calculus which is equivalent to a standard axiomatization provided in a paper by Miller and Shanahan (2002) if the time point sort is restricted to the integers. Specifically, Mueller (2004a) proves that for integer time points the discrete event calculus is logically equivalent to a formulation that combines axioms from Sections 3.2, 3.5, and 3.7 of the paper by Miller and Shanahan.

The predicates of the classical logic event calculus are as follows:

  1. Happens(e,t): Event e occurs at time point t.
  2. HoldsAt(f,t): Fluent f is true at time point t.
  3. ReleasedAt(f,t): Fluent f is released from the commonsense law of inertia at time point t. The commonsense law of inertia (Sandewall, 1994; Shanahan, 1997) states that a fluent’s truth value persists unless the fluent is affected by an event. When a fluent is released from this law, its truth value can fluctuate.
  4. Initiates(e,f,t): If event e occurs at time point t, then fluent f becomes true after t and is no longer released from the commonsense law of inertia after t.
  5. Terminates(e,f,t): If event e occurs at time point t, then fluent f becomes false after t and is no longer released from the commonsense law of inertia after t.
  6. Releases(e,f,t): If event e occurs at time point t, then fluent f becomes released from the commonsense law of inertia after t.
  7. Trajectory(f1,t1,f2,t2): If fluent f1 is initiated by an event that occurs at time point t1, and 0 < t2, then fluent f2 is true at time point t1 + t2.
  8. AntiTrajectory(f1,t1,f2,t2): If fluent f1 is terminated by an event that occurs at time point t1, and 0 < t2, then fluent f2 is true at time point t1 + t2.
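
For example, the waiter placing the food on the table (from the restaurant script discussed above) might be axiomatized with these predicates roughly as follows; the event and fluent names are invented for illustration:

    Initiates(PlaceOn(waiter, food, table), On(food, table), t)
    Terminates(PickUp(waiter, food), On(food, table), t)
    Happens(PickUp(waiter, food), 1)
    Happens(WalkTo(waiter, diningRoom), 2)
    Happens(PlaceOn(waiter, food, table), 3)

Given the commonsense law of inertia, these axioms entail HoldsAt(On(food, table), t) for every time point t > 3, until some later event terminates the fluent.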

Narrative Processes

(At a minimum it must be dealing at all times )

MindwareOS has a general multi-threaded interpreter for putting together fluents, actions and events.

It executes N stories under N different Attention Schemas (AS) [Graziano 2015]. Each AS is in charge of tracking its place in a narrative/story.
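
A minimal sketch of one thread per Attention Schema in SWI-Prolog (the story/2 content and predicate names are invented; only thread_create/3, forall/2 and format/2 are standard):

    %% Hypothetical: run one thread per Attention Schema, each tracking its
    %% place in its own story.

    story(world_story,   [wake_up, see_guest, greet_guest]).
    story(thought_story, [recall_goal, plan_reply, check_plan]).

    % Each AS steps through its story, noting where it currently is.
    run_schema(Name) :-
        story(Name, Steps),
        forall(member(Step, Steps),
               format("AS ~w is now at step: ~w~n", [Name, Step])).

    % Launch every Attention Schema in its own thread.
    run_all_schemas :-
        forall(story(Name, _),
               thread_create(run_schema(Name), _, [detached(true)])).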

  • If this was a story about what was happening in the world then what narrative/story am I participating in?

 (The system expects that it is participating as a character from within.)  The system chooses a story that may include, for example, that it is at an interview and is holding a conversation.

 

  • If there was a story about what I should be doing mentally right now, what should I be thinking about?
    What parts of my thinking line up and don't line up? What do I need to add or pay no more attention to?  Should I be learning new things, or do I already understand this?
     
  • If this was a story about emotional reaction what reactions would make sense by me and any other participant in this story?  Example: Is this something that would make someone uncomfortable?  Is a person right now expecting me to feel a certain way?  What story are they playing out that would make them expect me to feel this way?
     
  • If I was trying to write a story about everything that was happening to me right now that includes all the things In the above (bullet points) what would that be?  How would I explain this to myself later In a way that would allow me to feel that I remembered what happened?  [Schank book: "Tell Me a Story"]
  • keeps its data in an elaboration-tolerant formulation
     
  • LPS problem-solving and planning system;
     
  • A narration plan is related to the actions LM489 needs to accomplish in order to accommodate its intentions (e.g., if it is responding to some goal that was "posted", like the "Who is my child?" question). It does this through an interaction plan, which relates dialogue plans within a pragmatic frame (which captures the current general state of the task-oriented dialogue, e.g., are we trying to figure out what we are trying to do, are we testing a particular plan we have in mind, are we selecting between possible plan executions, etc.)
     
  • Every CHAMBER is constantly looking for, and responding to, "posted" messages. When it gets one it can understand, and is interested in, it reacts to it by updating its own internal data structures (this varies widely, of course, from CHAMBER to CHAMBER), and in particular by updating its menu of possible tasks/options for the LM489. E.g., if the LM489 just explained that it is thinking about its child, one CHAMBER might raise the question: Who is my child? That question, in ANAMATORY_LOGIC, would be seen by other CHAMBERs, who hopefully reply back in ANAMATORY_LOGIC: "My child is Joey". A sketch of this posting loop follows below.
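
A hedged sketch of that posting loop on a shared blackboard (predicate and message names are invented; the ANAMATORY_LOGIC messages are shown as plain Prolog terms):

    %% Hypothetical blackboard for CHAMBERs.  posted/1 holds the messages that
    %% every CHAMBER watches; the reactions below are invented for illustration.

    :- dynamic posted/1.

    post(Msg) :- assertz(posted(Msg)).

    % A CHAMBER that reacts to talk about a child by raising a question.
    chamber_family :-
        posted(thinking_about(child)),
        \+ posted(question(who_is_my_child)),
        post(question(who_is_my_child)).

    % A CHAMBER that knows the answer and replies when the question appears.
    chamber_memory :-
        posted(question(who_is_my_child)),
        \+ posted(answer(my_child_is(joey))),
        post(answer(my_child_is(joey))).

    % ?- post(thinking_about(child)), chamber_family, chamber_memory, posted(X).
    %    X = thinking_about(child) ; X = question(who_is_my_child) ;
    %    X = answer(my_child_is(joey)).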

Some key technical aspects (some references overlap Rothblat's 2014 paper):

  • A few gigabytes of remembered information [Landauer, 1986].
  • LM489s, like ordinary people, use analogies, metaphors and similes very freely in conversation. LM489's analogical reasoning techniques can be used to understand within-sentence metaphors that are used by experts, thus making communication more natural, and these, together with an ANAMATORY_LOGIC system, can be used to understand extended, multi-sentence analogies used by the HUMAN, as well as to suggest a new CHAMBER for a newly created concept based on analogy to some other, pre-existing concept.
  • We actually process any needed problem-solving actions, filling in any open questions based on our underlying plans (which may involve plan recognition steps as well).
     
  • a diminished repertoire of remembered thoughts from day-to-day [Ebbinghaus, 1885]
     
  • Then we use the updated state of the story to decide what to say to the GUEST next.
     
  • The speech acts are constructed into coherent (when possible) discourse segments. These group together sentences and fragments that achieve a common goal. These segments then fit together into a dialogue plan. The dialogue plan predicts what a GUEST is likely to say next, and is used to drastically trim away "obvious" pieces of the system’s outputs to the GUEST.
     
  • achievable because there are but a limited number of personality traits [Costa and McCrae, 1990]
     
  • We Humans use a Narrative Generation Tool (for debugging): the LM489 generates a set of statements which are consistent with what has been thought about so far, and lets us notice when something is wrong. We will realize that we probably didn't adequately specify something.
     
  • uses a finite set of universal expressions and emotions [Brown, 1991]
     

Theater Spaces

  • But when some difficult choice arises, during that structure-mapping process, the Analogy module might be stumped, and it may be most cost-effective for it to hand that choice to the Dialogue module, which will then formulate it as an English question for the user.
     
  • We will process the MINDFILES as a story using the Conceptual Dependency Theory of Roger Schank (whole documents would be paraphrased and indexed as a collection of stories and substories), filling in holes with information from derived situational descriptions in Michael Kifer’s F-logic, which helps create a queryable intelligence (these are easily collected from existing sources and DRS-Model labels already researched through their relationship to verbs).
     
  • An initial Dialogue Management System (DMS) will be built during the first six months, and continuously improved and extended during the rest of the project. It will provide a modular capability to process a logical representation of natural language inputs, allowing a human to enter, edit, query, and UNDERSTAND the MEMORY stored in the BLACKBOARD – in particular to enter, edit, query, and UNDERSTAND a growing theory of the GUEST’s domain. The point is that it will track an entire extended session with the user, so they can refer back to earlier interchanges, previous states of the partial memory, etc.
  • Task 11. Program Management and MEMORY Formation Infrastructure. This includes team coordination, integration, architecture specification, distribution of LOGICMOO_488/IDE software releases and LOGICMOO_489 system releases; reporting and support of MINDMAP program; hierarchical planning system; MINDMAP team collaboration toolkit for multiple GUESTs and GUEST teams.
     
  • Task 12. MEMORY Acquisition Tools and Technologies. This includes parsing of natural language to logic; generation of English from logic; dialogue management over an extended set of sentences, representing discourse structures in PROTOLANGUAGEs; representing dialogue plans; MEMORY-rich UNDERSTANDER Tools for GUEST’s; and advanced tools for analogy, theory refinement, sketch input, automatic metric collection, hierarchical planning, and integration of other MINDMAP technology that complies with our ontology and architecture.
     
  • Challenge Problems and Domain MEMORY. This includes intermediate-level theories needed as basic background prerequisite theories to support the development of specialized MEMORYs in the logic and BW areas, by (respectively) grad students in biology and GUEST’s. Content needed in support of the MINDMAP Challenge Problems. Our team’s GUEST’s (under contract to us from LOGICMOO_OPENSRC) are utilized here, to vet MEMORY content and to test the developing tools and technologies.
     
  • Assessment of metrics, creation and revision of metrics will be necessary and ongoing to ensure that the software is working well and the path forward is clear.

     

     Sensory Input (Unfiltered and Chaotic)

     Physical Attention (What do I see/hear/smell?) Imagination

     Mental Imagination - Mind's Eye/Ear/Nose Imagination

       More than one CHAMBER creates these: 

          CHAMBER1 "I imagine a fence with an open gate. "

          CHAMBER2 - decorates the details of the fence and gate.. 

            (It is speaking into existence (not using words) all the details one would see if imagining this )

          CHAMBER1 - imagine the gate made out of wood.. 

             (these consciousnesses together paint the entire picture)

     Mental Attention

         Notices CHAMBER1's actions but not CHAMBER2's

 

     What chooses the content of CHAMBER1/CHAMBER2?

         CHAMBER1/CHAMBER2 are "CHAMBERs"  

Each Elaboration has its own story that may or may not:

Concurrent Narratives/Scripts that create Imaginings

  • What should I be perceiving right now? (A story about the world situation)
  • What should I be thinking if I was me? (A story about thinking)
  • What should I be feeling if I was me?
  • A Story of hearing itself do #1 (thus, a Story of Self-Awareness)
  • A Story of truth changing over time
  • A Story of having memories with limited access (what can't I remember?)
  • A Story of learning what doesn't have to be remembered and can be recreated 
  • A story of Combining new things
  • A Story of Communications
  • A Story of How communications are mitigated 
  • A Story that  the world is going on without it
  • Individualized EP-Stories
  • A Story that ties together all of the above

There's a story of someone drawing boxes on paper. Next there is a story of filling in the boxes.  There's a story that someone wanted to do some addition.  Before the person could do the addition, they had to set up a way to do so.

There is a story that someone wants to perform a task.  There's a story that someone recalls a way to do so.  There is a story of someone beginning to do the things they recalled they needed to do.

   There is a story of a person getting stuck, unable to do things.

   There is a story of a person figuring out how to get unstuck.

 There is a story of the person that accomplished their task.

 Now that the task is accomplished, there is a larger story in which accomplishing that task was a requirement.

There is a story of someone wanting to perform a task in order to have some event take place.   There is a story of an event taking place and another event taking place following the first.   There is a story of someone wanting those two events to occur one after the other.  There's a story of a person not wanting that.  There is a story about why someone does or doesn't want something.  There is a story of explaining the wanting or not wanting of something to another person.  There is a story that the explanation, by being given, allowed some sort of event.   There is a story that when the explanation is not given it will prevent some event.   There is a story of thinking the previous six things through.

There is a summary story of addition that explains that by taking two different numbers and adding them together using the prescribed method you will get an answer to the question.   There is a story of someone wanting that question answered.   There is a person in that story who can afterwards continue on with the task.

So in the above example there were at least three different points of view:

Objective story-to-story transitions …   These are strung together by the Event Calculus.

That is, some stories which are about such transitions are beyond the scope of the individual agent carrying out the actions.

Code for narration transitions

Example narrations

Code for playing narrations into the blackboard visualization areas

Code for the blackboards themselves

Spec for how those parts work together

 

Tags: LOGICMOO
Created by admin of logicmoo.org on 2021/02/04 14:09
     
Copyright © 2020 LOGICMOO (Unless otherwise credited in page)