Conscious and Unconscious Aspects of the Mind

Organizing the world into the form of a story is similar to the work performed by consciousness. However, one can assume that a large part of the story-generation process is an unconscious, automatized process. Indeed, human beings are unaware of how past experiences are encoded and stored in memory, and of how they are retrieved or recollected. The detailed mental process of creating or reading fiction is also almost impossible to explain. However, human beings generally possess intuition with regard to controlling the self. In the theory presented here, the relationship between the conscious and unconscious aspects of the mind is explained as the interaction between the narrator-self and a story's self-organization, including its relationships with other stories.

We classify the principles of the self-organization of a mental world into five types of generative actions: hierarchical, connective, contextual, gathering, and adaptive. Integrative cognition is explained with these generative actions in the form of a distributed multi-agent system of stories.

Events perceived in the real world (well, in the NomicMU Game) become event sequences (a small sketch of such a sequence follows the list below)

 Smells/Tastes/Feels/Sounds, Visual Animations

  • Start/Continue/Stop

Exemplar - Agents, Objects, and their Subparts

  • Appear/Remain/Disappear
  • Property Changes
  • Translocate

Meaningful Conveyances

  • Agents make Understandable Gestures

  • Agents Say Understandable Things

  • Objects have writing on them

  • Radios play songs
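
To make this concrete, here is a minimal sketch of how such perceived events might be logged and ordered as an event sequence. The predicate perceived/3 and the functor names are invented for illustration; they are not LOGICMOO's actual vocabulary.

    % Illustrative sketch only: perceived(World, Time, Event).
    :- dynamic perceived/3.

    perceived(mud1, 1, appear(exemplar(agent(joe)))).
    perceived(mud1, 2, start(sound(radio, song))).
    perceived(mud1, 3, property_change(object(apple), color(green), color(red))).
    perceived(mud1, 4, translocate(agent(joe), kitchen, garden)).
    perceived(mud1, 5, says(agent(joe), "I am going to eat because I am hungry")).
    perceived(mud1, 6, stop(sound(radio, song))).

    % An event sequence (a narrative) is just the time-ordered list of events.
    event_sequence(World, Events) :-
        findall(T-E, perceived(World, T, E), Pairs),
        keysort(Pairs, Sorted),
        findall(E, member(_-E, Sorted), Events).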

Narratives may be replayed to create a copy of the original PrologMUD (see the sketch after this list)

  •  This PrologMUD can be thought of as a Virtual World which LOGICMOO hallucinates.

  • This Virtual World can be re-experienced just as the previous world was (as perceived events).

  • Those perceived events can be re-“played” to create a copy in which LOGICMOO hallucinates yet another Virtual World of the Virtual World.

  • This can be done indefinitely; those copies may become simpler or more complex (this will be explained later, as pipelines)
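
A minimal sketch of the replay idea, reusing the illustrative perceived/3 representation above; replay_into/2 and copy_of/2 are made-up names, not real LOGICMOO predicates.

    % Illustrative sketch only: replay one world's narrative into a fresh
    % world id, hallucinating a copy that can itself be replayed again.
    :- dynamic perceived/3, copy_of/2.

    replay_into(SourceWorld, CopyWorld) :-
        assertz(copy_of(CopyWorld, SourceWorld)),
        forall(perceived(SourceWorld, T, Event),
               assertz(perceived(CopyWorld, T, Event))).

    % ?- replay_into(mud1, imagined1), replay_into(imagined1, imagined2).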

The event sequences (which are narratives) are equivalent to “internal dialog” (sketched after this list)

  • “internal dialog” may be modified and then “played back” to create Imagined Worlds

  • An Imagined World can be re-experienced and thus create a new set of perceived Imagined events

  • Those perceived Imagined events can be re-“played” to create copies of those Imagined Worlds

  • Those Imagined Worlds can be compared to each other; the differences can constitute a hybrid PrologMUD
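
Continuing the same illustrative representation, comparing two imagined worlds and keeping what differs (the seed of a hybrid PrologMUD) might look roughly like this:

    % Illustrative sketch only: the differences between two imagined worlds.
    world_events(World, Events) :-
        findall(E, perceived(World, _, E), Events).

    hybrid_differences(WorldA, WorldB, Differences) :-
        world_events(WorldA, EventsA),
        world_events(WorldB, EventsB),
        subtract(EventsA, EventsB, OnlyInA),
        subtract(EventsB, EventsA, OnlyInB),
        append(OnlyInA, OnlyInB, Differences).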

Internal dialog can be compared to other narratives without involving worlds at all (a small sketch follows this list).

  • The differences can be made into other narratives (thus internal dialogs)

  • This is used to generalize, specialize or condense Internal Dialogs.

  • It may also further isolate the actions and recombine them so that they are perceived as new action sequences

  • Those perceived action sequences can be “played” into copies of PrologMUD
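
A small sketch of condensing an internal dialog and isolating its actions; the does/2 event shape is an assumption made only for this example.

    % Illustrative sketch only: condense a dialog (a list of events) by
    % collapsing immediate repetitions, and pull out just the actions so
    % they can be recombined into new action sequences.
    condense([], []).
    condense([E], [E]).
    condense([E, E | Rest], Out) :- !, condense([E | Rest], Out).
    condense([E1, E2 | Rest], [E1 | Out]) :- condense([E2 | Rest], Out).

    isolate_actions(Dialog, Actions) :-
        findall(A, member(does(_Agent, A), Dialog), Actions).

    % ?- condense([knock, knock, knock, open_door], D).
    % D = [knock, open_door].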

We may create pipelines between the above elements (a sketch follows this list)

  • Those elements again are: Events, Internal Dialog, and Actions

  • Pipelines may combine, split and recombine these into Events/Actions, and Internal Dialog 

  • Thus creating Worlds/PrologMUDs/Imagined Worlds.
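
As a sketch, a pipeline can be pictured as a list of stage names threaded over one representation; the stage names in the example query are hypothetical placeholders.

    % Illustrative sketch only: run a list of stages, each mapping one
    % representation (events, internal dialog, actions) to the next.
    run_pipeline([], Data, Data).
    run_pipeline([Stage | Stages], DataIn, DataOut) :-
        call(Stage, DataIn, DataMid),
        run_pipeline(Stages, DataMid, DataOut).

    % e.g. ?- run_pipeline([events_to_dialog, generalize_dialog, dialog_to_world],
    %                      Events, ImaginedWorld).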

Increasing and decreasing specificity within the narrative pipelines


GRAPHICAL PRESENTATION
(of this document)

https://docs.google.com/drawings/u/0/d/sYWokCohC5sOCv-6DYQPyuQ/image?w=766&h=742&rev=1&ac=1&parent=14q98Ia7hvM7s4fPx07vfxzaV5uqWFeHjVYIt9CqzsQI

Sequegens can be sensory

 So far, in narratives, we have talked about non-sensory sequegens.

Here is an example of a Visual Sequegen Codex

Untitled Project ‐ Made with Clipchamp (2).gif

(from Dustin Lacewell dlacewell@gmail.com ldlework.com)

..blah blah blah..
 

We've waited until now to discuss Codexes.

..blah blah blah..

Platform

The Goals:

Avoid scaling problems [like vision/sensor bloat] until we have artificial generalized intelligence.

Put robots in a simple MUD world but make sure the world still supports extremely complex human actions/interactions.

Use pre-canned scripts to teach the robot in that environment.


This is a step away from the type of AI that neural networks are going for: they assume that by solving problems of the type that humans solve, the "hidden features" of the mind, such as consciousness, will emerge. We think that to model human consciousness we need to build a system that is capable of carrying out some of the same processes a human does.

What LOGICMOO Simulates:

A) Creation of an internal dialogical record of what happens within it.

B) Simulated "reliving" is done by replaying the internal dialog that happened while experiencing something the first time.

C) The ability to host multiple internal dialogs at once

D) Transferring an experience is done by transferring internal dialog.


E) PrologMUD's current Percepts list is considered an internal dialog and feeds into larger processes.

This Internal Dialog is an EventCalc stream.  
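
For orientation, such a stream can be pictured with the textbook simplified Event Calculus below; this is a standard formulation, not LOGICMOO's actual EventCalc code, and the facts about Joe are invented.

    % Standard simplified Event Calculus core (illustrative only).
    % happens(Event, Time) is the internal-dialog stream; initiates/3 and
    % terminates/3 say which fluents an event switches on or off.
    holds_at(Fluent, T) :-
        happens(Event, T1), T1 < T,
        initiates(Event, Fluent, T1),
        \+ clipped(T1, Fluent, T).

    clipped(T1, Fluent, T2) :-
        happens(Event, T), T1 =< T, T < T2,
        terminates(Event, Fluent, T).

    % Invented sample stream: Joe eats at time 3.
    happens(eats(joe, apple), 3).
    initiates(eats(Agent, _Food), full(Agent), _Time).
    terminates(eats(Agent, _Food), hungry(Agent), _Time).

    % ?- holds_at(full(joe), 5).   % succeeds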

When the robot starts, it has several built-in internal dialogs available, and it has the ability to evaluate them.

Most internal dialogs are only available in certain circumstances.

Individualized EP-Stories are

 "Stories of …"

  • what should I be thinking if I was me? (A story about thinking)

https://lh4.googleusercontent.com/D-wCt1DMWvSGwqQIExtFa8H_ok7-gIev014vp_4DnRa3vSgjioaTkLnGDN9EysM-UlZiMGnC_jMdK23L4N3ToPStsAWO6qFWID_CLS5JaBWNivMuejEAakPxvbJ9ytUPxi-Hg4o

  • what should I be perceiving right now? (A story about the world situation and importance)
  • what should I be feeling if I was me?
  • when the explanation is not given it will prevent some event. 
  • the world is going on without me
  • the explanation by being given allowed some sort of event.  
  • wanting to perform a task. 
  • recalling some story
  • remembering something difficult 
  • writing instead of saying something to someone
  • writing a thank you letter to someone.
  • visiting a friend or family member.
  • truth changing over time
  • thinking the previous six things through.  
  • There is a person in that story who can afterwards continue on with the task. 
  • the person who accomplished their task, now that the task is accomplished
  • telling someone you love them.
  • starting up a conversation with a stranger.
  • wanting to perform a task in order to have some event take place.  
  • wanting those two events to occur one after the other. 
  • wanting that question answered.  
  • drawing boxes on paper.
  • beginning to do the things in which they recalled they needed to do. 
  • showing someone a cute dog video.
  • showing someone a cute cat video.
  • of a person getting stuck unable to do things.
  • listening to a story from someone's life.
  • learning what doesn't have to be remembered and can be recreated
  • kissing someone on the cheek.
  • How communications are mitigated 
  • high-fiving someone.
  • hearing itself do #1 (thus, a Story of Self-Awareness)
  • having memories with limited access (what can't I remember?)
  • giving someone Reddit Gold.
  • giving someone a pleasant surprise.
  • giving someone a hug.
  • filling in the boxes or blanks of a form.  
  • donating money to a charity.
  • doing a favor for someone.
  • cracking a joke and making someone laugh.
  • Communications (with at least one example of fail correction)
  • comforting someone who is feeling down.
  • A story of Combining new things
  • catching up with someone you haven't talked to in a while.
  • buying a gift for someone.
  • an event taking place and another event takes place following the first.  
  • a summary story of addition that explains that by taking two different numbers and adding them together, using a prescribed method, you will get an answer to the question.
  • a person not wanting that. 
  • a person figuring out how to get unstuck.
  • which accomplishing that task was a requirement.
  • why someone does or doesn't want something.
  • learning how to do what someone else is doing by watching
  • explaining the wanting or not wanting of something to another person.
  • A Story that ties together all of the above

Overview of operations inside LOGICMOO's bot

There are multiple MUDs running inside a single Virtual Robot
 

MUD#1 - Space for learning from VirtualTrainer#1

MUD#2 - Vantage point of the bot in a simplistically imagined world

MUD#3 - The MUD that real players play in.

BOT#1 - Bot in MUD#1 that is tethered to a VirtualTrainer 

BOT#2 - Bot in MUD#2 that records information from BOT#1

BOT#3 - Bot in MUD#3, the Virtual Robot with actual humans

We designed MUD#1 to transfer its logic to MUD#2 without too much fuss. 

We tethered BOT#1 to a VirtualTrainer in MUD#1 so that wherever the VirtualTrainer goes, the bot (LM489) goes.  LM489 bots have an empty MUD which serves as their imagination, called MUD#2.  MUD#1 is the playable PrologMUD.  MUD#2 is a version of PrologMUD that is not playable, only requires "sequences of percepts", and creates minorly stateful objects.  Each time a "sequence" happens in MUD#1, MUD#2 records it as a "valid mud sequence".  The idea here is that MUD#2 is slowly "programmed" as interaction in MUD#1 takes place.  This will lead to many misunderstood but possibly "useful simplifications".
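
A minimal sketch of that recording step, with invented predicate names (observe_sequence/2, valid_mud_sequence/2) rather than the real implementation:

    % Illustrative sketch only: MUD#2 permissively stores every sequence it
    % has seen in MUD#1 as a "valid mud sequence" (VMS), with an expiry time.
    :- dynamic valid_mud_sequence/2.

    observe_sequence(Sequence, Now) :-
        Expires is Now + 600,            % invented lifetime; VMS data expires quickly
        assertz(valid_mud_sequence(Sequence, Expires)).

    % In MUD#2 anything that has ever been seen may "magically" happen again.
    allowed_in_mud2(Sequence, Now) :-
        valid_mud_sequence(Sequence, Expires),
        Now =< Expires.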
 

CAS-AM:   
Recap of what I’m guessing is happening: the NARRATION COMPONENT. We tether a bot to a human player in MUD#1 (the shared game world), so the bot is always with the player to create and document the map in MUD#2. It records the map and important events, but not in much detail.
 

  • Maybe we need LM489 (the AGI), the Bot (who interacts and lives with players), and the bots (which live inside the Bot and do unconscious things)

  • In a world of spies, you have the spies themselves (they monitor things and sometimes do a small job) (ex: Narration Component), their supervisor contact (who decides when they should do important actions) (also Narration Component), their Boss (who told them the goal for the political arena) (???), and that supervisor's boss, The Director (LM489). The lower-level agents don’t really make those big calls (the plot of every spy movie is a lower-level agent trying to act like a higher-level agent?). It’s a hive mind, and you’re trying to describe the jobs of a hivemind. And this is the part that tries to make sense of sensory input.


In other words, the "sequences" very quickly constitute "valid MUD sequences" (VMSes) in MUD#2.  MUD#2 is extremely permissive and allows every sequence it has seen to magically take place.  After all, it is not reality; it is the imaginary world for LM489.  (Note: MUD#2 data expires rather quickly.)  The idea is that eventually LM489 will attempt to model what worked in MUD#2 inside of MUD#1, which mostly should cause errors because MUD#1 follows actual rules.  When errors take place, LM489 has to correct these things.  LM489 has some canned correction dialogs (CCDs) built in already that are programmed/vetted in terms of what is already known to work in both MUDs.  (CCDs themselves are NarrationPLLs.)  The idea here is that CCDs will train LM489 to interact and become an expert at using MUD#1.  Before getting our whatnots in a bunch, the point here is not to emulate anything at all close to the real world, or even real-world-type learning, but to ensure we have at least a "transfer model" for transferring bits and pieces between uninformed mind-slices that constitute the make-up of LM489's total mind areas. These are several simpletons, each of whom has only [mostly unshared] sub-slices of MUD#2.

  • CAS-AM recap of what I’m guessing is happening: MUD#2 records the map and the world’s "valid mud sequences" (VMSes) without judgment. This data is being saved and serialized into PLL. When the two different MUDs are compared there will be incongruencies, and there need to be, as LM489 will seek to correct these errors in its internal world.  LM489 has some "canned correction dialogs" (CCDs) built in already that are programmed/vetted in terms of what is already known to work in both MUDs.  (CCDs themselves are Narration PLLs.)  Each player-connected bot is actually made of many smaller bots who work in unison, with each tiny bot tasked with successfully completing a relatively simple action (like navigation or an act like eating).  (I’m imagining this like making 9-year-olds do the daily activities of congress; it's not great, but to an alien it would look equivalent. Then you can get a review board of a higher level to decide what the worst mistake was.) Our only goal above was to prove to our project that we can get our transfers written in PLL, and to test that the PLLs, when broken, have CCDs that mediate the repairs. (This repair process is a developmental milestone.)


The bot's next goal is to convert as many VMSes (Valid MUD Sequences) as possible into VLSes (Valid Language Sequences).

Next we up our game by having a human give a description of the VMS while it is happening.  This means that as the human moves around and acts in MUD#1 (where MUD#1 sends percepts), the human announces what they want to do: "I am going to eat because I am hungry."  Then they perform actions in MUD#1: "Take food from the backpack.  Eat the food.  I am no longer hungry."  LM489 sees the food appear in the human's hand.  LM489 makes food appear in a human's hand in MUD#2.  LM489 sees the eating act.  LM489 hears that the human is no longer hungry.  (MUD#2 replicates the narrative speaking of the human as it is going on, as well as all the changes.)  LM489 already has a model that allows it to replace the human with anybody in MUD#2.  Sound like more canned stuff?  Yes, we are still cheating.  What we are doing in this phase, though, is ensuring that we can have Canned Monkey Scripts (CMSes), since we will have several a-priori CMSes (CMSes are again NarrationPLLs) which are called proto-memories, analogous to the mechanism that allows animals to store very simple behaviours like walking.  Why we adorn these with announcements like "I am going to eat because I am hungry" is that we are creating an infrastructure that operates from an "internal dialog" and not from any other seedlings.
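
A tiny sketch of what one Canned Monkey Script might look like as a Prolog fact; the predicate name and term shapes are hypothetical placeholders, not the real NarrationPLL format.

    % Illustrative sketch only: a CMS pairs a spoken intent, an action
    % sequence, and the announced outcome, so behaviour stays anchored to
    % an internal dialog.
    canned_monkey_script(eat_when_hungry,
        says("I am going to eat because I am hungry"),
        [ take(food, backpack),
          eat(food) ],
        says("I am no longer hungry")).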

So far we have described a system that can look at MUD#1 and fully transfer the low-level description into the MUD#2 rule base from three types of PLLs:

  1. VMS - Valid MUD Sequences (observed in MUD#1 and transferred to MUD#2)
  2. CMS - Canned Monkey Scripts (VMSes that have spoken intents and outcomes)
  3. CCD - Canned Correction Dialogs (when a MUD#2 VMS won't transfer correctly back to MUD#1, we use CCDs to correct it)

  Example:
  1. "I expected $A to work, but it didn't, may we discuss $A?"
  2. Wait for confirmation..
  3. Ask initial categorizations: "Is $A an action I can do?"
  4. Store the results of $A.
  5. Convert this to a STRIPS notation: (always-rule (preconds (At ?User1) (Unknown ?ConceptA)) (postconds (stable-system) (knownAbout ?ConceptA))) ...

Before getting too deep we will attempt to summarize the rest of the non-technical overview:

  • How VMSes are combined
  • How CMSes' spoken intents/outcomes are added to narrations that become VMSes, now called VLSes (Valid Language Sequences)
  • How CCDs are adapted to work for VLSes (Valid Language Sequences)


  This correction process happens by having semi-canned dialogs with a ...

Non-Technical Overview

The restriction of using limited resources

A user application, when defined, provides the PRS system with a set of knowledge areas. Each knowledge area is a piece of procedural knowledge that specifies how to do something, e.g., how to navigate down a corridor, or how to plan a path (in contrast with robotic architectures where the programmer just provides a model of what the states of the world are and how the agent's primitive actions affect them). Such a program, together with a PRS interpreter, is used to control an agent.

An interpreter is responsible for maintaining current beliefs about the world state, choosing which goals to attempt to achieve next, and choosing which knowledge area to apply in the current situation. How exactly these operations are performed might depend on domain-specific meta-level knowledge areas. Unlike traditional AI planning systems that generate a complete plan at the beginning, and replan if unexpected things happen, PRS interleaves planning and doing actions in the world. At any point, the system might only have a partially specified plan for the future.

PRS is based on the State, Goal, Action framework for intelligent agents. State consists of what the agent believes to be true about the current state of the world, Goals consist of the agent's goals, and Actions consist of the agent's current plans for achieving those goals. Furthermore, each of these three components is typically explicitly represented somewhere within the memory of the PRS agent at runtime, which is in contrast to purely reactive systems, such as the subsumption architecture.
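
As a toy illustration of that State/Goal/Action arrangement (not the actual PRS code; belief/1, goal/1, ka/3, and act/1 are invented here), a PRS-style step picks a knowledge area whose context matches the current beliefs and runs its body, reconsidering on each cycle rather than committing to one complete plan up front.

    % Toy PRS-flavoured interpreter (illustrative only).
    :- dynamic belief/1.

    belief(hungry(joe)).
    goal(full(joe)).

    % ka(Goal, RequiredBeliefContext, PlanBody)
    ka(full(Agent), hungry(Agent), [take(Agent, food), eat(Agent, food)]).

    act(take(Agent, Thing)) :- assertz(belief(holding(Agent, Thing))).
    act(eat(Agent, _Thing)) :- retract(belief(hungry(Agent))),
                               assertz(belief(full(Agent))).

    prs_step :-
        goal(Goal), \+ belief(Goal),              % an unachieved goal
        ka(Goal, Context, Plan), belief(Context), % a knowledge area that applies now
        maplist(act, Plan).                       % run its body against the world

    prs_loop :- ( goal(G), \+ belief(G) -> prs_step, prs_loop ; true ).

    % ?- prs_loop, belief(full(joe)).   % succeeds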

Our PRS system is in the business of organizing these:

 
  • Actions (compound and simple) done by agents. Pseudonyms: Intentions, Plans, action primitives. Example: Taking a bite of food, Chewing food.
  • Exemplars: Objects and Structures. Pseudonyms: Types of Objects, Agents, Smells, Tastes. Example: Joe. Food. Myself. You.
  • States: properties of Exemplars. Pseudonyms: World State. Example: Joe has some food.
  • Percepts: used to detect the above. Pseudonyms: Observations. Example: Joe takes a bite of food.
  • Goals by agents. Pseudonyms: Desires. Example: Joe wants to be full.
  • Beliefs about states of how the world is presently arranged. Pseudonyms: Imaginary world. Example: Joe is hungry, Joe took a bite of food.
  • Event Frames: Narratives that may contain any or all of the above. Pseudonyms: Frames, Events, Memories. Example: (All of the above) + Joe is a person who was hungry and then he took a bite of food.
  • Explanation Narratives: Rulify the "why", the procedural nature and nuances of the above, into constituents and sequences. Pseudonyms: Text, PPLLL, English, PDDL. Example: (All of the above) + Joe is a person who was hungry, so that is why he took a bite of food.
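
For orientation, the Joe examples above could be written as Prolog terms along these lines; the functor names are placeholders, not the actual LOGICMOO vocabulary.

    % Illustrative placeholders only.
    exemplar(agent(joe)).
    exemplar(object(food)).
    state(has(joe, food)).                           % a world-state property
    percept(observes(take_bite(joe, food))).         % used to detect the above
    goal(desires(joe, full(joe))).
    belief(hungry(joe)).
    event_frame([hungry(joe), take_bite(joe, food)]).          % "and then"
    explanation(because(take_bite(joe, food), hungry(joe))).   % "so that is why"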

Psychology Section:

The programs SAM/PAM by Roger Schank were one of the first viable starts of AI. His theory may be viewed as one version of the Language of Thought hypothesis (which Schank calls 'Conceptual Dependency' theory, abbreviated as CD). Although much of his work was based on natural language "understanding", he defined, at minimum, what the tenets of understanding might look like. From this very start, opponents will use the Chinese Room argument against this language. I'll ignore this because we've agreed "Artificial" is fine when it comes to machine intelligence. Those who have seen the source code of SAM realize that it is a system whose job is to find a "best fit" among programmed patterns. What it does is create a language in which that "best fit" can exist. We see that to take that initial program really into our world, millions of facts and rules are required to be put into the system. Before we attempt to add these millions of facts and rules, we have to define a very clear meta-language (above C.D.).

"Inner self" exists in some (game?) world that is separate from the outer environment. It probably has objects and actions not defined or restricted by spatial coordinates. It probably has bio-rythemy (dictated by some biochemistry) weather like system that is controlled autonomic-ally and may even be irrelevant to the situation a self-aware being is in.   That process is an "internal dialog" like a computerized poetry or story generator simply constructing stories. Everything that the inner voice says has to be consistent and hopefully relevant to the rest of the system. The speed in which a system operates even in the real world processing is only at the speed of the internal voice.

"Self awareness" means that in order for a program to operate it must [be forced to] "observe" its execution transcript in the same language in which it interacts with it's environment. One's own thoughts and plans are just as much part of the world we live in as the outside environment. The inner environment has many cause-effect rules as the outside does of physics. We (and the program) strive for control (satisfaction of goals) of the inner world as much as the outside. One definition of "Personality" I learned in school was "The manner of skill in which a person exerts their intentions to the control of their environment'' We say a person has a well developed personality when they have found a way to make their environment (others around them) comfortable while they are satisfying their immediate goals. I believe that in order for a person to function at a high skill level here they must master and win at the games of their inner self. The concept of "inner self" is what is supposedly so hard to define for AI scientists. So before defining what "it is" we are better off implementing the framework in which an inner self could operate in. I think that C.D. representation or CycL might provide sufficient data types for whatever processor we define in this document.

"speech is a behavioral act" Also we can actually have silent speech acts called internal dialog. Usual internal dialog can be listened to. Think quietly "I can hear myself think" in your own voice. now do it in another person's voice "I can hear you think". Try to have a thought that has no voice. Now paraphrase that voiceless thought back with your own voice. My voiced version was "That chair is made out of wood" But i had to pick out something in my environment or some sensory memory that I never bothered voicing: "wow that was a salty steak last night" Perhaps you can come up with thoughts in which there are no words for. Generally with enough work you can write some sort of description in which words are used. This has led research to decide that all thoughts may be defined in speech acts. Maybe all thought is a behaviour (you are trained to do linguistics.. internal voices give us positive feedback (Pavlov comes in)). I mean from the very level of composing a lucid thought had to be done via some rules close to linguistics.

Items when the system starts out

Actions (compound and simple) done by agents.
 

 Intentions, Plans

  Action primitives
 

  • Containment relationships such as 
    • Equivalencies and Implications (physical and otherwise),
    • What is not contained in what
       

Goals by agents.
 

Desire

DesiredStates

Percepts and Exemplars are used to form:

  • Containment relationships such as 
    • Equivalencies and Implications (physical and otherwise)
    • What is not contained in what
  • Sequence relationships such as 
    • What happens Automatically
    • What happens by Choices made by Agents
    • What has never happened
    • What can't ever happen

Percepts: used to detect the above

Beliefs of Events

Observations

   Percepts and Exemplars are used to form:

  • Containment relationships such as 
    • Equivalencies and Implications (physical and otherwise)
    • What is not contained in what
  • Sequence relationships such as 
    • What happens Automatically
    • What happens by Choices made by Agents
    • What has never happened
    • What can't ever happen
Exemplars: Objects and Structures

Beliefs of Types of:

  Objects, 
  Agents, 
  Smells,
  Tastes

 Percepts and Exemplars are used to form:

  • Containment relationships such as 
    • Equivalencies and Implications (physical and otherwise)
    • Physical containership
    • What is not contained in what
  • Sequence relationships such as 
    • What happens Automatically
    • What happens by Choices made by Agents
    • What has never happened
    • What can't ever happen

States: properties of Exemplars

Beliefs of World State

Imaginary world

     Percepts and Exemplars are used to form:

  • Containment relationships such as 
    • Equivalencies and Implications (physical and otherwise)
    • What is not contained in what
  • Sequence relationships such as 
    • What happens Automatically
    • What happens by Choices made by Agents
    • What has never happened
    • What can't ever happen

Event Frames: Narratives that may contain any or all of the above

Imagination and Memories

  • Combining States, Percepts, and Event Frame Narratives
  • Combining existing ones with Explanation Narratives creates new ones

Explanation Narratives

Rulify the "why": the procedural nature and nuances of the above, into constituents and sequences

Text, PPLLL, English, PDDL: Explanations in Phrases/Words that could represent such Narrative Frames
 

Combining existing ones with Explanation Narratives creates new ones

   Such Explanations organize these things into:

  • Containment relationships such as 
    • Equivalencies and Implications (physical and otherwise)
    • What is not contained in what
  • Sequence relationships such as 
    • What happens Automatically
    • What happens by Choices made by Agents

Runtime Item Creation

  1. Defining and discovering a narrative procedure that describes how the states are put together.
    some(State)-Implies-some(State)
     
  2. Defining and discovering a narrative procedure that describes how the actions are put together.
    some(Action)-Follows-some(Action)
    some(Action)-Implies-some(State)
    some(State)-Implies-some(Action)
     
  3. Defining and discovering a narrative procedure that describes how the goals are put together
    some(Goal)-Implies-some(Goal)

     
  4. Defining and discovering a narrative procedure that describes how natural language is put together.
    some(WordClasses)-Follow-some(WordClasses)
    some(WordClasses)-Contain-some(Words)
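
A sketch of how such discovered relations might be stored as plain Prolog facts; the relation and item names are invented for illustration.

    % Illustrative sketch only: each "narrative procedure" as a relation fact.
    implies(state(raining),          state(ground_wet)).       % State-Implies-State
    follows(action(chew(food)),      action(take_bite(food))). % Action-Follows-Action
    implies(action(eat(food)),       state(not_hungry)).       % Action-Implies-State
    implies(state(hungry),           action(eat(food))).       % State-Implies-Action
    implies(goal(be_comfortable),    goal(be_sitting)).        % Goal-Implies-Goal
    follows(word_class(verb_phrase), word_class(noun_phrase)). % WordClasses-Follow-WordClasses
    contains(word_class(noun_phrase), word(food)).             % WordClasses-Contain-Words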

       Non-LOGICMOO PRSs only do a subset of the above (see the list below).
All other planning systems seem to only be in the business of:

  1. defining and discovering a narrative procedure that describes how goals, states, and actions are put together.
  2. defining and discovering ...
  3. defining and discovering a narrative procedure that describes the goal.

More assumptions

Starting with the restaurant script by Schank, we might have an inner script called "the first things we think about at the start of the day". For some of us, in order for items to make it on the list they have to first be qualified by "what is relevant for us to think about", "what do we have time to think about", "what deserves our attention" and "what things do I already think about each morning no matter what". My point is that we have definite rules (personality) which we use to keep our inner self compliant. At first this may sound like some phase of goal-based planning, but that is not the point of this paragraph; the goal is to point out that there is a sense of ontologizing our inner world just as we do on the outside. Imagine how simple it would be to write a flowchart for diagnosing why an engine won't start, and realize it'd be just as simple as picking out what the first things we need to think about at the start of the day would be. Again, not for a planner, but just to understand how we label the rules of such an enterprise. This meta-language can have vague operators such as "this is more important than that" or "I want to talk to this person" and "each day I have to put gas in the car". The reason I declare this stuff as "easy" is because if someone were to ask "why?", we'd be able to explain in some ready-made language script. The point where some things are harder to explain is when we've either formed a postulate that cannot be further simplified ("I am hungry", "chicken tastes great and I can't explain it") or when the explanation is something that came from the autonomic instant weather system, like: "it just came to my mind". Things will come to mind often because, by tradition, they just do.

In sci-fi, we like thinking androids will solve everything in life the same way they would play a game of chess. We imagine them short-circuiting when they encounter unexplainable emotions, situations or people. So is that AI useful? I won't say short-circuiting is useful, but such an AI is exactly what we all want. We want a tireless logic machine taking in the big and small picture and computing the most brilliant "act" or "hypothesis" for the moment it is in. We want to sit by its side and explain how we think and feel so that it can inherit those same behaviors. We hope to do that in English, answering many questions it has for us about the exciting new world we have brought it into. How far is that from a reality? Initially, very very far. It is important to define the types of questions we'd enjoy answering, because those are the exact ones we think "make us human."

https://lh5.googleusercontent.com/GqSYOYx4MtaCn7gIYqaLHdDfHs-Ek8xwOCHBuIPtlwzmaIJ4Vi_TxhnLgLOURy6XsTJdtDw8eIAbR71rJseBVENHEnApjdC19mIvasWJlLUqM11w3NUjCgFbekyj746bEjhlRhg

Steps (written in 2006, so they need a contemporary rewrite that is less NL-ish)

  1. Define a MUD world model in STRIPS notation using Schank's C.D. language of anything/everything that we'd like the robot to be able to do. 
    "here is how to gather wood and build a fire to achieve warmth"
    "you want warmth because it makes you feel good"
  2. Simplify these models into the most concise, featureless version possible. 
    "do X, then do Y to achieve Z" 
    "wanting Z because it makes you feel A" 
    "A is good"
  3. Extract the stop-words that are left: "do" "then" "wanting" "makes you" "feel" "is". Even: "good"
  4. decide the ontology of X,Y,Z,A
  5. write a small system to create new X,Y,Z,A's variables.
  6. Define these mbuild rules in the original way you did step 1 and repeat until you get back to this rule.
  7. Repeat the same steps 1-4 for your stop-words.
  8. save this off as a new STRIPS notation
     
  9. put your rules of legal construction of such sentences back into STRIPS form so that only valid sentences can be generated. out comes: "do sit then do sit" ..
  10. find and create ways of stopping such exceptions (make a DSL)
  11. simplify the exceptions language you created for detecting them
  12. repeat steps 1-8 on this new "exceptions language"
  13. run the sentence generator again.. When I say "sentence generator", I really mean a "rule generator".. hopefully seemingly generating a great number of rules.
  14. reduce the X,Y,Z,A into only a small set of literals and see if you can make the generator ever stop. 
    You should be able to…
  15. rewrite the generator to allow yourself to predict exactly how many rules it can produce at any given time if you haven't already done so.
  16. invent new sets of X,Y,Z,As that together make good sense. Determine what ontological basis you went by. example: GoCabin->Sitting->Comfort->Good ontologically: "chairs are comfortable and found in cabins"
  17. Again steps: 1-8.. on step 7.3: "foundIn" "is" .. remember, step 3 before had found "is".
  18. Are you creating a new language yet? or have you been reusing the same language you created the very first time? Decide that your stop-word generation should not be the same as the first time.. create new versions of "is". like "feeling_is_goal" and "goal_is_subgoal"
  19. define a program that will have generated everything you have done up to now.. including automatically forking the definition of "is"... based in a DSL. Use no more than candidate items per datatype (the limit is imposed mainly for debugging).
  20. rewrite this program now entirely in a STRIPS format that will generate exactly the kind of template you just created.
  21. use a version of a STRIPS like planner to generate the said templates.
  22. create a framework that pumps these templates into a generator system that consumes them.
  23. in the framework allow the generators to pump output into another STRIPS-like planner. Decide why the first and 2nd level planners' inputs are incompatible (due to collision?). If so, make sure collisions don't happen and they are totally separate. During this process you may have seen some capabilities. Find sane ways to leverage those compatibilities.. If none are found, worry not.
  24. figure out if you've created an optimization problem (size and scope of data). If so, find solutions shaped like a "taxonomic pairs solution". Decide that these "shapes" are in fact tenets of your language.
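
As a sketch of step 1, the fire example could be written as STRIPS-like operators held in Prolog terms; the operator and fluent names are invented, loosely following the (preconds ... postconds ...) shape used later in this document.

    % Illustrative sketch only: "here is how to gather wood and build a fire
    % to achieve warmth", as two STRIPS-like operators.
    operator(gather_wood(Agent),
             preconds([at(Agent, forest)]),
             postconds([has(Agent, wood)])).
    operator(build_fire(Agent),
             preconds([has(Agent, wood)]),
             postconds([warm(Agent)])).

    % "you want warmth because it makes you feel good"
    motivates(warm(Agent), feels(Agent, good)).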

Taking a break, but will resume the steps shortly. Much of this workflow sounds like writing a Prolog program that is domain-specific, then rewriting the program to remove the domain. In a way it very much is, except ontologizing is added the same way as required in CycL. Correct, the point of this initial bit is to flex the C.D. representation into something more semantic than what Schank initially taught. The reason he stayed away from this is that he needed to build a working NL representation based on his 7-10 primitives (which are easily anglified to an explanation (see XP)). You are doing the same, except you are designing the base primitives that have no definition other than to dictate the discourse of representation. It wasn't the solidness of the primitives that made his work easy, it was the fact that XPs (explanation patterns) make absolute sense (they are intended to do so!). You are going to make a system that cannot "think" but, in the Chinese Room sense, is stuck only transcribing things that can make sense. No matter how many random number generators are used, the system will be incapable of a non-lucid thought. "Thought?" Yes, we are building a program that is forced into pretending it is always thinking. Schank's internal representation forced it to tell detailed and lucid descriptions of scenes. The process of explaining A,B,C,D,E,F proved the listener would rather have heard steps A,B,D,E and be left to create the missing pieces in their own mind. The user became impressed; then they asked how you got from B->D, and the program this time around doesn't leave out C. I believe the dialog of the mind is a similar implementation. We have some very long thought chains but only have to deal with partial descriptions at a time. We are optimized to hide away C, and the robot would be well off to emulate that same behavior. Not yet finished explaining...

Back to some more steps...

In step 5 ("write a small system to create new X,Y,Z,A's") we were not using a dialog-based model. It would be time to explore what a dialog for this system would look like. It would also be good to next ontologize the phases of such a dialog.

Dialog phases (pre as well):

  1. A was observed in some way and has not yet been in the system.
  2. "I have recognition A.. may we discuss A?"
  3. wait for confirmation
  4. Ask initial categorizations "Is ?A an action I can do?"
    "Is ?A an object that exists in the world?"
  5. store the results of A

Convert this to a STRIPS notation: (always-rule (preconds (At ?User1) (Unknown ?ConceptA)) (postconds (stable-system) (knownAbout ?ConceptA))) …
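
The same rule could also be held as a Prolog term for a planner to match against; this rendering is hypothetical, keeping the names from the STRIPS text.

    % Illustrative rendering of the always-rule above.
    always_rule(discuss_unknown(User1, ConceptA),
                preconds([at(User1), unknown(ConceptA)]),
                postconds([stable_system, known_about(ConceptA)])).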

https://lh6.googleusercontent.com/MZcJLvIDUzJrrcDotPVuXcGoGLuhCZU43BBQRfYiFkqvzvYDMU3vtroUHfYYJok_zUp9g42LBfDuHEYmXEW1_ISt3miGBFgiXaHDdDj7ZWncQePabJXf8GbfNBYwKPG8ozfHZhM

https://lh3.googleusercontent.com/nlxBgMlzwVy7lZ410rseYg8qgEEIZ_4FI8rbzxNjB5SMWGaaar1wJBAtVXv-EE_lUC56j0-Fwl9zQsibgg2p_4Wh2yCOtNhYU4vjS9e9IM1aWw-4UED-n6L7kuvevCCR6aoqJ8c

https://lh5.googleusercontent.com/iuNgK_QEhDY7PmdXNvBiNkzJx5Lm0xF5Y97-IUlRgsRpMtIT5y-eEMkcGGjo074qvworgGMcNYIlMYwaha4dQOFmg_K6KcabYAE_RphjJzfBcQg8pcoZMgg1fRCYTDeG9qQWSuc

  • Introduction – Points and phrases that are conversational, sensational (meaning engaging the senses, not just hype), and will draw a reader in.
  • Hypothesis/Topic – Lead the introduction into my main hypothesis or point. What will this article be about (without saying “In this article, I will tell/show/teach you,” which is easily one of my Top 5 biggest pet peeves in all online writing. It’s pure laziness.)?
  • Experiment/Research Item/Fact #1 – What is the first thing I’d like to say or that I’ve learned about the hypothesis/topic? What notecards and information pieces apply here?
  • Experiment/Research Item/Fact #2 – What is the second thing I’d like to say or that I’ve learned about the hypothesis/topic? What notecards and information pieces apply here?
  • Experiment/Research Item/Fact #3 (and so on) – What is the third thing I’d like to say or that I’ve learned about the hypothesis/topic? What notecards and information pieces apply here?
  • Analysis and Theory – Based on everything from above, and my initial hypothesis and theories, what did I learn? And what do I want to share with others?
  • Conclusion – A summary (I like the phrase TL;DR, meaning “Too Long; Didn’t Read”) that offers enough commentary to tie everything from above together in a way that the reader can click off knowing they fully understand what you were saying — or for someone who skimmed, because it was so long that they didn’t fully read it, it teases and entices them to go back and actually pay attention.
Tags: LOGICMOO
     
Copyright © 2020 LOGICMOO (unless otherwise credited in page)