(% style="background-color:#ffffff; color:#091154; font-family:Arial; font-size:10.5pt; font-style:normal; font-variant:normal; font-weight:400; text-decoration:none; white-space:pre-wrap" %)We consider the essence of human intelligence to be the ability to mentally (internally) construct a world in the form of stories through interactions with external environments. Understanding the principles of this mechanism is vital for realizing a humanlike and autonomous artificial intelligence, but there are extremely complex problems involved. From this perspective, we propose a conceptual-level theory for the computational modeling of generative narrative cognition. Our basic idea can be described as follows: stories are representational elements forming an agent’s mental world and are also living objects that have the power of self-organization. In this study, we develop this idea by discussing the complexities of the internal structure of a story and the organizational structure of a mental world. In particular, we classify the principles of the self-organization of a mental world into five types of generative actions, i.e., connective, hierarchical, contextual, gathering, and adaptive. An integrative cognition is explained with these generative actions in the form of a distributed multiagent system of stories.
Events perceived in the real world (well, in [[NomicMU>>NomicMU]]) become event sequences (a Sequegen narrative).
(((
Smells/Tastes/Feels/Sounds/Visual Animations

* Start/Continue/Stop
)))
{{box cssClass="floatinginfobox" title="**Contents**"}}
{{toc/}}
{{/box}}
(((
Exemplar - Agents, Objects, and their Subparts

* Appear/Remain/Disappear
* Property Changes
* (((
Translocate
)))
)))
(((
Meaningful Conveyances

* (((
Agents make Understandable Gestures
)))
* (((
Agents Say Understandable Things
)))
* (((
Objects have words on them
)))
* (((
Radios play songs
)))
)))
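As a toy illustration, a perceived event stream of the kinds listed above might be held as a time-ordered record. This is a minimal sketch; `Event`, `Sequegen`, and all field names here are illustrative, not the project's actual API.

```python
# Hypothetical sketch: representing perceived events as a "Sequegen"
# narrative (a time-ordered event sequence). All names are illustrative.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Event:
    t: int              # discrete tick at which the event was perceived
    kind: str           # e.g. "appear", "translocate", "say", "property"
    subject: str        # the agent or object involved
    detail: str = ""    # free-form payload (utterance, destination, ...)

@dataclass
class Sequegen:
    events: list = field(default_factory=list)

    def perceive(self, event: Event) -> None:
        self.events.append(event)

seq = Sequegen()
seq.perceive(Event(1, "appear", "Joe"))
seq.perceive(Event(2, "say", "Joe", "I am hungry"))
seq.perceive(Event(3, "translocate", "Joe", "kitchen"))
print(len(seq.events))  # 3
```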
(% class="wikigeneratedid" id="HNarrativesmaybereplayedtocreateacopyofthatofthatoriginalPrologMUD" %)
Narratives may be replayed to create a copy of the original PrologMUD
(((
* (((
This PrologMUD can be thought of as a Virtual World which LOGICMOO hallucinates.
)))
* (((
This Virtual World can be re-experienced just as the previous world was (as perceived events).
)))
* (((
Those perceived events can be re-“played” to create a copy, in which LOGICMOO hallucinates yet another Virtual World of the Virtual World.
)))
* (((
This can be done indefinitely; those copies may become simpler or more complex (this will be explained later, as pipelines).
)))
)))
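A minimal sketch of the replay idea above, assuming a world is just a dictionary of subjects to states. The `replay` helper and the event shapes are hypothetical, far simpler than the real system.

```python
# Hypothetical sketch of "replaying" a narrative: applying a recorded
# event sequence to an empty world to reconstruct (hallucinate) a copy.
def replay(events, world=None):
    world = {} if world is None else dict(world)
    for kind, subject, value in events:
        if kind == "appear":
            world[subject] = value
        elif kind == "disappear":
            world.pop(subject, None)
        elif kind == "property":
            world[subject] = value
    return world

narrative = [("appear", "Joe", "standing"), ("property", "Joe", "hungry")]
copy1 = replay(narrative)   # first hallucinated world
copy2 = replay(narrative)   # a copy of the copy; this can go on indefinitely
assert copy1 == copy2 == {"Joe": "hungry"}
```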
(% class="wikigeneratedid" id="HTheeventsequences28whicharenarratives29areequivalentto201Cinternaldialog201C" %)
The event sequences (which are narratives) are equivalent to “internal dialog”
(((
* (((
“Internal dialog” may be modified and then “played back” to create Imagined Worlds.
)))
* (((
An Imagined World can be re-experienced and thus create a new set of perceived Imagined events.
)))
* (((
Those perceived Imagined events can be re-“played” to create copies of those Imagined Worlds.
)))
* (((
Those Imagined Worlds can be compared to each other; the differences can constitute a hybrid PrologMUD.
)))
)))
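The compare-and-hybridize step above could be sketched like this. The `hybrid` helper is hypothetical, and real world states would be far richer than flat dictionaries.

```python
# Hypothetical sketch: comparing two Imagined Worlds and assembling a
# hybrid world from what they share plus each side's differences.
def hybrid(world_a, world_b):
    merged = dict(world_a)          # everything in A (shared facts included)
    for key, val in world_b.items():
        if key not in merged:       # add B's differences
            merged[key] = val
    return merged

a = {"Joe": "hungry", "fire": "lit"}
b = {"Joe": "hungry", "door": "open"}
print(hybrid(a, b))  # {'Joe': 'hungry', 'fire': 'lit', 'door': 'open'}
```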
(% class="wikigeneratedid" id="HInternaldialogcanbecomparedtoothernarrativeswithoutinvolvingworldsatall." %)
Internal dialog can be compared to other narratives without involving worlds at all.
(((
* (((
The differences can be made into other narratives (thus internal dialogs).
)))
* (((
This is used to generalize, specialize or condense Internal Dialogs.
)))
* (((
It may also isolate the actions and recombine them so they are perceived as new action sequences.
)))
* (((
Those perceived action sequences can be “played” into copies of PrologMUD.
)))
)))
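A toy sketch of diffing two internal dialogs directly, where the differences themselves form a new, condensed narrative. `narrative_diff` is an illustrative name, not project code.

```python
# Hypothetical sketch: diffing two internal dialogs (narratives) without
# replaying them into worlds; the differences become a new narrative.
def narrative_diff(dialog_a, dialog_b):
    set_a, set_b = set(dialog_a), set(dialog_b)
    only_a = [line for line in dialog_a if line not in set_b]
    only_b = [line for line in dialog_b if line not in set_a]
    return only_a + only_b

a = ["Joe wakes", "Joe is hungry", "Joe eats"]
b = ["Joe wakes", "Joe is hungry", "Joe sleeps"]
print(narrative_diff(a, b))  # ['Joe eats', 'Joe sleeps']
```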
(% class="wikigeneratedid" id="HWemaycreatepipelinesbetweentheaboveelements" %)
We may create pipelines between the above elements.
(((
* (((
Those elements again are: Events, Internal Dialog, and Actions.
)))
* (((
Pipelines may combine, split and recombine these into Events/Actions and Internal Dialog,
)))
* (((
thus creating Worlds/PrologMUDs/Imagined Worlds.
)))
)))
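The pipelines above can be sketched as plain function composition, assuming each element kind (Events, Internal Dialog, Actions) is a list of strings. All names here are illustrative.

```python
# Hypothetical sketch: a "pipeline" joining the three element kinds.
# Each stage is a function; a pipeline is just their composition.
def events_to_dialog(events):
    return [f"I perceived {e}" for e in events]

def dialog_to_actions(dialog):
    return [line.replace("I perceived", "re-enact") for line in dialog]

def pipeline(*stages):
    def run(data):
        for stage in stages:
            data = stage(data)
        return data
    return run

replay_pipe = pipeline(events_to_dialog, dialog_to_actions)
print(replay_pipe(["Joe eats"]))  # ['re-enact Joe eats']
```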
(% class="wikigeneratedid" id="HIncreasinganddecreasingspecificitywithinthenarrativepipelines" %)
Increasing and decreasing specificity within the narrative pipelines

* (((
Can produce both generalized and condensed versions of Internal Dialog.
)))
* Douglas Miles claims this was integral to his solving the [[Egg Cracking Problem>>doc:Main.Psychology.MemoryAsNarrative.Egg Cracking Solution.WebHome]] (or [[The Egg Cracking problem>>]])
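Decreasing specificity could, at its simplest, mean abstracting named individuals into variables, as in this hypothetical sketch (the helper and variable naming are illustrative only):

```python
# Hypothetical sketch of generalizing an internal dialog: replace named
# individuals with variables to decrease specificity.
def generalize(dialog, names):
    out = []
    for line in dialog:
        for i, name in enumerate(names):
            line = line.replace(name, f"?agent{i}")
        out.append(line)
    return out

dialog = ["Joe cracks the egg", "Joe pours the egg into the bowl"]
print(generalize(dialog, ["Joe"]))
# ['?agent0 cracks the egg', '?agent0 pours the egg into the bowl']
```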
[[Back to Parent>>doc:Main.Developer.LOGICMOO Overview.WebHome]]
= [[GRAPHICAL PRESENTATION>>doc:Main.Technical Description.Pipeline_Presentation.WebHome]] (of this document) =
[[image:]]
=== Sequegens can be sensory ===

So far in narratives we have talked about the non-sensory.

Here is an example of a Visual Sequegen Codex:

[[image:||alt="Untitled Project ‐ Made with Clipchamp (2).gif" height="374" width="374"]]

(from Dustin Lacewell [[>>url:]])

..blah blah blah..

We've waited until now to discuss Codexes.

..blah blah blah..
= Platform =

The Goals:

Avoid scaling problems [like vision/sensor bloat] until we have artificial generalized intelligence.

Put robots in a simple MUD world but make sure the world still supports extremely complex human actions/interactions.

Use pre-canned scripts to teach the robot in that environment.

This is a step away from the type of AI that neural networks are going for: they assume that, by solving problems of the type that humans solve, the "hidden features" of the mind, such as consciousness, will emerge. We think that to model human consciousness we need to build a system that is capable of completing some of the same processes a human does.
What LOGICMOO Simulates:

A) Creation of an internal dialogical record of what happens within it.

B) Simulated "reliving" is done by replaying the internal dialog that happened while experiencing something the first time.

C) The ability to host multiple internal dialogs at once.

D) Transferring an experience is done by transferring internal dialog.

E) PrologMUD's current Percepts list is considered an internal dialog and feeds into larger processes.

This Internal Dialog is an EventCalc stream.
When the robot starts it has several built-in internal dialogs available, and it has the ability to evaluate them.

Most internal dialogs are only available in certain circumstances.

Individualized EP-Stories are "Stories of …":
* what should I be thinking if I were me? (A story about thinking)

[[image:||height="588" width="500"]]

* what should I be perceiving right now? (A story about the world situation and importance)
* what should I be feeling if I were me?
* an explanation, by not being given, preventing some event.
* the world going on without me.
* an explanation, by being given, allowing some sort of event.
* wanting to perform a task.
* recalling some story.
* remembering something difficult.
* writing instead of saying something to someone.
* writing a thank you letter to someone.
* visiting a friend or family member.
* truth changing over time.
* thinking the previous six things through.
* There is a person in that story who can afterwards continue on with the task.
* the person that accomplished their task; now the task is accomplished.
* telling someone you love them.
* starting up a conversation with a stranger.
* wanting to perform a task in order to have some event take place.
* wanting those two events to occur one after the other.
* wanting that question answered.
* drawing boxes on paper.
* beginning to do the things that they recalled they needed to do.
* showing someone a cute dog video.
* showing someone a cute cat video.
* a person getting stuck, unable to do things.
* listening to a story from someone's life.
* learning what doesn't have to be remembered and can be recreated.
* kissing someone on the cheek.
* How communications are mitigated.
* high-fiving someone.
* hearing itself do #1 (thus, a Story of Self-Awareness).
* having memories with limited access (what can't I remember?).
* giving someone Reddit Gold.
* giving someone a pleasant surprise.
* giving someone a hug.
* filling in the boxes or blanks of a form.
* donating money to a charity.
* doing a favor for someone.
* cracking a joke and making someone laugh.
* Communications (with at least one example of fail correction).
* comforting someone who is feeling down.
* A story of Combining new things.
* catching up with someone you haven't talked to in a while.
* buying a gift for someone.
* an event taking place and another event taking place following the first.
* a summary story of addition that explains that by taking two different numbers and adding them together using a prescribed method you will get an answer to the question.
* a person not wanting that.
* a person figuring out how to get unstuck.
* in which accomplishing that task was a requirement.
* why someone does or doesn't want something.
* learning how to do what someone else is doing by watching.
* explaining the wanting or not wanting of something to another person.
* (((
A Story that ties together all of the above
)))
**//Overview of operations inside LOGICMOO's bot//**

There are multiple MUDs running inside a single Virtual Robot:

MUD#1 - Space for learning from VirtualTrainer#1

MUD#2 - The bot's vantage point in a simplistically imagined world

MUD#3 - The MUD that real players play in.

BOT#1 - Bot in MUD#1 that is tethered to a VirtualTrainer

BOT#2 - Bot in MUD#2 that records information from BOT#1

BOT#3 - Bot in MUD#3, the Virtual Robot with actual humans

We designed MUD#1 to transfer its logic to MUD#2 without too much fuss.
We tethered BOT#1 to a VirtualTrainer in MUD#1 so that wherever the VirtualTrainer goes, the bot (LM489) goes. LM489 bots have an empty MUD, called MUD#2, which serves as their imagination. MUD#1 is the playable PrologMUD. MUD#2 is a version of PrologMUD that is not playable; it only requires "sequences of percepts" and creates minorly stateful objects. Each time a "sequence" happens in MUD#1, MUD#2 records it as a "valid mud sequence". The idea here is that MUD#2 is slowly "programmed" as interaction in MUD#1 takes place. This will lead to many misunderstood but possibly "useful simplifications".
CAS-AM:
Recap of what I’m guessing is happening: The NARRATION COMPONENT. We tether a bot to a human player in MUD#1 (the shared game world), so the bot is always with the player to create and document the map in MUD#2. It records the map and important events, but not in much detail.
* (((
Maybe we need LM489 (the AGI), the Bot (who interacts and lives with players), and the bots (which live inside the Bot and do unconscious things).
)))
* (((
In a world of spies, you have the spies themselves (they monitor things and sometimes do a small job) (ex: Narration Component), their supervisor contact (who decides when they should do important actions) (also Narration Component), their Boss (who told them the goal for the political arena) (???), and that supervisor's boss, The Director (LM489). The lower-level agents don't really make those big calls (the plot of every spy movie is a lower-level agent trying to act like a higher-level agent?). It's a hive mind - and you're trying to describe the jobs of a hivemind. And this is the part that tries to make sense of sensory input.
)))
(% class="wikigeneratedid" id="HInotherwords2Cthe22sequences22veryquicklyconstitute22validMUDsequences2228VMSes29inMUD232.A0MUD232isextremelypermissiveandallowseverysequenceithasseentomagicallytakeplace.Afterall2Citisnotthereality2CitistheimaginaryworldforLM489.A028note2CMUD232dataexpiresratherquickly29.TheideaiseventuallyLM489willattempttomodelwhatworkedinMUD232insideofMUD231whichmostlyshouldcauseerrorsbecauseMUD231followsactualrules.Whenerrorstakeplace2CA0LM489hastocorrectthesethings.A0LM489hassomecannedcorrectiondialogs28CCDs29builtinalreadythatareprogrammed2FvettedintermsofwhatalreadyisknowntoworkinbothMUDs.A028CCDsthemselvesareNarrationPLLs29TheideahereisthatCCDswilltrainLM489tointeractandbecomeanexpertatusingMUD231.Beforegettingourwhatnotsinabunch2CthepointhereA0istonotemulateanythingatallclosetotherealworld.A0Orevenreal-worldtypelearningbuttoensureA0wehaveatleasta22transfermodel22A0fortransferringbitsandpiecesbetweenuninformedmind-slicesthatconstitutethemake-upofLM48927stotalmindareas.Theseseveralsimpleton27seachwhomeachhaveonly5Bmostlyunshared5Dsub-slicesofMUD232." %)
In other words, the "sequences" very quickly constitute "valid MUD sequences" (VMSes) in MUD#2. MUD#2 is extremely permissive and allows every sequence it has seen to magically take place. After all, it is not the reality; it is the imaginary world for LM489. (Note: MUD#2 data expires rather quickly.) The idea is that eventually LM489 will attempt to model what worked in MUD#2 inside of MUD#1, which mostly should cause errors because MUD#1 follows actual rules. When errors take place, LM489 has to correct these things. LM489 has some canned correction dialogs (CCDs) built in already that are programmed/vetted in terms of what is already known to work in both MUDs. (CCDs themselves are Narration[[**__PLL__**>>url:]]s.) The idea here is that CCDs will train LM489 to interact and become an expert at using MUD#1. Before getting our whatnots in a bunch, the point here **is to not emulate anything at all close to the real world**, or even real-world type learning, but to ensure **we have at least a "transfer model"** for transferring bits and pieces between uninformed mind-slices that constitute the make-up of LM489's total mind areas. These are several simpletons, each of whom has only [mostly unshared] sub-slices of MUD#2.
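A minimal sketch of MUD#2's permissiveness, assuming a VMS is just a tuple of action strings with a short time-to-live. Class and method names are illustrative, not project code.

```python
# Hypothetical sketch of MUD#2: it records every sequence it has seen as
# a Valid MUD Sequence (VMS) and will replay any of them "magically",
# with entries expiring quickly.
import time

class ImaginaryMud:
    TTL = 60.0  # seconds; MUD#2 data expires rather quickly

    def __init__(self):
        self._vmses = {}  # sequence -> time recorded

    def observe(self, sequence):
        self._vmses[tuple(sequence)] = time.monotonic()

    def allows(self, sequence):
        seen = self._vmses.get(tuple(sequence))
        return seen is not None and time.monotonic() - seen < self.TTL

mud2 = ImaginaryMud()
mud2.observe(["take food", "eat food"])
assert mud2.allows(["take food", "eat food"])      # anything seen is valid
assert not mud2.allows(["eat food", "take food"])  # never observed
```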
* (((
CAS-AM recap of what I’m guessing is happening: MUD#2 records the map and the world’s “valid mud sequences” (VMSes) without judgment. This data is being saved and serialized into PLL. When the two different MUDs are compared there will be incongruencies, and there need to be, as LM489 will seek to correct these errors in its internal world. LM489 has some “canned correction dialogs” (CCDs) built in already that are programmed/vetted in terms of what is already known to work in both MUDs. (CCDs themselves are Narration [[**PLL**>>url:]]s.) Each player-connected bot is actually made of many smaller bots who work in unison, with each tiny bot tasked with successfully completing a relatively simple action (like navigating or an act like eating). (I’m imagining this like making 9-year-olds do the daily activities of congress; it's not great, but to an alien it would look equivalent. Then you can get a review board of a higher level to decide what the worst mistake was.) Our only goal above was to prove to our project we can get our transfers written in [[**PLL**>>url:]], and to test that the PLLs, when broken, have CCDs that mitigate the repairs. (This repair process is a developmental milestone.)
)))
== The bot's next goal is to convert as many VMSes (Valid MUD Sequences) as possible into VLSes (Valid Language Sequences) ==
Next we up our game by having a human give a description of the VMS while it is happening. This means that as the human moves around and acts in MUD#1 (from which MUD#1 sends percepts), the human announces what they want to do: "I am going to eat because I am hungry." Then they perform actions in MUD#1: "take food from the backpack. eat the food. I am no longer hungry." LM489 sees the food appear in the human's hand. LM489 makes food appear in a human's hand in MUD#2. LM489 sees the eating act. LM489 hears that the human is no longer hungry. (MUD#2 replicates the narrative speaking of the human as it is going on, as well as all the changes.) LM489 already has a model that allows it to replace the human with anybody in MUD#2. Sound like more canned stuff? Yes, we are still cheating. What we are doing in this phase, though, is ensuring that we can have Canned Monkey Scripts (CMSes), since we will have several //a-priori// CMSes (CMSes are again NarrationPLLs) which are called proto-memories; these are analogous to the mechanism used in animals that allows them to store very simple behaviours like walking. The reason we adorn these with announcements like "I am going to eat because I am hungry" is that we are creating an infrastructure that operates from an "internal dialog" and not from any other seedlings.
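One way to picture a CMS as a data structure, with the spoken intent and outcome wrapped around the action sequence. This is a hedged sketch; the field names are not the project's actual representation.

```python
# Hypothetical sketch of a Canned Monkey Script (CMS): a VMS adorned with
# a spoken intent and outcome, so the whole unit reads as internal dialog.
from dataclasses import dataclass

@dataclass
class CannedMonkeyScript:
    intent: str        # announced before acting
    actions: tuple     # the MUD#1 action sequence
    outcome: str       # announced after acting

eat = CannedMonkeyScript(
    intent="I am going to eat because I am hungry",
    actions=("take food from the backpack", "eat the food"),
    outcome="I am no longer hungry",
)

# Played back, the CMS is narrated as one internal dialog:
narration = [eat.intent, *eat.actions, eat.outcome]
print(len(narration))  # 4
```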
1. So far we have described a system that can look at MUD#1 and fully transfer the low-level description into the MUD#2 rule base from three types of PLLs:
\\VMS - Valid MUD Sequences (observed in MUD#1 and transferred to MUD#2)
1. CMS - Canned Monkey Scripts (VMSes that have spoken intents and outcomes)
1. CCD - Canned Correction Dialogs (when a MUD#2 VMS won't transfer correctly back to MUD#1 we use CCDs to correct it)
1. Example:

1. "I expected $A to work, but it didn't; may we discuss $A?"
1. Wait for confirmation...
1. Ask initial categorizations: "Is $A an action I can do?"
1. Store the results of $A. Convert this to STRIPS notation: (always-rule (preconds (At ?User1) (Unknown ?ConceptA)) (postconds (stable-system) (knownAbout ?ConceptA~)~)~) ...
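A minimal Python reading of the always-rule above, with preconditions and postconditions held as literal tuples and a ground-only applicability check. This mirrors the example rule only; it is not any official STRIPS library.

```python
# Toy holder for the STRIPS-style always-rule from the example above.
always_rule = {
    "preconds": [("At", "?User1"), ("Unknown", "?ConceptA")],
    "postconds": [("stable-system",), ("knownAbout", "?ConceptA")],
}

def applicable(rule, facts):
    # Ground check only: every precondition literal must be in `facts`.
    # A real planner would unify the ?-variables against the fact base.
    return all(p in facts for p in rule["preconds"])

facts = {("At", "?User1"), ("Unknown", "?ConceptA")}
assert applicable(always_rule, facts)
assert not applicable(always_rule, {("At", "?User1")})
```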
Before getting too deep we will attempt to summarize the rest of the non-technical overview:

How VMSes are combined.
CMSes' spoken intents/outcomes are added to narrations; these become VMSes now called VLSes (Valid Language Sequences).
How CCDs are adapted to work for VLSes (Valid Language Sequences).
----

This correction process happens by having semi-canned dialogs with a ...
= **//Non-Technical Overview//** =

==== The restriction of using limited resources ====

A user application, when defined, provides the PRS system with a set of knowledge areas. Each knowledge area is a piece of procedural knowledge that specifies how to do something, e.g., how to navigate down a corridor or how to plan a path (in contrast with robotic architectures where the programmer just provides a model of what the states of the world are and how the agent's primitive actions affect them). Such a program, together with a PRS interpreter, is used to control an agent.
\\An interpreter is responsible for maintaining current beliefs about the world state, choosing which goals to attempt to achieve next, and choosing which knowledge area to apply in the current situation. How exactly these operations are performed might depend on domain-specific meta-level knowledge areas. Unlike traditional AI planning systems that generate a complete plan at the beginning, and replan if unexpected things happen, PRS interleaves planning and doing actions in the world. At any point, the system might have only a partially specified plan for the future.
\\PRS is based on the State, Goal, Action framework for intelligent agents. State consists of what the agent believes to be true about the current state of the world, Goals consist of the agent's goals, and Actions consist of the agent's current plans for achieving those goals. Furthermore, each of these three components is typically explicitly represented somewhere within the memory of the PRS agent at runtime, which is in contrast to purely reactive systems, such as the subsumption architecture.
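A toy sketch of one PRS interpreter cycle as described above: pick a goal, pick a relevant knowledge area (KA), act, repeat. This is grossly simplified and all names are illustrative; real PRS interleaves far more machinery.

```python
# Toy PRS cycle: beliefs are a set of facts, goals a list, and each
# knowledge area (KA) declares what it achieves, when it is relevant,
# and a body that acts on the beliefs.
def prs_step(beliefs, goals, knowledge_areas):
    for goal in goals:
        for ka in knowledge_areas:
            if ka["achieves"] == goal and ka["relevant"](beliefs):
                ka["body"](beliefs)          # act in the world
                if goal in beliefs:          # goal now believed achieved
                    goals.remove(goal)
                return goal
    return None

beliefs = {"at_corridor"}
goals = ["at_room"]
kas = [{
    "achieves": "at_room",
    "relevant": lambda b: "at_corridor" in b,
    "body": lambda b: b.update({"at_room"}),   # navigate down the corridor
}]
prs_step(beliefs, goals, kas)
print(goals)  # []
```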
Our PRS system is in the business of organizing these:

| |(((
===== Pseudonyms =====
)))|
|**Actions** (compound and simple) done by agents.|Intentions, Plans, action primitives|Taking a bite of food, Chewing food
|**Exemplars** Objects and Structures|(((
Types of
Objects, Agents, Smells, Tastes
)))|Joe. Food. Myself. You.
|**States** properties of Exemplars|World State|Joe has some food.
|**Percepts** are used to detect the above|Observations|Joe takes a bite of food
|**Goals** by agents.|Desires|Joe wants to be full
|**Beliefs** about states of how the world is presently arranged|Imaginary world|Joe is hungry, Joe took a bite of food
|**Event Frame Narratives** that may contain any or all of the above|(((
Frames, Events,

Memories
)))|(((
(All of the above) + Joe is a person who was hungry

**and then**

he took a bite of food
)))
|(((
**Explanation Narratives**

Rulify the why: the procedural nature and nuances of the above, into containments and sequences
)))|Text, PPLLL, English, PDDL|(((
(All of the above) + Joe is a person who was hungry

**so that is why**

he took a bite of food
)))
**Psychology Section:**
\\The programs SAM/PAM by Roger Schank were among the first viable starts of AI. His theory may be viewed as one version of the Language of Thought hypothesis (which Schank calls 'Conceptual Dependency' theory, abbreviated as CD). Although much of his work was based on natural language "understanding," he defined, at minimum, what the tenets of understanding might look like. From this very start opponents will use the Chinese Room argument against this language. I'll ignore this because we've agreed "artificial" is fine when it comes to machine intelligence. Those who have seen the source code of SAM realize that it is a system whose job is to find a "best fit" on programmed patterns. What it does is create a language in which that "best fit" can exist. We see that to take that initial program really into our world, millions of facts and rules are required to be put into the system. Before we attempt to add these millions of facts and rules we have to define a very clear meta-language (above C.D.).
The "inner self" exists in some (game?) world that is separate from the outer environment. It probably has objects and actions not defined or restricted by spatial coordinates. It probably has a bio-rhythmic, weather-like system (dictated by some biochemistry) that is controlled autonomically and may even be irrelevant to the situation a self-aware being is in. That process is an "internal dialog", like a computerized poetry or story generator simply constructing stories. Everything that the inner voice says has to be consistent and hopefully relevant to the rest of the system. The speed at which a system operates, even in real-world processing, is only the speed of the internal voice.
"Self awareness" means that in order for a program to operate it must [be forced to] "observe" its execution transcript in the same language in which it interacts with its environment. One's own thoughts and plans are just as much part of the world we live in as the outside environment. The inner environment has as many cause-effect rules as the outside does of physics. We (and the program) strive for control (satisfaction of goals) of the inner world as much as the outside. One definition of "personality" I learned in school was "the manner of skill in which a person exerts their intentions to the control of their environment". We say a person has a well-developed personality when they have found a way to make their environment (others around them) comfortable while they are satisfying their immediate goals. I believe that in order for a person to function at a high skill level here they must master and win at the games of their inner self. The concept of "inner self" is what is supposedly so hard to define for AI scientists. So before defining what "it is," we are better off implementing the framework in which an inner self could operate. I think that C.D. representation or CycL might provide sufficient data types for whatever processor we define in this document.
"Speech is a behavioral act." We can also have silent speech acts, called internal dialog. Usually internal dialog can be listened to. Think quietly "I can hear myself think" in your own voice. Now do it in another person's voice: "I can hear you think". Try to have a thought that has no voice. Now paraphrase that voiceless thought back with your own voice. My voiced version was "That chair is made out of wood", but I had to pick out something in my environment or some sensory memory that I never bothered voicing: "wow, that was a salty steak last night". Perhaps you can come up with thoughts for which there are no words. Generally, with enough work, you can write some sort of description in which words are used. This has led research to decide that all thoughts may be defined as speech acts. Maybe all thought is a behaviour (you are trained to do linguistics; internal voices give us positive feedback (Pavlov comes in)). I mean that even composing a lucid thought has to be done via some rules close to those of linguistics.
==== Items when the system starts out ====
|(((
**Actions** (compound and simple) done by agents.

//Intentions, Plans//

//Action primitives//
)))|(((
* Containment relationships such as
** Equivalencies and Implications (physical and otherwise)
** What is not contained in what
)))
|(((
**Goals** by agents.

//Desires//

//DesiredStates//
)))|(((
Percepts and Exemplars are used to form:

* Containment relationships such as
** Equivalencies and Implications (physical and otherwise)
** What is not contained in what
* Sequence relationships such as
** What happens automatically
** What happens by choices made by agents
** What has never happened
** What can't ever happen
)))
|(((
**Percepts** are used to detect the above

//Beliefs of Events//
)))|(((
//Observations//

Percepts and Exemplars are used to form:

* Containment relationships such as
** Equivalencies and Implications (physical and otherwise)
** What is not contained in what
* Sequence relationships such as
** What happens automatically
** What happens by choices made by agents
** What has never happened
** What can't ever happen
)))
|(((
**Exemplars** Objects and Structures

//Beliefs of Types of: Objects, Agents, Smells, Tastes//
)))|(((
//Types of: Objects, Agents, Smells, Tastes//

Percepts and Exemplars are used to form:

* Containment relationships such as
** Equivalencies and Implications (physical and otherwise)
** Physical containership
** What is not contained in what
* Sequence relationships such as
** What happens automatically
** What happens by choices made by agents
** What has never happened
** What can't ever happen
)))
|(((
**States** properties of Exemplars

//Beliefs of World State//

//Imaginary world//
)))|(((
//World State / Imaginary world//

Percepts and Exemplars are used to form:

* Containment relationships such as
** Equivalencies and Implications (physical and otherwise)
** What is not contained in what
* Sequence relationships such as
** What happens automatically
** What happens by choices made by agents
** What has never happened
** What can't ever happen
)))
|**Event Frame Narratives** that may contain any or all of the above|(((
//Imagination and Memories//

* Combining States, Percepts, and Event Frame Narratives
* Combining existing ones with Explanation Narratives creates new ones
)))
|(((
**Explanation Narratives**

Rulify the why: the procedural nature and nuances of the above, into containments and sequences
)))|(((
//Text, PPLLL, English, PDDL Explanations in Phrases/Words//
//that could represent such Narrative Frames//

Combining existing ones with Explanation Narratives creates new ones.

Such Explanations organize these things into:

* Containment relationships such as
** Equivalencies and Implications (physical and otherwise)
** What is not contained in what
* Sequence relationships such as
** What happens automatically
** What happens by choices made by agents
)))
==== Runtime Item Creation ====

1. Defining and discovering a narrative procedure that describes **how the states are put together.**
some(State)-Implies-some(State)
1. Defining and discovering a narrative procedure that describes **how the actions are put together.**
some(Action)-Follows-some(Action)
some(Action)-Implies-some(State)
some(State)-Implies-some(Action)
1. Defining and discovering a narrative procedure that describes **how the goals are put together.**
some(Goal)-Implies-some(Goal)
\\
1. Defining and discovering a narrative procedure that describes **how natural language is put together.**
some(WordClasses)-Follow-some(WordClasses)
some(WordClasses)-Contain-some(Words)
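The relation schemas above can be tabulated as typed triples, for example (a hypothetical sketch; the table and helper are illustrative only):

```python
# The narrative-procedure relation schemas listed above, as typed triples.
RELATIONS = [
    ("State",       "Implies", "State"),
    ("Action",      "Follows", "Action"),
    ("Action",      "Implies", "State"),
    ("State",       "Implies", "Action"),
    ("Goal",        "Implies", "Goal"),
    ("WordClasses", "Follow",  "WordClasses"),
    ("WordClasses", "Contain", "Words"),
]

def relations_for(kind):
    # every schema in which the given item kind participates
    return [r for r in RELATIONS if kind in (r[0], r[2])]

print(len(relations_for("Action")))  # 3
```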
Non-LOGICMOO PRSs only do a subset of the above (see the list below).
All other planning systems seem to be in the business only of:

1. defining and discovering a narrative procedure that describes **how goals, states, and actions are put together.**
1. defining and discovering a narrative procedure that describes the goal.
= **More assumptions** =

Starting with the restaurant script by Schank, we might have an inner script called "the first things we think about at the start of the day". For some of us, in order for items to make it onto the list they have to first be qualified by "what is relevant for us to think about", "what do we have time to think about", "what deserves our attention" and "what things do I already think about each morning no matter what". My point is that we have definite rules (personality) which we use to keep our inner self compliant. At first this may sound like some phase of goal-based planning, but that is not the point of this paragraph; the goal is to point out that there is a sense of ontologizing our inner world just as on the outside. Imagine how simple it would be to write a flowchart for diagnosing why an engine won't start, and realize it'd be just as simple to pick out what the first things we need to think about at the start of the day would be. Again, not for a planner, but just to understand how we label the rules of such an enterprise. This meta-language can have vague operators such as "this is more important than that" or "I want to talk to this person" and "each day I have to put gas in the car". The reason I declare this stuff "easy" is that if someone were to ask "why?", we'd be able to explain in some ready-made language script. The point where some things are harder to explain is when we've either formed a postulate that cannot be further simplified ("I am hungry"; "chicken tastes great and I can't explain it") or when the explanation is something that came from the autonomic instant weather system, like: "it just came to my mind". Things will come to mind often because, by tradition, they just do.
540 (% class="wikigeneratedid" id="HInSci-fi2Cwelikethinkingandroidswillsolveeverythinginlifethesamewaytheywouldplayagameofchess.Weimaginethemshortcircuitingwhentheyencounterunexplainableemotions2Csituationsorpeople.SoisthatAIuseful3FIwontsayshortcircuitingisusefulbutsaysuchanAIisexactlywhatweallwant.Wewantatirelesslogicmachinetakinginthebigandsmallpictureandcomputingthemostbrilliant22act22or22hypothesis22forthemomentthatitisin.Wewanttositbyit27ssideandexplainhowwethinkandfeelsothatitcaninheritthosesamebehaviors.WehopetodothatinEnglish.Answeringmanyquestionsithasforusabouttheexcitingnewworldwehavebroughtitinto.Howfaristhatfromareality3FInitiallyveryveryfar.Itisimportanttodefinethetypesofquestionswe27denjoyansweringbecausethosearetheexactoneswethink22makeushuman.22" %)
In sci-fi, we like thinking androids will solve everything in life the same way they would play a game of chess. We imagine them short-circuiting when they encounter unexplainable emotions, situations, or people. So, is that AI useful? I won't say short-circuiting is useful, but such an AI is exactly what we all want. We want a tireless logic machine taking in the big and small picture and computing the most brilliant "act" or "hypothesis" for the moment it is in. We want to sit by its side and explain how we think and feel so that it can inherit those same behaviors. We hope to do that in English, answering the many questions it has for us about the exciting new world we have brought it into. How far is that from reality? Initially, very far. It is important to define the types of questions we'd enjoy answering, because those are exactly the ones we think "make us human."
[[image:||height="486" width="564"]]
= **Steps //(wrote these steps in 2006, so they need a contemporary rewrite that is less NL-ish)//** =
1. Define a MUD world model in STRIPS notation, using Schank's C.D. language, of anything/everything that we'd like the robot to be able to do.
"here is how to gather wood and build a fire to achieve warmth"
"you want warmth because it makes you feel good"
1. Simplify these models into the most concise, featureless version possible.
"do X, then do Y to achieve Z"
"wanting Z because it makes you feel A"
"A is good"
1. Extract the stop-words that are left: "do" "then" "wanting" "makes you" "feel" "is". Even: "good".
1. Decide the ontology of X, Y, Z, A.
1. Write a small system to create new X, Y, Z, A variables.
1. Define these **mbuild** rules in the original way you did step 1, and repeat until you get back to this rule.
1. Repeat the same steps 1-4 for your stop-words.
1. Save this off as a new STRIPS notation.
1. Put your rules for the legal construction of such sentences back into STRIPS form so that only valid sentences can be generated. Out comes: "do sit then do sit"...
1. Find and create ways of stopping such exceptions (make a DSL).
1. Simplify the exceptions language you created for detecting them.
1. Repeat steps 1-8 on this new "exceptions language".
1. Run the sentence generator again. When I say "sentence generator", I really mean a "rule generator", hopefully **seemingly** generating a great number of rules.
1. Reduce the X, Y, Z, A into only a small set of literals and see if you can ever make the generator stop.
You should be able to…
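Making the generator stop is just combinatorics: once X, Y, Z, and A are reduced to small finite literal sets and each template slot takes exactly one literal, the rule space is the product of the set sizes, so enumeration must terminate. A minimal sketch, with hypothetical literal sets:

```python
from itertools import product

# Hypothetical literal sets for the X, Y, Z, A slots.
LITERALS = {
    "X": ["gather_wood", "sit"],
    "Y": ["build_fire", "sit"],
    "Z": ["warmth", "comfort"],
    "A": ["good"],
}

def generate_rules():
    """Enumerate every instantiation of the 'do X, then do Y to achieve Z' template."""
    for x, y, z, a in product(*(LITERALS[s] for s in ["X", "Y", "Z", "A"])):
        yield f"do {x}, then do {y} to achieve {z}; wanting {z} because it makes you feel {a}"

rules = list(generate_rules())

# The generator provably stops after |X| * |Y| * |Z| * |A| combinations,
# which also lets you predict exactly how many rules it can produce.
expected = 1
for s in LITERALS.values():
    expected *= len(s)
print(len(rules), expected)  # 8 8
```

This also yields step 15 for free: the predicted count is the product of the slot sizes, computable before running the generator at all.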
1. Rewrite the generator to allow yourself to predict exactly how many rules it can produce at any given time, if you haven't already done so.
1. Invent new sets of X, Y, Z, A that together make good sense. Determine what ontological basis you went by. Example: GoCabin->Sitting->Comfort->Good; ontologically: "chairs are comfortable and found in cabins".
1. Again, steps 1-8. On step 7.3: "foundIn" "is". Remember, step 3 before had already found "is".
1. Are you creating a new language yet, or have you been reusing the same language you created the very first time? Decide that your stop-word generation should not be the same as the first time: create new versions of "is", like "feeling_is_goal" and "goal_is_subgoal".
1. Define a program that will have generated everything you have done up to now, including automatically forking the definition of "is", based on a DSL. Use no more than a limited number of candidate items per datatype (the limit is imposed mainly for debugging).
1. Rewrite this program now entirely in a STRIPS format that will generate exactly the kind of template you just created.
1. Use a version of a STRIPS-like planner to generate the said templates.
1. Create a framework that pumps these templates into a generator system that consumes them.
1. In the framework, allow the generators to pump output into another STRIPS-like planner. Decide why the first- and second-level planners' inputs are incompatible (due to collision?). If so, make sure collisions can't happen and the two are totally separate. During this process you may have seen some compatibilities; find sane ways to leverage them. If none is found, worry not.
1. Figure out whether you've created an optimization problem (size and scope of data). If so, find solutions shaped like a "taxonomic pairs solution". Decide that these "shapes" are in fact tenets of your language.
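The framework in the last few steps can be caricatured in a few lines: a naive forward-chaining "planner" applies STRIPS-style rules, level-1 output becomes level-2 input, and predicate namespacing keeps the two levels' vocabularies from colliding. Everything here (rule names, predicates, the prefixes) is a hypothetical sketch of the idea, not the actual system:

```python
# Hypothetical two-level pipeline sketch. A rule is (name, preconds, postconds)
# over sets of facts; plan() is naive forward chaining to a fixed point.

def plan(state, rules):
    """Apply every applicable rule until nothing new is derivable."""
    state = set(state)
    changed = True
    while changed:
        changed = False
        for name, pre, post in rules:
            if pre <= state and not post <= state:
                state |= post
                changed = True
    return state

# Level 1 generates templates (facts tagged "tpl:").
level1 = [
    ("make_fire_tpl", {"need:warmth"}, {"tpl:do_gather_then_burn"}),
]
# Level 2 consumes templates. The "need:"/"tpl:"/"act:" prefixes act as
# namespaces, so the two planners' vocabularies cannot collide.
level2 = [
    ("expand_tpl", {"tpl:do_gather_then_burn"},
     {"act:gather_wood", "act:build_fire"}),
]

s1 = plan({"need:warmth"}, level1)   # level-1 output...
s2 = plan(s1, level2)                # ...pumped into the level-2 planner
print(sorted(s2))
```

The namespace prefixes are one "sane way" to guarantee the separation the step asks for: a level-1 rule can never accidentally fire on level-2 facts.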
Taking a break, but will resume the steps shortly. Much of this workflow sounds like writing a Prolog program that is domain-specific, then rewriting the program to remove the domain. In a way it very much is, except that ontologizing is added in the same way as required in CycL. Correct: the point of this initial bit is to flex the C.D. representation into something more semantic than what Schank initially taught. The reason he stayed away from this is that he needed to build a working NL representation based on his 7-10 primitives (which are easily anglified into an explanation; see XP). You are doing the same, except you are designing base primitives that have no definition other than to dictate the discourse of representation. It wasn't the solidness of the primitives that made his work easy; it was the fact that XPs (explanation patterns) make absolute sense (they are intended to do so!). You are going to make a system that cannot "think" but, in the Chinese-room sense, is stuck only transcribing things that can make sense. No matter how many random number generators are used, the system will be **incapable** of a non-lucid thought. "Thought?" Yes, we are building a program that is forced into pretending it is always thinking. Schank's internal representation forced it to tell detailed and lucid descriptions of scenes. The process of explaining A, B, C, D, E, F proved the listener would rather have heard steps A, B, D, E and been left to construct the missing pieces in their own mind. The user became impressed; then, when they asked "how did you get from B to D?", the program this time around doesn't leave out C. I believe the dialog of the mind is a similar implementation. We have some very long thought chains but only have to deal with partial descriptions at a time. We are optimized to hide away C, and the robot would be well off to emulate that same behavior. **not yet finished explaining...**
Back to some more steps...
In step 5, "write a small system to create new X, Y, Z, A's", we were not using a dialog-based model. It would be time to explore what a dialog for this system would look like. It would also be good to next ontologize the phases of such a dialog.
Dialog phases (pre as well):
1. A was observed in some way and has not yet been in the system.
1. "I have recognition of A. May we discuss A?"
1. Wait for confirmation.
1. Ask initial categorizations: "Is ?A an action I can do?"
//"Is ?A an object that exists in the world?"//
1. Store the results of A.
Convert this to a STRIPS notation: (always-rule (preconds (At ?User1) (Unknown ?ConceptA)) (postconds (stable-system) (knownAbout ?ConceptA~)~)~) …
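The phases can also be sketched procedurally as a tiny script that walks an unknown concept A from observation to storage. This is a minimal illustration with a simulated user; the function and phase names are hypothetical, not the actual dialog engine:

```python
# Hypothetical sketch of the dialog phases for acquiring an unknown concept A.
def run_dialog(concept, answer_fn):
    """Walk a concept through the phases; answer_fn simulates the user's replies."""
    log = []
    log.append(f"observed {concept}")                                  # phase 1
    log.append(f"I have recognition of {concept}. May we discuss {concept}?")  # phase 2
    if not answer_fn("permission"):                                    # phase 3
        return log  # user declined; stop before categorizing
    log.append(f"Is {concept} an action I can do?")                    # phase 4
    log.append(f"Is {concept} an object that exists in the world?")
    is_action = answer_fn("is_action")
    is_object = answer_fn("is_object")
    log.append(f"stored {concept}: action={is_action}, object={is_object}")  # phase 5
    return log

# Simulated user: grants permission, says A is an object, not an action.
answers = {"permission": True, "is_action": False, "is_object": True}
log = run_dialog("A", lambda q: answers[q])
print(log[-1])  # stored A: action=False, object=True
```

Note how the confirmation gate (phase 3) corresponds to the precondition side of the always-rule above: no categorization or storage happens until the system is cleared to discuss the concept.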
[[image:||height="470" width="412"]]

[[image:||height="828" width="640"]]

[[image:||height="655" width="814"]]
* **Introduction** – Points and phrases that are conversational, sensational (meaning engaging the senses, not just hype), and will draw a reader in.
* **Hypothesis/Topic** – Lead the introduction into my main hypothesis or point. What will this article be about (without saying "In this article, I will tell/show/teach you," which is easily one of my top 5 biggest pet peeves in all online writing; it's pure laziness)?
* **Experiment/Research Item/Fact #1** – What is the first thing I'd like to say, or that I've learned, about the hypothesis/topic? What notecards and information pieces apply here?
* **Experiment/Research Item/Fact #2** – What is the second thing I'd like to say, or that I've learned, about the hypothesis/topic? What notecards and information pieces apply here?
* **Experiment/Research Item/Fact #3 (and so on)** – What is the third thing I'd like to say, or that I've learned, about the hypothesis/topic? What notecards and information pieces apply here?
* **Analysis and Theory** – Based on everything above, and my initial hypothesis and theories, what did I learn? And what do I want to share with others?
* **Conclusion** – A summary (I like the phrase //TL;DR//, meaning "Too Long; Didn't Read") that offers enough commentary to tie everything above together, so the reader can click off knowing they fully understand what you were saying; or, for someone who skimmed because it was so long, it teases and entices them to go back and actually pay attention.
Copyright © 2020 LOGICMOO (unless otherwise credited in page)