Primary consciousness is a term the American biologist Gerald Edelman coined to describe the ability, found in humans and some animals, to integrate observed events with memory to create an awareness of the present and immediate past of the world around them. This form of consciousness is also sometimes called "sensory consciousness". Put another way, primary consciousness is the presence of various subjective sensory contents of consciousness such as sensations, perceptions, and mental images. For example, primary consciousness includes a person's experience of the blueness of the ocean, a bird's song, and the feeling of pain. Thus, primary consciousness refers to being mentally aware of things in the world in the present without any sense of past and future; it is composed of mental images bound to a time around the measurable present.[1]

Conversely, higher order consciousness can be described as being "conscious of being conscious"; it includes reflective thought, a concept of the past, and speculation about the future.

Primary consciousness can be subdivided into two forms, focal awareness and peripheral awareness. Focal awareness encompasses the center of attention, whereas peripheral awareness consists of things outside the center of attention, which a person or animal is only dimly aware of.[2]

Symbolic vs Connectionist Camps: within the symbolic AI camp, it is fair to place both TNT (Theory of Narrative Thought) and LOTH (Language of Thought Hypothesis).

Secondary consciousness is an individual's accessibility to their history and plans. The ability allows its possessors to go beyond the limits of the remembered present of primary consciousness.[1] Primary consciousness can be defined as simple awareness that includes perception and emotion. As such, it is ascribed to most animals. By contrast, secondary consciousness depends on and includes such features as self-reflective awareness, abstract thinking, volition and metacognition.[1][2] The term was coined by Gerald Edelman.

"Metastrategic knowledge" (MSK) is a sub-component of metacognition that is defined as general knowledge about higher order thinking strategies. MSK has been defined as "general knowledge about the cognitive procedures that are being manipulated". The knowledge involved in MSK consists of "making generalizations and drawing rules regarding a thinking strategy" and of "naming" the thinking strategy.[58]

The important conscious act of a metastrategic strategy is the "conscious" awareness that one is performing a form of higher order thinking. MSK is an awareness of the type of thinking strategies being used in specific instances, and it consists of the following abilities: making generalizations and drawing rules regarding a thinking strategy; naming the thinking strategy; explaining when, why, and how such a thinking strategy should be used and when it should not be used; and identifying the disadvantages of not using an appropriate strategy and the task characteristics that call for its use.[59]

MSK deals with the broader picture of the conceptual problem. It creates rules to describe and understand the physical world around the people who utilize these processes, called higher-order thinking. This is the capability of the individual to take apart complex problems in order to understand their components. These are the building blocks for understanding the "big picture" (of the main problem) through reflection and problem solving.[60]

Abstract symbol, symbol, logical entity, entity: used as synonyms throughout this document.

Behavior: what the system tries to achieve and what it tries to avoid. A usable system must have stable and predictable behavior (note the difference between behavior and skills).

Experience: what was done and what happened; the content of the narrative sequence.

Skills: known ways to achieve a desired outcome. Skills can improve as the system learns (note the difference between skills and behavior).

Generality: an attribute of the AI system that implies the ability to use the same core system for different system internal dialogs. A high intelligence level is optional (in contrast to a strong AI).

State: a situation represented by the tail of a narrative sequence. A state can be either an atomic entity or a concept that denotes a discovered sequence of narratives that has occurred at least twice. The state is similar to the concept of context.

Modus Operandi: one of several ways to calculate criteria for comparing the consequences of actions at the decision-making moment. For example, different criteria can be used for safe and dangerous environments.
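The definition of a state as the tail of the narrative sequence, given above, can be sketched in a few lines. This is a minimal illustration only; the class name and the tail length are assumptions, not part of LOGICMOO:

```python
from collections import deque

class NarrativeState:
    """A 'state' in the glossary sense: the tail (most recent entries)
    of the narrative sequence, bounded to a fixed length."""
    def __init__(self, tail_length=4):
        self.tail = deque(maxlen=tail_length)  # old entries fall off the front

    def push(self, narrative):
        self.tail.append(narrative)

    def as_key(self):
        # Two states are 'the same situation' when their tails match.
        return tuple(self.tail)

s = NarrativeState(tail_length=3)
for event in ["wake", "eat", "walk", "eat"]:
    s.push(event)
print(s.as_key())  # ('eat', 'walk', 'eat')
```

Because the tail is hashable, it can serve directly as a lookup key for contexts, which is what makes "state" and "context" nearly interchangeable here.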

"SAM/PAM" The programs SAM and PAM, and later SWALE, by Roger Schank were among the first viable starts of narrative AI. His theory may be viewed as one version of the Language of Thought hypothesis (Schank calls his version 'Conceptual Dependency' theory, abbreviated CD). Although much of his work was based on natural language "understanding", he defined, at a minimum, what the tenets of understanding might look like. From the very start opponents will use the Chinese Room argument against this language; we will ignore this because we've agreed "artificial" is fine when it comes to machine intelligence. Those who have seen the source code of SAM realize that it is a system whose job is to find a "best fit" among programmed patterns. What it does is create a language in which a "best fit" can exist. To take that initial program into our world, millions of facts and rules would have to be put into the system. Before we attempt to add those millions of facts and rules, we have to define a very clear meta-language (above CD).

Language of Thought Hypothesis: The LOTH, sometimes known as thought ordered mental expression (TOME),[2] is a view in linguistics, philosophy of mind, and cognitive science, forwarded by American philosopher Jerry Fodor. It describes the nature of thought as possessing "language-like" or compositional structure (sometimes known as mentalese). On this view, simple concepts combine in systematic ways (akin to the rules of grammar in language) to build thoughts. In its most basic form, the theory states that thought, like language, has syntax.
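The compositionality claim can be illustrated with a few lines of code: simple concepts combine by a grammar-like rule into structured thoughts, and the same parts recombine systematically. All names below are illustrative and not drawn from Fodor's or Schank's formalisms:

```python
# A 'mentalese' expression: atomic concepts combined by a fixed rule.
def thought(agent, relation, patient):
    """Build a structured thought from atomic concepts."""
    return (relation, agent, patient)

LOVES = "LOVES"
JOHN, MARY = "JOHN", "MARY"

t1 = thought(JOHN, LOVES, MARY)
t2 = thought(MARY, LOVES, JOHN)

# Systematicity: the ability to form t1 implies the ability to form t2,
# because both are built from the same parts by the same rule.
print(t1)        # ('LOVES', 'JOHN', 'MARY')
print(t1 != t2)  # True: structure, not just the set of parts, carries meaning
```

The point of the sketch is that "John loves Mary" and "Mary loves John" share a vocabulary but differ in syntax, which is exactly the property LOTH attributes to thought.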

"speech is a behavioral act" This is true, but we can also have silent speech acts called internal dialog. Typical internal dialog can be listened to: think "I can hear myself think" quietly in your own voice; now do it in another person's voice, "I can hear you think". Try to have a thought that has no voice, then paraphrase that voiceless thought back with your own voice. My voiced version was "That chair is made out of wood", but I had to pick out something in my environment or some sensory memory that I never bothered voicing: "wow, that was a salty steak last night!" Perhaps you can come up with some thoughts for which there are no words. Generally, with enough work, you can write some sort of description of these thoughts in which words are used. This has led some researchers to conclude that all thoughts may be defined as speech acts. Maybe all thought is a behavior (we are trained to do linguistics; internal voices give us positive feedback, which is where Pavlov comes in), in the sense that even composing a lucid thought has to be done via rules close to those of linguistics. ***(Throw in some references here?)***

"Inner self" exists in some (game?) world that is separate from the outer environment. It probably has objects and actions not defined or restricted by spatial coordinates. It probably has a biorhythmic, weather-like system (dictated by some biochemistry) that is controlled autonomically and may even be irrelevant to the situation a self-aware being is in. That process is an "internal dialog", like a computerized poetry or story generator simply constructing stories. Everything that the inner voice says has to be consistent and, hopefully, relevant to the rest of the system. The speed at which a system operates, even in real-world processing, is only the speed of the internal voice.

"Self awareness" means that in order for a program to operate it must [be forced to] "observe" its execution transcript in the same language in which it interacts with its environment. One's own thoughts and plans are just as much part of the world we live in as the outside environment. The inner environment has as many cause-and-effect rules as the outside world has of physics. We (and the program) strive for control (satisfaction of goals) of the inner world as much as of the outside. One definition of "personality" I learned in school was "the manner of skill in which a person exerts their intentions to the control of their environment". We say a person has a well-developed personality when they have found a way to make their environment (others around them) comfortable while satisfying their immediate goals. I believe that in order for a person to function at a high skill level here, they must master and win at the games of their inner self. The concept of "inner self" is what is supposedly so hard to define for AI scientists. So before defining what "it is", we are better off implementing the framework in which an inner self could operate. I think that CD representation or CycL might provide sufficient data types for whatever processor we define in this document.

"Internal monologue" An inner discourse, or internal discourse, is a constructive act of the human mind and a tool for discovering new knowledge and making decisions. Along with feelings such as joy, anger, fear, etc., and sensory awareness, it is one of the few aspects of the processing of information and other mental activities of which humans can be directly aware. Inner discourse is so prominent in the human awareness of mental functioning that it may often seem to be synonymous with "mind". The view is then that "mind" means "what one experiences when thinking things out", and that "thinking things out" is believed to consist only of the words heard in internal discourse. This common-sense idea of the mind must either block out the fact that the mind is constantly processing all kinds of information below the level of awareness or else rename that activity to some putatively "non-mental" status such as "reflex reaction" or even, sometimes, "demon possession".

An inner discourse takes place much as would a discussion with a second person. One might think, "I need $27 for the paperboy. I have some cash in my wallet. Ten plus ten plus five... I have $25. Maybe I dropped coins in the sofa. Ah, here they are..." The ideal form of inner discourse would seem to be one that starts with statements about matters of fact and proceeds with logical rigor until a solution is achieved.

On this view of thinking, progress toward better thinking is made when one learns how to evaluate how well "statements of fact" are actually grounded, and when one learns how to avoid logical errors. But one must also take account of questions like why one is seeking a solution (Why do I want to contribute money to this charity?), and why one may keep getting results that turn out to be biased in fairly consistent patterns (Why do I never give to charities that benefit a certain group?).

MUD-World: a MUD-command or a MUD-Percept (embodied or virtual) that accepts a request in symbolic form and produces a response in symbolic form. A finite predefined set of abstract symbols (an alphabet) is used for data exchange between the MUD-World and the LOGICMOO core.

Narrative sequence, history sequence in LOGICMOO: a time sequence of narratives where each narrative is an atomic entity or a concept that represents some sequence of narratives that occurred twice or more. The narrative sequence represents temporal relations between entities; non-temporal relations are stored in the directed memory buffer. It contains the symbols used for data exchange between the MUD-World and the LOGICMOO core.
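A minimal sketch of this symbolic interface follows: symbols drawn from a fixed alphabet are exchanged with the core and appended to a time-ordered sequence, so temporal relations live in the ordering itself. The alphabet contents and class name are illustrative assumptions:

```python
# Illustrative fixed alphabet shared by the MUD-World and the core.
ALPHABET = {"north", "south", "wall", "open", "ok", "fail"}

class NarrativeMemory:
    """Time-ordered store of exchanged symbols; order encodes temporal relations."""
    def __init__(self):
        self.sequence = []

    def record(self, symbol):
        if symbol not in ALPHABET:
            raise ValueError(f"symbol {symbol!r} not in the MUD alphabet")
        self.sequence.append(symbol)

mem = NarrativeMemory()
for sym in ["north", "wall", "south", "open"]:
    mem.record(sym)
print(mem.sequence)  # ['north', 'wall', 'south', 'open']
```

Rejecting out-of-alphabet symbols mirrors the "finite predefined set" constraint: the core never has to interpret an unbounded vocabulary from any one MUD-World.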

LOGICMOO is a multiple-coding theory with a "representationalist" cognitive architecture. LOGICMOO uses a system similar to Dual Coding Theory, but unlike traditional dual-coding theories it is non-associative. It is similar in complexity to other cognitive architectures such as CLARION, ACT-R, Soar, and SHRDLU.

We don't keep our focus on encoding an apple to the word apple, but rather on encoding behavioral sequences to the mind forms that create them. LOGICMOO is designed to arrange the operation scripts and dual-code several parallel operation scripts together.

LOGICMOO is a cognitive architecture that uses several multi-coding agents.


The main challenge of AI system development is achieving expected system activity that matches the system internal dialog and avoids unwanted action. Obviously, regardless of details, the system must have a module that is able to evaluate how well the situation matches the system internal dialog and detect unwanted or prohibited situations. Safety and reliability of the system dictate the immutability of such a unit. The behavioral unit is actually an artificial version of the natural feeling system that indirectly defines system behavior; that is, the system “feels content” when the narrative flow matches the system internal dialog and “feels terrible” in case of an unwanted situation.

Overall LOGICMOO system power and its specific behavior depend on the total length of the remembered narrative sequence, the size of the narrative subsequences used in the analysis of possible future narratives, the design of the instincts subsystem, and the generalization and forgetting CD-scripts.

The evaluation of possible future situations at a decision-making moment is a way to compare variants of action. The traditional approach to the decision making is to construct a plan that leads to the desired state. In case of a complex task, intermediate goals are used as a way to reduce a complex problem to a set of simpler problems. However, when the system is internal dialog driven, there is no terminal point. Using a short-term goal as the measure of success is a poor approach because such a decision could easily lead to a worse position for larger future goals. This suggests that forecasting the consequences of such actions is an essential function of adult humans and likely occurs at each step of planning.

In our system the consequence forecast is based on accumulated experience (including possible preloaded experience, which we call narrative). The forecast maps each available action to a set of possible consequences using the collected experience. Applying the forecasting procedure recursively to each possible new future state produces a forecast tree of reachable states. Each possible future state can be evaluated by the behavioral unit. The current forecast tree provides information for decision making.
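The recursive forecast described above can be sketched as follows, assuming experience is stored as a map from (state, action) pairs to the next states actually observed; the state names, actions, and depth bound are all illustrative:

```python
from collections import defaultdict

# Collected experience: (state, action) -> set of observed next states.
experience = defaultdict(set)
for s, a, s2 in [("A", "go", "B"), ("A", "go", "C"), ("B", "go", "D")]:
    experience[(s, a)].add(s2)

def forecast_tree(state, actions, depth):
    """Map each action to its observed consequences, expanded recursively
    to a bounded depth; each leaf is a reachable future state."""
    if depth == 0:
        return {}
    tree = {}
    for a in actions:
        branches = {}
        for nxt in sorted(experience.get((state, a), ())):
            branches[nxt] = forecast_tree(nxt, actions, depth - 1)
        tree[a] = branches
    return tree

tree = forecast_tree("A", ["go"], depth=2)
print(tree)  # {'go': {'B': {'go': {'D': {}}}, 'C': {'go': {}}}}
```

Each node of the resulting tree is a future state that the behavioral unit could evaluate; the depth bound stands in for how far ahead the system analyzes possible narratives.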

To effectively use available memory resources, LOGICMOO implements a compact narrative representation thanks to generalization. Repeated (occurring at least twice) subsequences of the narrative sequence are detected and a new corresponding concept is created; all occurrences of the subsequence are replaced by the newly created concept. Such restructuring activity is permanent (but can be performed at times when the system is less loaded by external narrative processing).
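As a rough sketch of one generalization step: find a pair of adjacent narratives that occurs at least twice, mint a new concept for it, and rewrite the sequence. A real implementation would search subsequences of many lengths; restricting to pairs, and the concept name "C1", are assumptions made for brevity:

```python
def compress_once(seq, concept_name):
    """Replace the first adjacent pair occurring twice or more with concept_name."""
    pairs = list(zip(seq, seq[1:]))
    for pair in pairs:
        if pairs.count(pair) >= 2:
            out, i = [], 0
            while i < len(seq):
                if tuple(seq[i:i + 2]) == pair:
                    out.append(concept_name)  # substitute the new concept
                    i += 2
                else:
                    out.append(seq[i])
                    i += 1
            return out, pair
    return seq, None  # nothing repeated: sequence unchanged

seq = ["wake", "eat", "walk", "wake", "eat", "sleep"]
new_seq, concept = compress_once(seq, "C1")
print(new_seq)  # ['C1', 'walk', 'C1', 'sleep']
print(concept)  # ('wake', 'eat')
```

Applying this repeatedly yields a hierarchy of concepts, which is the compact representation the text describes.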

Shortly after the LOGICMOO system is started, the whole available memory will be utilized (despite permanent generalization), so newly added logical concepts must replace some currently stored ones. A CD-script for forgetting less usable concepts must be implemented. A usability value associated with each logical entity reflects how frequently the entity has been used, how long it has been out of use, and so on; entities with the smallest usability value (or below some threshold) are forgotten, and the related system resources can be re-utilized.
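A minimal sketch of such a forgetting CD-script follows. The usability formula (use count discounted by idle time) and the threshold are illustrative assumptions; the text only requires that frequency of use raise the value and idleness lower it:

```python
def usability(use_count, steps_since_use):
    """Illustrative usability value: frequent, recent use scores high."""
    return use_count / (1 + steps_since_use)

def forget(entities, threshold):
    """entities: {name: (use_count, steps_since_use)} -> names that survive."""
    return {name for name, (uses, idle) in entities.items()
            if usability(uses, idle) >= threshold}

entities = {"C1": (10, 1),   # used often, used recently -> keep
            "C2": (2, 20),   # rarely used, long idle    -> forget
            "C3": (5, 4)}
print(sorted(forget(entities, threshold=0.5)))  # ['C1', 'C3']
```

Resources held by the forgotten entities (here, just `C2`) are what the system would then re-utilize for new concepts.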

Therefore, the LOGICMOO narrative memory buffer is continually modified: new narratives are continually pushed into the narrative sequence, newly discovered repeated fragments produce new concepts, and less usable entities are discarded.

The traditional approach to decision making is to formulate a goal and then perform a TASK defined by the goal. (That is, to seek a way to reach the goal and, once it is found, act accordingly.) This approach works well for narrow, specialized AI if all possible goals are members of a predefined set, and the activity ends when the goal is reached (or the goal is recognized as unreachable) until the next task is requested (the most obvious example being any game).

Since a general AI should act autonomously, similar to a human or an animal, its activity should be continuous, in which case a goal assigned by a human becomes optional. The presented system internal dialog is a generalization of the goal concept. The internal dialog is a source for decision making during a continuous flow of activity. In this sense, LOGICMOO is similar to a classic control system (such as a temperature regulator or a cruise control) that has an internal dialog but no tasks, only repetitive decision making about what to do at each moment. The internal dialog can have parameters that depend on the current situation and on human instructions. The internal dialog of the LOGICMOO system should be defined in some way and requires an appropriate set of MUD-Worlds. Directives issued by a human or other external source fall into a few categories:

A command is directly mapped to an equivalent request to some MUD-World (such as ‘turn the light off’) and is part of the introspection mechanism. The set of commands includes the red-button command that turns the system off.

A task is similar to the goal of a narrow AI and is represented by a sequence of commands and sub-tasks (for example, ‘go home’). In the case of a previously unknown task, a solution can be found using both traditional approaches and help from a trusted source (‘master’) that can suggest how to reduce the unknown task to a sequence of actions and known subtasks.

An activity mode issued by the trusted source sets system internal dialog parameters (for example, ‘move north-west’).

Note that the lack of external directives does not result in deactivation of the autonomous system. In such a case, the system acts on its own.

Future prediction 

LOGICMOO uses the narrative sequence to predict a possible future. If the narrative sequence contains subsequences that match the current tail of the sequence, then a set of possible future narratives can be constructed. The system does this without losing the current state of things or the current narratives.

Such a set generally contains variants of future narratives that started from some system action; such variants consist of three parts: the narratives that preceded the action (the same for all variants), the performed action, and the subsequent narratives.

A collection of such variants can be checked for correlation between the performed action and its consequences; when a correlation is detected, the most appropriate action can be selected for execution.

If the “best choice action” is stable for a particular situation, then this choice can be transformed into an explicit rule; this is a mechanism of conditional reflex.
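This promotion of a stable best choice into an explicit rule can be sketched as follows; the promotion threshold, situation names, and data structures are illustrative assumptions:

```python
from collections import Counter, defaultdict

history = defaultdict(Counter)   # situation -> counts of chosen actions
reflexes = {}                    # situation -> fixed action (the explicit rule)

def choose(situation, best_action, promote_after=3):
    """Return the action to execute, promoting a stable choice to a reflex."""
    if situation in reflexes:
        return reflexes[situation]          # reflex fires; no deliberation
    history[situation][best_action] += 1
    if history[situation][best_action] >= promote_after:
        reflexes[situation] = best_action   # stable choice becomes a rule
    return best_action

for _ in range(4):
    choose("hot-surface", "withdraw")
print(reflexes)  # {'hot-surface': 'withdraw'}
```

Once the rule exists, the situation bypasses forecasting entirely, which is what makes the reflex cheaper than deliberation.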

Such a mechanism provides the basis for self-learning and adaptivity. It works identically for the external world and LOGICMOO’s PrologMUD subsystem. PrologMUD is actually part of the outer world for the LOGICMOO “brain”, so the LOGICMOO system’s adaptivity and self-learning are equally usable when the environment changes and when PrologMUD changes.

A set of future narratives starts from some system action or inaction; its variants consist of three parts: the narratives that preceded the action (the past), the performed action, and the subsequent narrative expectations. The future narrative with the best outcome will be attempted.

The future narrative experience can then be checked for congruence between the performed action and the expected consequences; when a correlation is detected, the action is noted as an appropriate action. If this “best choice action” is stable for a particular situation, then the choice can be transformed into an explicit rule; this is a mechanism of conditional reflex. Such a mechanism provides the basis for self-learning and adaptivity.

Many AI approaches use the most probable consequence as the basis for a decision, and therefore less probable variants are simply ignored. This methodology has two drawbacks:

Consequences with a low probability of happening may be more important than others with higher probabilities (a consequence that leads to a prohibited state is not acceptable despite its relatively low probability)

The only source for probability estimation is the collected experience. However, such experience depends on decisions made in the past.

To avoid these drawbacks, the current implementation does not use probability estimation. All possible consequences are treated as having unknown probabilities, so the only assumption used is that “if something has happened once it may happen again”, along with the general idea of cause and effect. We also provide as much context as we can, and by making the system dialog-based and comfortable working in English, it can learn from the stories it encounters.

Decision making uses a specific forecast in choosing a specific action, where each available action is mapped to a set of possible consequences. Each consequence is evaluated by the behavioral unit. Such evaluation produces a sequence of results that can be either empty (an unknown) or contain a single item.

The action selection requires comparing the sequences of expected results. This means that despite the well-ordered scalar results (from worst to best), a few different criteria can be used to define which result sequence is the most preferable. Below are examples of possible criteria for selecting the most preferable action:

Action that maximizes the best possible future state

Action that minimizes the worst possible future state

Action with an empty forecast that guarantees the obtaining of new experience
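The three criteria above can be sketched over a forecast that maps each action to the evaluated outcomes of its possible consequences, with an empty list standing for "no experience". The action names and outcome values are illustrative:

```python
forecast = {
    "advance": [3, -2],   # could go well or badly
    "wait":    [1, 0],    # safe but unspectacular
    "probe":   [],        # no experience: outcome unknown
}

def maximax(fc):   # maximize the best possible future state
    return max((a for a in fc if fc[a]), key=lambda a: max(fc[a]))

def maximin(fc):   # minimize the worst possible future state
    return max((a for a in fc if fc[a]), key=lambda a: min(fc[a]))

def curious(fc):   # prefer an empty forecast: guaranteed new experience
    return next((a for a in fc if not fc[a]), None)

print(maximax(forecast))  # 'advance'
print(maximin(forecast))  # 'wait'
print(curious(forecast))  # 'probe'
```

Each criterion picks a different action from the same forecast, which is why the choice of criterion (the modus operandi) shapes the system's observable behavior.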

Consequently, depending on the current situation (mood) and the system life cycle, a few modi operandi with different selection criteria can be used.

For example, the early stages of the system's life may be dedicated to collecting experience in a special safe environment using an exploratory modus operandi, while a conservative modus operandi is intended for a less safe environment.

Selection of the modus operandi is a function of the behavioral unit. The modus operandi also can be set using the introspection interface.

The main control loop consists of two phases: collecting information about the current state using a list of recorded MUD-Perceptions, and performing an appropriate action using MUD-commands.
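A minimal sketch of this two-phase loop follows, with the perceive/decide/act callables as illustrative stand-ins for the MUD-Perception and MUD-command machinery:

```python
def control_loop(perceive, decide, act, steps):
    """Repeat: phase 1, collect the current state; phase 2, issue a command."""
    log = []
    for _ in range(steps):
        state = perceive()        # phase 1: gather MUD-Perceptions
        command = decide(state)   # choose a command (forecast, reflex, ...)
        log.append((state, command))
        act(command)              # phase 2: issue the MUD-command
    return log

# Toy world: a position that the 'east' command increments.
world = {"pos": 0}
log = control_loop(
    perceive=lambda: world["pos"],
    decide=lambda s: "east" if s < 2 else "rest",
    act=lambda c: world.__setitem__("pos", world["pos"] + (c == "east")),
    steps=3,
)
print(log)  # [(0, 'east'), (1, 'east'), (2, 'rest')]
```

Keeping the log as part of the loop matters here, because the surrounding text requires the transcript itself to be recorded as a narrative the system can later reference.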

The system should be able to work like this before we add any additional narratives:

Monitoring the process will use a web interface that displays the current map of the world (or MUD-Perceptions), information about the current goals and the chosen action with its expected consequences, and statistical data about the process (such as the total number of generated concepts, the percentage of each result for the last N steps, and so on), all of which will be recorded at several levels as a narrative. The system will reference this narrative in the same way as all the other narratives it contains. This will confirm the ability to achieve autonomous learning from boot-up. The most effective autonomous learning will likely be achieved by using the “curious” modus operandi in the initial stage in a safe environment and then switching to the “venture” modus operandi after the system accumulates a sufficient amount of narrative about the world.


Explorative behavior

The mechanism described above can only work if the system has had a few different variants of performed actions for a particular situation in the past. To provide such variability, the LOGICMOO system must sometimes perform a randomly selected action instead of the “best choice action” (some restrictions must be provided to protect the system from obvious failure). This is in fact predetermined experimentation, which provides a basis for behavior improvement (self-learning). A predetermined choice can also be used in cases when a few different actions are recognized as equally good.

To compare possible variants of future narratives, the LOGICMOO system requires some criteria. These criteria are provided by an extension of the MUD-Percept subsystem using “generalized Percepts”, which reflect the current system state as a whole and conform to human instincts. Some of them are based on MUD-Percept data (low battery = “hungry”); the rest are history-based (dangerous/safe situation, ordinary/unusual situation, and so on).

A hierarchical collection of instincts is used to reduce the overall system state (using a bottom-up CD-script) to a single value on a scale from “recognized” to “unimagined”. This generalized criterion is used to select a preferred variant of future narratives and the corresponding action to be executed. As is known, such reduction to a single criterion is arbitrary by nature; different sets of instincts and different convolution CD-scripts produce different versions of LOGICMOO behavior.
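Such a bottom-up convolution can be sketched as a fold over a tree of instinct values. The tree shape and the averaging convolution are illustrative assumptions; as the text notes, swapping in a different convolution would yield a different behavioral variant:

```python
def reduce_instincts(node):
    """Fold a tree of instinct readings into one scalar.
    A node is either a numeric leaf (one instinct's reading on the
    'recognized'-to-'unimagined' scale) or a list of child nodes,
    here folded by averaging (the illustrative convolution)."""
    if isinstance(node, (int, float)):
        return float(node)
    children = [reduce_instincts(c) for c in node]
    return sum(children) / len(children)

# Leaves might be readings like: low battery ('hungry'), safe history,
# unusual-situation detector, and so on; values here are arbitrary.
instinct_tree = [[0.2, 0.8], [1.0, [0.5, 0.5]]]
print(reduce_instincts(instinct_tree))  # 0.625
```

Replacing the average with, say, a minimum (a "worst instinct dominates" convolution) would produce a markedly more cautious version of the same system, which is exactly the behavioral variation the paragraph describes.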

Tags: Reference
Copyright © 2020 LOGICMOO (Unless otherwise credited in page)