Sequegenic Messages Create Consciousness & Self-Awareness

  • Sequegen is an Event Calculus language used to produce imagination[44], which happens in the Chamber of Qualia inside NomicMU Theaters
     
  • Chambers may still use Sequegens to attach to "NomicMU Theaters" and ImagenBuffers of other Chambers
  • Sequegens contain resources that allow them to be used creatively to create new ideas (cf. Ricœur [45])
  • Having a memory means Sequegens are loaded into Chamber #1
     
  • Each NomicMU Theater has a link to at least one ImagenBuffer
     
  • Creatures like humans think they exist as a Consciousness due to the bio-feedback of being able to control the emitting of sequegens into the "Qualia Chamber"
     
  • We perceive sensory reality through the process of emitting sequegens into a virtualized ImagenBuffer and comparing them to the actual content coming into a real SensoryBuffer.  (This solves the primary concern with modeling a consciousness: the possibility that one could never record enough sensory experiences and actions to produce brain emulation [Gemmell et al., 2004].)
    • Novel sensory inputs create new Imagen tokens to be used in the Sequegen language
       
  • Consciousness feels real due to a Chamber witnessing an ImagenBuffer.

The above means the LOGICMOO system should be able to represent not just inner monologue, but also relate it back and forth to unsymbolized thinking and perhaps visual imagery/inner seeing, by adding elements of sensory awareness into the system design using the NomicMU world!
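To make the emit-and-compare loop above concrete, here is a minimal sketch in Prolog (the implementation language used elsewhere on this page). Every predicate name here (emit_sequegen/1, imagen_buffer/1, sensory_buffer/1, learn_novel_token/1) is an illustrative assumption, not the actual LOGICMOO API:

     :- dynamic imagen_buffer/1, sensory_buffer/1.

     % Emitting a sequegen renders it into the virtualized ImagenBuffer.
     emit_sequegen(Sequegen) :-
         render_imagen(Sequegen, Imagen),
         assertz(imagen_buffer(Imagen)).

     % Toy rendering: a sequegen narrates its imagen directly.
     render_imagen(seq(Narrative), imagen(Narrative)).

     % Perceiving = comparing predicted imagens against the real SensoryBuffer.
     perceive(match(I)) :- imagen_buffer(I), sensory_buffer(I).
     perceive(novel(S)) :- sensory_buffer(S), \+ imagen_buffer(S).

     % A novel sensory input mints a new Imagen token for the Sequegen language.
     learn_novel_token(S) :- perceive(novel(S)), assertz(imagen_buffer(S)).

For instance, after emit_sequegen(seq(ball_rolls)) and a matching sensory fact, perceive(R) returns match(imagen(ball_rolls)); an unpredicted input instead comes back as novel(...) and can be tokenized via learn_novel_token/1.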

Questioner:   Why can a brain scan detect what choice people will make 10 seconds before they themselves are aware of having made a decision?

LOGICMOO:  The brain constructs a sense-making [prime] narrative (even before intentionality gets involved), and when it reaches the subject’s awareness a few seconds later, the subject will claim that “they” made their decision consciously.

Questioner:  But “they” did make it?

LOGICMOO:  Well, you can think of the mind as having several chambers.  One chamber (we will call it the 1st) a subject will identify as being “themselves”, but they have a second chamber that is doing most of their thinking for them.  The 1st chamber does not have access to all the same information the 2nd chamber does.  The 2nd chamber presents a narrative to the 1st chamber, and that causes the 1st chamber to imagine the details given in that narrative decision.  The 1st chamber gets the details that led to the decision loaded into short-term (imagined) memory, so the 1st chamber is able to explain those details.

Questioner:  “ 2nd chamber presents..” ?

LOGICMOO:  The mind’s chambers have an internal, non-spoken, language that is made up of time-sequenced logogen-like messages we call “sequegens”. 

Questioner:  “Logogen-like sequegens”?

LOGICMOO:  Logogens are a term originating from a theory called “Dual Coding Theory” (DCT), which hypothesizes the brain uses two separate neurological codings: imagens (for images) and logogens (for audible verbal words).  In TOLM we completely disagree with that premise, as DCT was an overly simple connectionist attempt at cognitive linguistics, but one might study it to understand what we will mean when we talk about “neural coding” of Sequegens.

Sequegens are the building blocks of TOLM, so I will try to briefly describe them:

The brain processes information by communicating within itself using a serialized narrative form of thought (rather than using connectionist neural weights).  Sequences are transmitted between parts, similar to a bunch of kids playing “the telephone game”, where each child may make changes, adding or removing parts from the original narrative.  In other words, sequegens arrive at certain parts of the brain that buffer the narratives into imaginary scenegens.  Next, a couple of things can happen:

  • The scenegens retrigger (get re-described) and are sent back out again as new Sequegens.
  • The scenes trigger annotators that immediately feed more Imagens back onto the original Imagen.
  • More sequegens get fed outward and back, and the imagen is altered again.

Many neurons are simply echoing, passing on, the sequegens that they receive.
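Here is a rough Prolog sketch of this buffer-and-retrigger loop (all predicate names are illustrative, not code from the system): a sequegen arrives, is buffered into a scenegen, annotators feed imagens back onto it, and the altered scene is re-described as a new outgoing sequegen:

     :- dynamic scenegen/2.

     receive_sequegen(Chamber, seq(Items)) :-
         buffer_scene(Chamber, Items),      % buffer the narrative as a scenegen
         annotate(Chamber),                 % annotators feed imagens back onto it
         redescribe(Chamber, NewSequegen),  % the altered scene is re-described
         forward(NewSequegen).              % echoing neurons pass it onward

     buffer_scene(Chamber, Items) :-
         retractall(scenegen(Chamber, _)),
         assertz(scenegen(Chamber, Items)).

     annotate(Chamber) :-
         retract(scenegen(Chamber, Items)),
         findall(imagen(I), member(I, Items), Annotations),
         append(Items, Annotations, Altered),
         assertz(scenegen(Chamber, Altered)).

     redescribe(Chamber, seq(Items)) :- scenegen(Chamber, Items).

     forward(Sequegen) :- format("echoing onward: ~w~n", [Sequegen]).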

Sequegens are like little notes that are passed around like food particles between ants.  (Later I’ll explain how most of these ants can read these notes and form their own opinions.. and, more interestingly, when an ant gets bored later it will begin gossiping to its neighbors about the contents.)  This process happens over and over.

  When animals (and people) are born they have 1000s (sometimes millions) of precoded Sequegen messages.  During gestation many sequegens are already being sent through the loop system I described above, actually generating new scenegens.

Questioner:  Can you give some examples of these Sequegens?

LOGICMOO:  Suckling, moving one's arms, and most initial unconscious movements, such as the ones that make you breathe or make your heart beat, etc.

Rudimentary manipulation of objects, independent movement of limbs like for walking,

Mating, social grooming, building a spider web or a dam, etc., depending on the genes they originate from.

Sequegens are distributed throughout the body.  The pre-coded sequegens that regulate an animal's digestion don't have to be pre-coded in the brain.
 

There are two types of sequegens: precoded sequegens and non-precoded sequegens.  For example, there are literally sequegens that cause learning (such as the type of learning in accommodation theory), but what is learned is copied and then annotated to actually implement what has been learned.

There are narrative precoded-sequegen programs (in TOLM these are called “proto-narratives”) that, when executed in the brain, help write better narratives.

Human language is interesting since it was an evolved hack (in a speech center) to intercept and externalize various sequegen interactions.  It can be used to inject new sequegens in order to alter beliefs.  Btw, this is not some learned behavior; we literally have a program, a sequegen, that makes us start trying to talk.


The whole hypothesis (TOLM) began as a thought experiment

It began when we tried to implement  behaviours that we see in bees.   

The problem:   

Mathematically, it takes a trillion times a trillion more weights to codify an “associative”/“connectionist” program that could bootstrap any of the behaviours we see in bees.

Even if such a program did exist, widely accepted physics makes it impossible: there is not enough physical heat or neuro-chemical bandwidth to store/access behaviours in an entirely connectionist implementation in a honeybee.

Those that still want to believe in a connectionist view have started to assume there must be subatomic processing.  They keep imagining a smaller and smaller set of weighted storage.  When they crunched their own numbers, they realized they need more than just subatomic storage: their theory needs quantum processing/storage!

It’s reminiscent of the Greeks climbing Mount Olympus to meet their gods: they got a little higher up each year, then claimed their gods probably lived just a bit higher than they had climbed.  Connectionist scientists are playing the same game: their gods get more microscopic each year.

They still do behaviorist research to show that “associations” exist, so they can keep up their faith that they will no longer need to study cognitive science and can focus only on mathematical probability.

Perhaps some year they will have a fast enough computer, or a good enough microscope, to prove their god exists?

Questioner: That's quite a conspiracy theory you have there! Where did this come from?

LOGICMOO (Douglas Miles):  I think we were just so smart we tricked ourselves once again into believing in a flat earth.  Well, maybe this one is worse.  This time it’s the “spontaneous generation of human qualia”, “right around X number of neurons”/“weighted sums”.  We wondered whether “spontaneous consciousness” was really going to pop out of the jar full of Neural Nets.

Questioner: Okay, so how did Douglas Miles come up with this?
LOGICMOO (Douglas Miles):  I had not gotten any degrees in school but spent 3 years (18 hours a day) learning to program computers and at least 4 years reading every journal/book I could find on cognitive science.  Finally, two decades ago, I found myself working for Microsoft, learned of govt/military programs like CYC, and applied.  Somehow, because I worked for “Microsoft” and knew a programming language that mostly only PhDs learned, they thought I was qualified.  I was given the job of reverse engineering Doug Lenat’s CYC and creating a smarter/faster version in this obscure programming language (Prolog).  CYC was (and still is) the govt's “most likely to succeed in AI” project.  I was a whippersnapper who worked with scientists mostly twice my age (back then I had not realized many of them were writing the articles I had read a decade before.  Now quite a few of them are dead.)  Anyways, none of them were connectionists, but they still seemed to believe in “spontaneous consciousness”.  Within a few years our funding model changed (after 9/11), and I had become disabled and retired, resuming my childhood dream/path of discovering how to create consciousness.  I found some old work by Erik T. Mueller, who based his work on Roger Schank (Scripts, Plans, Goals, Understanding, etc., of which I had read all the books), and about 10-15 years later...

It was 2015 when I came up with what I thought was finally a cognitive architecture that would be able to perform tasks that a honey bee could perform.  (This was five+ years ago.)  So then began the thought experiment.  Five years later I think I am finally ready to start building it.  But it is, of course, based on an idea that, if it worked, would still be compatible with the research data discovered about humans and animals, just not with the philosophies of the “anthropological fallacy” the scientists of old seemed to have to believe in order to get evolution accepted by society at the time.  I won't go into this since we have way too much ground to cover, but I’ll just hint that this “anthropological fallacy” is why animals only “signal” and do not “communicate”.

( Editor's note: It’s fine if you want to leave this here, but I think it's more fair to say that Darwin’s theory of evolution should be taken as a timeline of differentiation, not a hierarchy of animal intelligences. But, as the Catholic Church directly influenced the scientific community of the time and behaviorists followed, quotes like these were largely ignored: "My object in this chapter is to shew that there is no fundamental difference between man and the higher mammals in their mental faculties.” (Darwin, 1871, 1896, p. 66), or “Spiritual powers cannot be compared or classed by the naturalist: but he may endeavor to shew, as I have done, that the mental faculties of man and the lower animals do not differ in kind, although immensely in degree. A difference in degree, however great, does not justify us in placing man in a distinct kingdom, as will perhaps be best illustrated by comparing the mental powers of two insects, namely, a coccus or scale-insect and an ant, which undoubtedly belong to the same class. The difference is here greater than, though of a somewhat different kind from, that between man and the highest mammal.” (The Descent of Man, p. 98). )

What I realized was that there was no way for the 1975 “food in the middle of the river” honey bee experiment to have the results it did using connectionist implementations (less than 30k neurons), but a “narrative model” could work on a bee’s physical hardware.  And the narrative model was required, since the bee communication protocol is 80% syntax and 20% semantics.  (Syntax here means that before the dance, the bee ensures the other bees understand where the dancer believes the sun is oriented; the rest of the dance is expected after that.  When phase 2 starts, the bee starts transferring information; in phase 3 the bee is “resetting its position” and needs to be sure the others don't assume it contains any semantic data.)
In other words, the bees have to track what phase the dance is in to interpret the information.  This would require a sequegen information structure, since certain phase transitions are considered illegal (see the sketch below).  Bleh, it takes too long to explain everything, but there were 100s of experiments that I studied over the years that gave evidence for “Narrative Thought” in just about every corner of the animal kingdom.  I’ve wondered if I ought to hide most of the animal findings and data I interpreted through this lens, as most people are only comfortable believing humans are capable of seeing their world through a narrative framework.
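One way to formalize that claim is as a tiny phase grammar in Prolog. This is a hedged sketch of my own, not the author's bee model: only legal phase transitions parse, and only phase 2 carries semantic payload:

     legal_transition(sun_alignment, transfer).   % phase 1 -> phase 2
     legal_transition(transfer, reset).           % phase 2 -> phase 3
     legal_transition(reset, sun_alignment).      % phase 3 -> next cycle

     semantic_phase(transfer).                    % only phase 2 carries meaning

     % A dance parses only if every phase transition is legal.
     valid_dance([_]).
     valid_dance([P1, P2 | Rest]) :-
         legal_transition(P1, P2),
         valid_dance([P2 | Rest]).

     % Extract the phases an observer may interpret semantically.
     payload(Dance, Meaningful) :-
         valid_dance(Dance),
         include(semantic_phase, Dance, Meaningful).

     % ?- payload([sun_alignment, transfer, reset], M).   % M = [transfer]
     % ?- valid_dance([transfer, sun_alignment]).         % fails: illegal transition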

For decades, researchers kept creating S-R experiments in order to justify a hypothesis that had been created less than 20 years prior, then for another 20 years to support a then 40-year-old hypothesis.  How many years are we at now? 60? 80?  No matter how many experiments they do, they will undoubtedly ensure they don't do anything that might invalidate 100+ years of science!  Btw, my theory of narrative thought, had it been implemented in animals, _still_ fits all the same S-R result data collected.  With any existing theory, an influx of new researchers to a field can cause continued expansion on the same base ideas.  New researchers tend to regard those older existing theories with due reverence, or seek safety in confirming older data, and it seems that after about 60 years it becomes unlikely the theories will be examined for alternatives to their base premises unless those premises continue to be a fundamental part of the education.

Questioner:  Thanks, get back to “ 2nd chamber presents..” 

LOGICMOO:   The 2nd chamber presents a decision to the 1st chamber as a short narrative of the decision it created.

The 1st chamber is aware of the details but not the inference process.  The 1st chamber takes credit for the work of the 2nd chamber.  We knew this was at least possible due to what is shown in this article about Michael Gazzaniga's split-brain studies: http://www.powerofstories.com/our-brains-constantly-confabulate-stories-which-builds-a-meaningful-narrative-for-our-life

Each chamber is made up of several 1000s (not millions) of neurons and is semi-conscious (with ant-like smarts), enough to be able to visualize sequegen messages and ask for more or different information.  Your consciousness is such a “chamber”.  It is not possible for it to know what the rest of your chambers know until you make informational requests to them, and they make informational requests to each other and often back to your (primary narrative) consciousness.  The reason we assume we are conscious is that, like most chambers, when we process call/response sequegen traffic and message transfers, both in and out, the resulting imagens cause the sensation that we are alive and thinking.

Questioner:   Did you just somehow move the problem away from the 1st chamber into the 2nd?  How did the 2nd chamber make the decision?

LOGICMOO:    So back to how chambers make decisions...

The 1st chamber can be picky about the narrative.  It still must “vet” the 2nd chamber's thoughts/narrative to ensure they make sense.

Sometimes more chambers are supplying slightly different narratives.  For example, take 4 chambers:

The 4th chamber’s narrative, although it made sense to the 1st, created the wrong imagens (thus cognitive discord).

The 3rd chamber may have offered a better decision, but chamber 1 was too inexperienced to understand the explanation or see its merits.

The 2nd (the winning) chamber was the one that fit with the self-image of the 1st (that is, it could see itself doing that).

Even the 2nd chamber is a consumer of more chambers 2a, 2b, 2c (referred to as M-Consciousnesses elsewhere) and itself had to do the same sequegen-to-imagen process.  The 2nd chamber does the same vetting process as the first before submitting an answer to the query of the 1st chamber.
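A minimal Prolog sketch of this vetting, assuming hypothetical predicates (narrative/2, understands/2, fits_self_image/1); it reproduces the four-chamber example above, where the 2nd chamber wins:

     narrative(chamber2, take_the_left_path).
     narrative(chamber3, wait_for_more_data).   % not understood: chamber 1 too inexperienced
     narrative(chamber4, turn_back).            % understood, but wrong imagens (discord)

     understands(chamber1, take_the_left_path).
     understands(chamber1, turn_back).

     fits_self_image(take_the_left_path).       % chamber 1 "can see itself doing that"

     % The winning narrative must be understood AND produce fitting imagens.
     vet(Winner, Narrative) :-
         narrative(Winner, Narrative),
         understands(chamber1, Narrative),
         fits_self_image(Narrative).

     % ?- vet(W, N).   % W = chamber2, N = take_the_left_path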

Questioner:   So “it’s turtles all the way down?” 

LOGICMOO:  Not exactly.  Take, for example, the visual cortex: it only has 5-7 chambers.

Let's take the visual cortexes: they have responses they do and don't tell you about.  They are all looking for a story/narrative, just like the supposedly conscious chamber we call ourselves is.  We have identified that the eyes (V1/V2)-left / (V1/V2)-right talk to (or "narrate sequegens" to) V3 but not to our consciousness; we can't get involved until V5.  We, our consciousness, have direct links to the V5 chamber.  Separately from us, V4 compares itself with both V5 and V3; gripes go on between V3 and V4, which work out the details, and these are stored in V5 until both V4/V5 contain the same patterns.  So we see V5, which only gives image narratives we can already understand.

When we try to visualize something we just looked at, the request is processed and shown in V5.  When an image is present (narrated) in both V4 and V5 we see it physically.  This is how/why we see things in our mind's eye when in V5 but not in V4.  (FYI, this description might mix up a few V’s but will be revised.)  Also, V3 paints (sends sequegens) quite a bit to V5 (only by request), so conventional science has confused the cognitive load it assumed came from V4/V5.  (A good example is that the color channels are V3->V5, not V3->V4->V5.)  This makes it easy to identify the same shapes even when they are different colors.  The sequegen channel codings for shape, distance, size, movement, and color are all separated.  This channel separation is true for perception the same way it is true for recurrent sequegen activations (or recall of memories).  This is why we are able to create new permutations or discrete-ify recalls.  The visual cortex, you can see, is not “turtles all the way down”, but a highly specialized structure.
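Here is an illustrative Prolog rendering of that channel separation (the V-area wiring above is tentative, and these predicates are mine, not the system's): color routes V3->V5 directly while shape goes through V4, and because channels stay separate, recall can recombine them into new permutations:

     route(shape,    v3, v4).   route(shape,    v4, v5).
     route(distance, v3, v4).   route(distance, v4, v5).
     route(movement, v3, v4).   route(movement, v4, v5).
     route(color,    v3, v5).   % color bypasses V4

     reaches(Channel, From, To) :- route(Channel, From, To).
     reaches(Channel, From, To) :- route(Channel, From, Mid), reaches(Channel, Mid, To).

     % Separate channels allow recall to permute shape and color freely.
     recombine(imagen(Shape, Color)) :-
         reaches(shape, v3, v5),
         reaches(color, v3, v5),
         member(Shape, [cube, sphere]),
         member(Color, [red, green]).

Backtracking over recombine/1 enumerates all shape/color permutations, mirroring the claim that channel separation lets us identify the same shape under different colors and create new combinations on recall.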

TO BE CONTINUED - still working

Questioner:   How do decisions take place?

     The chambers' normal heuristic:

           What is the easiest cognitive load to take on while still receiving lovely narratives?

   

Questioner:   Is that like “Self-efficacy: Toward a unifying theory of behavioral change”?

Questioner:   How do humans talk?

Questioner:   Why do animals not talk?

Questioner:   unconscious decisions ?

Questioner:   So how does learning take place?

LOGICMOO:  “Accommodation vs assimilation” is considered to be part of a theory of social cognition, but it happens long before the “social” aspects.  When you try to merge two sequegens they either fit or they do not.

For example, a child with a “dog” schema expects it to only do dog things, and the next four-legged animal they meet is narrated as a dog doing dog things.  This is assimilation. ... The schema for “dog” then gets modified to restrict it to only certain four-legged animals.  That is accommodation.
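A minimal sketch of that fit-or-modify step, assuming a schema is just a list of required features (hypothetical predicates, not LOGICMOO's code):

     :- dynamic schema/2.

     schema(dog, [four_legged]).        % initial over-broad schema

     % Assimilation: the observation fits the existing schema.
     classify(Features, Category) :-
         schema(Category, Required),
         subset(Required, Features).

     % Accommodation: narrow the schema with a distinguishing feature.
     accommodate(Category, ExtraFeature) :-
         retract(schema(Category, Required)),
         assertz(schema(Category, [ExtraFeature | Required])).

     % ?- classify([four_legged, meows], dog).   % succeeds: cat assimilated as "dog"
     % ?- accommodate(dog, barks).               % restrict the schema
     % ?- classify([four_legged, meows], dog).   % now fails: accommodation done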

LOGICMOO:  I won't claim it goes on and on this way, because there are some important types of chambers we have identified by how they “vet” narratives.  Some chambers only emit sequegens and do not do any vetting.

 Btw, none of them use approximations of probability or even “reward”s.  Yet from the outside they still appear to approximate, to learn statistically, and to respond to rewards...

Classical conditioning was reinterpreted as a What-to-expect Rule; operant conditioning was reinterpreted as a rule about what to do when a particular circumstance is encountered: a What-to-do Rule.
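Rendered as Prolog facts (my illustration, not the system's rule base), both rule forms are simple lookups rather than learned probability distributions:

     expect(bell_rings, food_arrives).             % What-to-expect Rule
     when_encounter(lever_present, press_lever).   % What-to-do Rule

     % Reacting to a circumstance is a lookup, not a sampled reward estimate.
     react(Circumstance, do(Action))      :- when_encounter(Circumstance, Action).
     react(Circumstance, expect(Outcome)) :- expect(Circumstance, Outcome).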

Coding Theories are theories about how sensory stimuli and mental state are stored and processed inside the mind. LOGICMOO follows the "function being done by the mind" and "not the form of the brain", but LM489 must still encode its experiences and learn from them.

It becomes easier to understand LOGICMOO's "Multiple" Coding Theory (MCT) by first understanding other Coding Theories not used by LOGICMOO.


Dual Coding Theory - 1960's Cognitive Linguists

     (Figure: Two, One, Zero, Non-LOGICMOO Coding Theory Examples)


Single Coding Theory - Used by most Neural Nets

Like many systems, a single-coding system can

  • Perform predictive coding on Logogen sequences.
  • Perform conceptual blending on Imagens.

LOGICMOO can do both: if a system can correlate connections between Imagens and Logogens, then it can

  • Predict Imagens!
  • Blend Logogens!

Effectively, “Dual” Coding Theory takes two mostly incompatible types of encodings and provides a system that makes connections between them:

Imagens <--> Logogens, which enables Imagen Sequences.
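A toy Prolog sketch of that bridging (illustrative names only): once imagens and logogens are linked, predictive coding learned on the logogen side transfers to the imagen side:

     link(imagen(dog_shape),  logogen(dog)).
     link(imagen(bone_shape), logogen(bone)).
     next_logogen(logogen(dog), logogen(bone)).   % predictive coding on words

     % Predict an Imagen by hopping image -> word -> next word -> image.
     predict_imagen(Imagen1, Imagen2) :-
         link(Imagen1, L1),
         next_logogen(L1, L2),
         link(Imagen2, L2).

     % ?- predict_imagen(imagen(dog_shape), Next).   % Next = imagen(bone_shape)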
 

Schank’s Natural Language Understanding technique
Conceptual Dependency Theory is similar to Dual Coding between English and animation scripts.

LOGICMOO Multiple Coding

Defines abstract Term datatype instances that group/connect/unify/compare via 

  • Sequentially, as “lists”
  • Non-sequentially, as “sets/bags”
  • Two-way reflexive equality
  • One-way transitive subsumptions

One-way transitive subsumptions 

This provides LOGICMOO the ability to store actual analogue codes by maintaining the digital "boundary" (Sub-Imagens → Imagens), since the system can perform subsumptive unification on Imagens.
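A minimal sketch of such one-way, transitive subsumption over Imagens (the little lattice below is a made-up example, not LOGICMOO data):

     subsumes_directly(shape(polygon),       shape(quadrilateral)).
     subsumes_directly(shape(quadrilateral), shape(square)).
     subsumes_directly(shape(quadrilateral), shape(rectangle)).

     % One-way and transitive: general subsumes specific, never the reverse.
     subsumes(General, Specific) :- subsumes_directly(General, Specific).
     subsumes(General, Specific) :-
         subsumes_directly(General, Mid),
         subsumes(Mid, Specific).

     % Subsumptive unification: a stored general Imagen matches a more
     % specific incoming one (the digital boundary over the analogue detail).
     imagen_match(Stored, Incoming) :-
         ( Stored = Incoming ; subsumes(Stored, Incoming) ).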

Encoding Type Usages

  • Termified MUD 
  • Mesh/3D-Plot (Visual) 
  • English description
  • Sequences of Primitive Animation Operations
  • Sequences of Animation Names
  • Narrative of Intent

----------------------------------------------------------------------------------------------------------

STILL WRITING

----------------------------------------------------------------------------------------------------------

QUESTIONER: I understand the info about “Chambers”.  Would you describe what a TOLM Chamber is?

A Chamber is like a small-scale ant, though a real ant would still use 100 chambers.


QUESTIONER: Are these, 1 and 2, similar to Kahneman's system 1 and 2?

Almost, yes... my numbers 2 and 1 would be reversed from Kahneman's 1 and 2.

QUESTIONER: Is it like Numenta's 1000 brain hypothesis?

Close but Numenta seems to be stuck in Hebbian weights                                                                 

QUESTIONER: I like the idea about the importance of narratives in TOLM, though.  Not sure about the implementation; are you using an NN?

No, the narrative model made NNs a bit unlikely.. At best, NNs are designed towards logogen/imagen encoding.

Our implementation was designed for sequegen/imagen encoding.

QUESTIONER: These are messages in the form of sequences? A sequegen?                                                        

Correct

QUESTIONER: Can it be represented as text, so I can understand it? A short example perhaps? What are the components of such a sequence? 

Here is a bit deeper about the information encoded in sequegens: https://logicmoo.org/xwiki/bin/view/Main/Developer/LOGICMOO%20Overview/Pipeline%20Overview/ 

This goes into a PrologMUD, which is a chamber:

     declare_curent_object(sand)+current_object(tan) 

In the current test environment I have them at this high level.  I'm doing it at a high level, as I don't think keeping that at a crazy subsensory level will make things go faster for me.

QUESTIONER: And a sequegen is a logical expression, similar to first-order logic, that is sent to a chamber?

Correct, but the hard brittleness of logic is circumvented via circumscription logic

QUESTIONER: Circumscriptions were invented as a means to solve the grounding problem 

yep.. each chamber has a miniature ungrounded view 

though in a way that is grounding.. hehe 

at least the kind of grounding that we do in thinking 

QUESTIONER: Yeah, that has been the problem with logic-based AIs 

yeah, I think people threw the baby out with the bathwater rather than going ahead and committing to solving it

QUESTIONER: Let's look at the real world. It is virtually endlessly complex. 

yeah, whatever system tries to interact with it would need to create a model exactly the size it can handle
AND a system that can envision it at that subjective size

QUESTIONER: First of all from its mere size and all the interactions that occur 

I was concerned about the egg-cracking problem... which so far the chamber system corrects

the egg-cracking problem is that any logical world we create is going to be too thin and too unrealistic

since there are millions of details that go into a robot trying to crack an egg to make an omelet in real life

the robot/humans still need at least one level of detail they can work with

QUESTIONER: What happens inside a chamber once a sequegen enters? 

When a sequegen enters, it creates a world state.. that gets built up

until you'd say a mini-world is created

(since the goal is to test whether accommodation/assimilation works right .. and we see it end up still simulating the stochastics we see in nature)
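( Editor's sketch: here is a hedged Prolog rendering of that build-up, loosely echoing the declare_curent_object(sand)+current_object(tan) example above; the predicates are illustrative, not the PrologMUD source:

     :- dynamic world_state/2.

     % A sequegen entering a chamber asserts world-state facts one by one.
     enter_sequegen(Chamber, seq(Events)) :-
         forall(member(E, Events), apply_event(Chamber, E)).

     apply_event(Chamber, declare(Object, Property)) :-
         assertz(world_state(Chamber, holds(Object, Property))).

     % The accumulated facts ARE the chamber's mini-world.
     mini_world(Chamber, Facts) :-
         findall(F, world_state(Chamber, F), Facts).

     % ?- enter_sequegen(c1, seq([declare(sand, tan), declare(sand, granular)])),
     %    mini_world(c1, W).
     % W = [holds(sand, tan), holds(sand, granular)]
)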

QUESTIONER: Looks like a kind of logical structure. So is this related to... what is a MUD, then?

Multi-User Dungeon.. like a text adventure; each chamber has a small view of a scene, but in a MUD.

Well, I wanted a MUD a few years back (and then wrote it) that had the capability to provide an infinitely complex or stupidly simple world and have both representations be equally useful. So then I re-used the MUD code to help the TOLM implementation

QUESTIONER: TOLM seems to be a theory explaining the human mind, if I understand correctly, and the MUD is just an environment for the AI so that you can interact with it

Yes.. there is one outer MUD that we can interact from; I wanted to see if the TOLM MUDs would model the outer MUD at all

I realized that if I failed at transferring the model, despite them being close to an identical representation, then I must've really done a poor job.  Versus a person trying to model the real world this way: they will never know if they did a poor job.

QUESTIONER: Ordinary MUDs are programmed in ordinary programming languages, so Turing complete 

yeah .. that was why I wrote a new kind of MUD

this kind of MUD has unbound callable code

that is, Prolog tolerates ungrounded future variables

QUESTIONER: So anything that can be modelled in an ordinary MUD must be possible with TOLM 

yes 

QUESTIONER: That's a good idea. 

Surprisingly, everyone but us seems to think the secret to consciousness is the modeling of the real world.. I don't think this is so

QUESTIONER: Then, who knows how ever complex physics is going deep into its structure (atoms, subatomics, etc) 
QUESTIONER: So, in all practicality, we can say that reality is infinitely complex 

definitely 

QUESTIONER: But a computer simulation cannot ever be 

yet can be moved forward and back in complexity "as needed" 

QUESTIONER: What does that mean? 

in PrologMUD a person doesn't have fingers until they try to put on a glove that requires exactly five fingers.. afterwards the MUD frees the fingers from memory
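( Editor's sketch: with illustrative predicates, not the PrologMUD source, that as-needed detail might look like:

     :- dynamic part/2.

     % Fingers exist only while something demands them.
     require_fingers(Person, N) :-
         (   part(Person, fingers(N))
         ->  true
         ;   assertz(part(Person, fingers(N)))    % instantiate on demand
         ).

     wear_glove(Person, glove(NFingers)) :-
         require_fingers(Person, NFingers),
         format("~w puts on a ~w-fingered glove~n", [Person, NFingers]),
         retract(part(Person, fingers(NFingers))).  % free the detail again

     % ?- wear_glove(alice, glove(5)).
)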

QUESTIONER: Ah, clever, that reminds me of Minecraft world generation. 

the level of detail of the MUD varies moment to moment.. sort of how, when we imagine things, we fill in details only as much as we care

QUESTIONER: Perhaps even reality is that way, according to some theories of quantum physics (wave function collapsing and all...) It's not decided before it is needed 


hehe yeah 

in a PrologMUD chamber I expect some levels of detail never get past a single animated idea

but those chambers get combined towards other chambers etc

QUESTIONER: Wait wait - there's a problem I think. If the world in a chamber is underspecified where does that additional information come from when needed? 

the "Outer MUD'' (for code reuse) is a Prologmud that is mostly always instantiated.. it doesn't disappear the fingers ``as-needed”

it acts like a conventional MUD .. but the inner chambered MUDs do what we are talking about 

QUESTIONER: But the idea of fingers is instantiated in the original chamber? 

So we have, for interactions, a non-chambered PrologMUD (chamber-0!)

And then the 1st/2nd chambers use instances of the MUD that fingers disappear in, but yeah... currently I leave the fingers in the original MUD

but, at least, I make them appear whenever first looked at; then they don't disappear

like the minecraft generation thing 

QUESTIONER: Yeah but where does the definition of the fingers come from in the first place? 

but in Chambered PrologMUDs they disappear the moment the entity stops thinking about them 

They start out in a special text file like...

https://logicmoo.org/gitlab/logicmoo/logicmoo_workspace/-/blob/master/packs_sys/logicmoo_agi/prolog/episodic_memory/adv_data.pl 

but there are definitely far fewer helpers than the library size .. instead of the numbers growing exponentially.. they instead shrink exponentially

QUESTIONER: And they're maybe not active unless needed? 

active or not they'd fall on deaf ears 

here is the kicker about this theory.. if you look at the language size it is not very big 

QUESTIONER: So, if I understand correctly. Visual imagination is using some, but not all, chambers. But seeing the world is using all (in congruence)? 

(well heck, I am really not confident I understand the size, and your summary is more important)

right, the vision system mixes down into a handful of chambers rather than an army of 1000s

the V5 language is a product only of the imagination, which has an initial set of plenty of things it can see

QUESTIONER: Cool 

the visual imagination is pretty important 

but the same process can happen with any sensory system and has similar X1-X5s 

QUESTIONER: Makes sense 

oh and X5 and V5 speak the same languages 

V1 and X1 .. i dunno 

QUESTIONER: In X5 and V5 "ambulance" is the same   (Like  for hearing and vision)

Yes, you get it.  Even if this theory sounds convoluted, it at least bootstraps with what we know we have and doesn't pretend things will magically have emerged

QUESTIONER: Nah I think you have a quite developed theory..  I prefer theories like this without magic 

secretly I think it is the theory everyone has a sense they could have thought up.. it just contains too many anti-anthropocentric values

QUESTIONER: People in general don't like reduction 

Maybe we've gotta make it inflate human egos to let us create this for humanity?

QUESTIONER: There "has to be qualia" and such 

in this theory the consciousness was the easiest part 

the harder part is things like ensuring conditioning still seems to appear in the ways that we test for 

but I think so far it still works, in that it appears to act statistically from the outside

QUESTIONER: You can probably connect ordinary classifying neural networks to X1, and use that as ground truth 

one use for NNs I found was just the order in which libraries are accessed

another is I'd love to let an NN spy on sequegens and see if I can replace a chamber or two

the underlying problem is that what we've been talking about would need NNs to somehow create this bass-ackward system of processing.. which is likely to never score very well

QUESTIONER: Yeah, I'm not a believer in NNs myself 
QUESTIONER: It's become the default go-to toolbox for quick and easy function approximations 

as simple as a pocket calculator is.. an NN can't ever replicate such processing

QUESTIONER: I wouldn't bother. Would be cool to see MUDs use some of this though 

just PrologMUD is pretty cool, even without the secondary work towards AI

I haven't made it user friendly, but I'd like to dedicate a few engineers to doing that

it's so hard to find people that are good at Prolog.. to use it so hardcore

Language is fundamental to the human experience and shapes the narratives we all share. Nothing is more intuitive and accessible to one human than another human. Natural Language Understanding is about providing the context for a back-and-forth conversation that makes sense.

Our system will use a process of converting natural language into logical representations, making language conversion easy and seamless and providing a platform that is accessible to everyone. Once the system has this information, it can work to formulate new solutions to issues for you and will always be able to explain itself in plain language.


Copyright © 2020 LOGICMOO (unless otherwise credited in page)