First, let's establish the baseline

The popular ML camp assumes that a "critical mass" of capability can create human-level awareness, just as they assume happened in nature through human evolution.

To understand this camp better, let us pose two questions:

Q1_LEARNING

  •  What cognitive structures make a creature capable of learning [some task]? 

Q2_PERFORMING

  • What cognitive structures make a creature capable of carrying out a task?
     

Solving Q1_LEARNING

Machine Learning (ML) researchers think they need to answer Q1_LEARNING first. They expect that its answer will inform their answer to Q2_PERFORMING. That is, researchers imagine they will figure out what structures allow a human to learn something before they specify the mental structures of the resulting post-learned skill.

Hebbian science currently claims to have figured out what structures allowed "simple animals" to learn something. 

Yet, that science is still incapable of providing, or even proposing, the "post-learned" mental coding structures that would exist in said simple animals.

We have been studying this for a long time and are coming to realize we simply don't know enough about neurochemistry to model the form of a brain and have it work. We cannot model the forms of neurobiology well enough to divine their function, let alone model consciousness. Perhaps we should try to model the functions the brain performs instead?

Hybrids have already been developed

Symbolic AI (GOFAI) has modeled some of those functions fairly well, but we realized it alone would never be sufficient due to its "brittleness". (In a later discussion we will explain a non-stochastic approach, our Egg Cracking Solution, that overcomes these obstacles.)

Machine Learning's rediscovery was exciting to many. (Yes, we had it in the 1960s, and it wasn't all they had hoped for.) Machine Learning sacrifices the details of Q2_PERFORMING in order to spend its time on Q1_LEARNING.

Symbolic AI is the opposite (and may be closer to asking the right set of questions). These are the two main methodologies that have shown some promise, but each is weak at exactly what the other is good at. Thus people have proposed hybrid methodologies. This sounds good, and many great papers have been written over the past 20 years on how it should be done. Sadly, these hybrids have already been shown to not be good enough.

The anthropocentric fallacy has led to the theory of "Emerged Awareness"

Some people still believe that many animals, bees for example, are merely stimulus-response (S-R) machines due to their low neuron counts. They think of metacognition as a "complex behavior," special to humans and some higher mammals.

Since they have observed that complex behavior seems to emerge from simpler behaviors (as in bird flocking), they assume human-like intelligence can be achieved by first solving simpler problems.

They think they need to approximate "primitive" animal S-R first:

  • An insect that avoids obstacles and hunts down rewards

  • A classifier that tells apart images of friend and foe

They think this will give us a promising baseline that can then be expanded, scaled, and optimized toward more complex and intelligent behavior (a minimal sketch of such an S-R agent follows below).
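To make the S-R baseline concrete, here is a minimal sketch of the kind of agent this camp starts from. It is our own illustration under assumed names (SR_TABLE, sense, act), not any published system: a fixed lookup from discrete stimuli to motor responses, with no learning and no metacognition.

```python
import random

# A pure stimulus-response "insect": a fixed table maps stimuli to actions.
# There is no learning and no self-model -- exactly the baseline the
# emergent-awareness camp proposes to scale up.
SR_TABLE = {
    "obstacle_ahead": "turn_left",
    "reward_scented": "move_toward_scent",
    "nothing":        "wander",
}

def sense(world):
    """Reduce the world to one of a few discrete stimuli."""
    if world["obstacle_distance"] < 1.0:
        return "obstacle_ahead"
    if world["reward_scent"] > 0.5:
        return "reward_scented"
    return "nothing"

def act(stimulus):
    """Look the response up; no deliberation involved."""
    return SR_TABLE[stimulus]

# One simulated time step.
world = {"obstacle_distance": random.uniform(0, 5), "reward_scent": random.random()}
stimulus = sense(world)
print(stimulus, "->", act(stimulus))
```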

Next, they assume that by solving common everyday problems of the kind humans solve, the "hidden features" of the mind, such as awareness and metacognition, will magically emerge. So far, systems have been developed that can:

  • Play Atari games

  • Detect salience in textual documents

  • Transcribe spoken audio into text

  • Read some text in English and answer questions about it

  • Play chess and Go really well

It's a wonderful bonus that these systems (such as neural networks) can do this without prior experience, after several hundred thousand tries, but that does not invalidate other systems that work on startup.

Some "false" positive aspects to this development path

LOGICMOO contends they only imagine they are seeing "incremental progress," which leads to a confirmation bias that they are on the correct path.

We know that the size and complexity of those problem domains can increase as computers get faster and as mathematics makes their systems more efficient.

Having created systems that do all of the above tasks well, they assume that with more "training" such a system can keep improving with experience: "learning is taking place."

Next, they want to make the same model solve several of the above problems "at once": a single NN trained on all those activities that shows no substantial learning degradation when switching between them (a toy demonstration of that degradation appears below).
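The degradation they hope to avoid is easy to reproduce. Here is a minimal, hedged sketch (our own toy: a two-weight logistic model, nothing like the systems above) in which sequential training on a conflicting second task erases competence on the first:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(flip):
    """Toy binary task: the label is the sign of feature 0, optionally flipped."""
    X = rng.normal(size=(500, 2))
    y = (X[:, 0] > 0).astype(float)
    return X, (1 - y) if flip else y

def train(w, X, y, lr=0.5, epochs=200):
    """Plain full-batch logistic-regression gradient descent."""
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    return (((X @ w) > 0) == (y > 0.5)).mean()

Xa, ya = make_task(flip=False)   # Task A
Xb, yb = make_task(flip=True)    # Task B: labels conflict with Task A

w = train(np.zeros(2), Xa, ya)
print("after Task A, accuracy on A:", accuracy(w, Xa, ya))   # ~1.0

w = train(w, Xb, yb)             # continue training on Task B only
print("after Task B, accuracy on A:", accuracy(w, Xa, ya))   # collapses to ~0.0
```

The two tasks here are deliberately contradictory, so the forgetting is total; real multi-task networks usually degrade less completely, but along the same lines.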

They think this because that is what they believe happened in natural evolution: bigger, more complex versions of simple S-R animals evolved their way into the pinnacle that is the human. They believe they have shown that NNs can do lower-animal survival work, and that the same systems can do higher, harder things like Q&A, and therefore that those primal neurons can be brought up to human-level tasks. Eventually, they think, we will put their AI into the real world; only then would a consciousness emerge.

To paraphrase Lenat's "critical mass" theory for CYC: once CYC finally gets enough expert-system rules, the puzzle will fall into place and suddenly "SkyNet is self-aware." OK, perhaps we all thought Lenat's idea suspect; after all, humans don't work that way! But how is Lenat's hypothesis much different from that of those who study NNs?

The "go-to" answer is: "Yeah, so? Humans are NNs. Evolution made NNs; it worked for us."

Here is how LOGICMOO is different

LOGICMOO's Alternate Theory

Awareness (sometimes called consciousness) is, at the very least, the ability to experience one's own thinking process. Depending on how we experience those thoughts, we form a sense of self ("self-awareness"). We grow accustomed to a "presence" and personify this presence as being ourselves. For humans, self-awareness is something we dread losing; that is, we have a drive to "preserve" that presence. Douglas thinks "self-awareness" needs hardware of only somewhere between 4k and 11k neurons. (See https://en.wikipedia.org/wiki/List_of_animals_by_number_of_neurons for a list of "self-aware" animals.)

"Could it be possible that most all creatures got this very early in evolution?"

Could this be the mechanism that motivates animals to preserve their own selves?

The rest of the neurons

We go on to posit that the rest of the neurons constitute simple animal behaviors, pre-programmed a priori as "behavior scripts," along with many empty behavior scripts. (The more nerves you have, the more a-priori tricks your "snail-level consciousness" can do once your brain matures enough.) A sketch of what such a script store might look like follows.
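Purely as a hedged illustration of this idea (the names BehaviorScript and ScriptStore, and the 8,000-neurons-per-script figure anticipating the cluster size discussed below, are our own assumptions), a script store might look like this:

```python
from dataclasses import dataclass, field

@dataclass
class BehaviorScript:
    """A narrative-like step list; no steps means an unfilled a-priori slot."""
    name: str
    steps: list = field(default_factory=list)

    def is_empty(self):
        return not self.steps

@dataclass
class ScriptStore:
    neuron_budget: int            # more nerves => more script slots
    scripts: list = field(default_factory=list)

    def slots(self, neurons_per_script=8000):
        return self.neuron_budget // neurons_per_script

# A snail-level store: a couple of innate scripts plus empty slots that can
# only be filled once the brain matures.
store = ScriptStore(neuron_budget=80_000)
store.scripts = [
    BehaviorScript("retract_into_shell", ["sense touch", "contract muscle"]),
    BehaviorScript("follow_slime_trail", ["sense trail", "orient", "advance"]),
]
store.scripts += [BehaviorScript(f"empty_{i}") for i in range(store.slots() - 2)]
print(store.slots(), "slots,", sum(s.is_empty() for s in store.scripts), "empty")
```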

ToEc: "No, because humans are born blank slates!" Oh, so many ToEc believers think humans are born special, unlike beavers, spiders, and birds.

Every 8k cluster of nerve cells could perhaps contain a small bit of agency; not full agency, but enough to have its 'own story,' just like any other simple animal. So when a cluster communicates, does it somehow give its peer cluster part of its own memory? A toy rendering of that question follows.
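Here is that speculation rendered as a toy, hedged sketch (the Cluster class and its communicate method are purely illustrative assumptions): each cluster keeps its own small story, and communicating copies a fragment, never the whole, into a peer.

```python
import random

class Cluster:
    """A ~8k-neuron cluster with a small bit of agency: its own story."""
    def __init__(self, name):
        self.name = name
        self.story = []

    def experience(self, event):
        self.story.append(event)

    def communicate(self, peer):
        """Sharing: the peer receives part (not all) of this cluster's memory."""
        if self.story:
            fragment = random.choice(self.story)
            peer.story.append(f"heard-from-{self.name}: {fragment}")

a, b = Cluster("A"), Cluster("B")
a.experience("light flashed")
a.experience("food found")
a.communicate(b)
print(b.story)   # e.g. ['heard-from-A: food found']
```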

DNA Challenge

What would a spider, once born, need to know in order to build a web? This data would need to fit into traits that can be passed down to offspring. Narrative coding, through a scripting form that acts like a "language," is the most compressed and reliable way to pass it in DNA. (Stochastic coding wouldn't be small enough to pass down.) A back-of-envelope comparison follows.
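As a rough, hedged illustration of the compression claim (all the numbers below are our own guesses, not measurements), compare the bytes needed to store a short web-building script against the bytes needed to store dense connection weights for even a modest network:

```python
# Back-of-envelope comparison; every figure here is an assumption.

# Narrative/script encoding: a handful of step tokens.
script = ["anchor_thread", "drop_line", "spiral_out", "spiral_in", "wait"]
script_bytes = sum(len(step) for step in script)     # tens of bytes

# Stochastic/weight encoding: dense weights for a small network.
neurons = 10_000                                      # spider-scale guess
synapses_per_neuron = 100                             # guess
bytes_per_weight = 4                                  # 32-bit float
weight_bytes = neurons * synapses_per_neuron * bytes_per_weight

print(f"script:  ~{script_bytes} bytes")              # ~45 bytes
print(f"weights: ~{weight_bytes / 1e6:.0f} MB")       # ~4 MB
```

Whether DNA actually encodes behavior this way is an open question, which is exactly the point of the challenge above.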

Maybe we'd need to understand the encodings that are passed down through DNA?

 

Copyright © 2020 LOGICMOO (unless otherwise credited in page)