"folk-science" which has not been proved, yet somehow remains unquestioned.   (Things I, Douglas, believe are false!)

  1. Complex behaviors like walking in humans are built upon other, less complex behaviours, like crawling; crawling upon scooting. Animals don't go through this because they are pre-programmed to do these things instinctually.
  2. Humans are blank slates (tabula rasa), having to learn everything from their environment. Being so moldable, humans are more intelligent.
  3. Neurons can also be conditioned with reward/punishment, so it makes sense that beings made up of neurons learn the same way. The observational trainability of inductive behaviour in animals is a result of the observational trainability of the synapses!
     
  4. RNNs (Recurrent Neural Networks) have exhibited this behavior. We even got them to start to learn to move when installed in animals with 6 legs, and eventually to walk.
     
  5. Humans might not learn to talk if not pressured socially to. We learn to evolve the crying signal to make requests because we are born into a social world. Animals use purely signals the animal would be unaware of making.


  6. Since using language is a learned skill, it requires lots of extra (and faster) neurons. Evolution has fine-tuned the neurological systems of all animals, including humans. Brains are immense supercomputers using associations, operating extremely fast. It has even been discovered that, in order to do the processing (in an NN-like way), neurons work at an immense molecular level for learning.
     
  7. The molecular processing of neurons gives you enough mental bandwidth for hearing yourself think (of course, this started only after you were taught language); this is a unique emergent skill. So was your imagination.

  How long would it take to find evidence supporting any of these claims? YOU AND I HAVE NOT YET OBSERVED ANY EVIDENCE OF ANY OF THE ABOVE BEING TRUE... just "circumstantial" evidence for ALL of the above.

Another crazy belief shared by most AI-ers:

  1. Non-Hebbian theories of GOFAI failed in the 1980s, causing an AI winter. GOFAI is susceptible to paradoxes and Gödel's incompleteness. Also, it doesn't account for your imagination and adaptation!

Could the study of AI/AGI be damaged by the assumptions above? What sorts of changes to the field would take place if the above were not accepted?

 Worse still, the above have begun to become "rules of thumb" about human and animal intelligence!
 

When I interpret the same evidence, it shows me that several counter-theories are more likely!

  1. When the mothers put their infants on the ground, they propped them up in a sitting position rather than placing them down on their stomachs. As a result of spending all of that time upright, the kids never needed to learn to crawl before walking.
  2. The observational trainability of inductive behaviour is a result of Sequential Judgment.
     
  3. If we look at the number of neurons in an animal and look at its behaviour, we _need_ to see a clear connection. AnimatoryLogic is the only possibility if so. (See The Spider and Beaver.)
  4. Humans would learn to talk _even_ if not taught to. Deaf babies don't need to be talked to; they still start babbling.
     
  5. Hearing yourself think requires fewer neurons (not more!). Most animals hear themselves think.
     
  6. Associatively learned languages would require more neurons (because associative learning in general requires more neurons).
  7. Having an imagination is required in order for any animal to parse its senses.
  8. The claim that complex behavior must result from a complex system with many different parts with different specific functions is wrong: complexity can instead result from just a few simple rules. (This is a very important concept.)

Animatory-proto-language is a language that, when executed, creates an imaginary scene inside a virtual world. This virtual world is the world that your mind's eye sees.

The vision process is done by a comparison between what your eyes see and what the Animatory-proto-language says they should be seeing.
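To make that comparison concrete, here is a minimal Python sketch (my own illustration under assumptions, not part of the theory's canon): a scene description is "executed" into a tiny grid-world mind's-eye image, and vision is reduced to diffing that expectation against what the eyes report. The grid size and the names (render, diff, "circle") are hypothetical choices made for the example.

```python
# Minimal sketch: "executing" a proto-language scene description into a
# tiny imagined image, then comparing it against what the eyes report.
# All names (render, diff, "circle") are illustrative, not from the theory.

def render(description, width=9, height=9):
    """Execute a scene description (list of primitives) into a 2-D grid."""
    grid = [["." for _ in range(width)] for _ in range(height)]
    for shape, cx, cy, r, mark in description:
        if shape == "circle":
            for y in range(height):
                for x in range(width):
                    if (x - cx) ** 2 + (y - cy) ** 2 <= r ** 2:
                        grid[y][x] = mark
    return grid

def diff(expected, observed):
    """Vision as comparison: report only where expectation and input disagree."""
    return [(x, y, e, o)
            for y, (erow, orow) in enumerate(zip(expected, observed))
            for x, (e, o) in enumerate(zip(erow, orow))
            if e != o]

# "Japanese flag": a red circle (R) on a white background.
expectation = render([("circle", 4, 4, 2, "R")])

# Pretend the eyes report the circle shifted one cell to the right.
observation = render([("circle", 5, 4, 2, "R")])

for x, y, e, o in diff(expectation, observation):
    print(f"at ({x},{y}) expected {e!r} but saw {o!r}")
```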

Was it memories or imagination?

If we are discussing things in terms of my theory of an animatory protolanguage (thinking in a proto-narrative language):

Birds and mammals, and this includes humans, think in a special language called animatory protolanguage. When the mind thinks in protolanguage, the imagination (a buffer) plays out the details described by it. This is the language that memories are encoded in. For example, if you are truly trying to picture an automobile, the visual protolanguage parts of your brain will draw an image of a car for you. If you are to imagine the car speeding down a racetrack, the animatory protolanguage draws a racetrack with a car speeding down it. It annotates the scene with just enough detail that you believe you are actually imagining it; if you thought about it a little bit harder you would see a more realistic car, or, if you imagined it to be a cartoon car, it would appear more cartoon-like: your animatory protolanguage adds the cartoonish visual effects to the car.

This is not unreasonable to imagine any of us doing, since an organ the size of an insect or a spider contains all the processing power that a single animatory-protolanguage machine would need to operate.

Though secretly I assume that even spiders and small insects are thinking in animatory protolanguage! I won't go into those details unless you want me to. How animatory protolanguage works is that it is an executable language.

I think that AGI based on animatory-proto-language is the technique that will be highly successful. Whenever an agent within our mind sees such scenes in a virtual world, it can recreate a mirror of that virtual scene using its own personal animatory proto-language. A special agent is then used to describe the differences between those worlds. The details emitted are actually the recordable thoughts that would convert one into the other. Interestingly, these don't have to be "physics aligned", since the level of detail required is only based on how well the virtual world required them to become the same thing.

Let's practice using the animatory-proto-language that exists inside your mind right now.

--------------------------------------------------------------------

Exercise (#1): The Japanese flag. (You may do the exercise with any flag, as long as it can be created from memory (as in, you are able to draw it with colored chalk), but it should not be your own country's flag (unless you have never seen it waving on a flagpole).)

Picture in your mind a Japanese flag.  

Did you picture a red circle on a white background?  Good.

Recall at least one or two times you have seen that flag with your physical eyes.  

Recall at least one time you surely would have seen the flag had you looked for it.

Visualize that flag on the top of a flag pole waving in front of your home.

Attempt now to convert that motion picture to black and white (greyscale).

----------------------------------------------------------------------

Have you ever seen the flag without looking at it? Most people would infer "of course," such as on a large Olympics poster, or sitting among many other flags on a restaurant menu. But how do they know it wasn't, in fact, missing from there? You would have had to look, right? Otherwise, it was not a real memory you had.

Was it difficult to visualize the flag on a flagpole? Most people would say that it wasn't particularly hard to imagine, provided they had seen any other flag doing that. Making the flag wave in the wind or drape down from the pole is not particularly difficult either. In fact, this level of imagination seems to be freely available at very little mental cost.

When you converted that image to grayscale, how much extra processing power did it take? Did you have to pause, or notice you had to slow the motion of the flag to account for the extra sensory processing you were about to perform? I doubt you noticed, because I doubt you actually manipulated a "picture stored on neural fibers"; that is, I doubt we have enough neurons in our heads to afford such low-level "play space". We only imagined we did.

At what age do you think you learned this skill? What other animals could possibly have this skill?

What events happened in your childhood that could have conditioned you into having this skill? After a bit of thinking you will probably surmise that this imagination is just built in (not actually learned behavior, but instinctual). What control structures (such as your own inner voice, which we just used with the flag) could you use to create other imagined events? When you are not using your inner voice, it is quite likely to be your silent version of your AnimatoryLogic.

I know, we (society) somehow have a misconception that any language behavior is very complex and builds from other complex behaviors (though these are built from simpler behaviours). Thus it makes sense to doubt that animals (even humans) use AnimatoryLogic. BUT imagine how many billions of extra neurons you would need if you had only very lucky associations with which to make up, and most of all to _control_, this playspace, without the benefit of your proto-languages. I believe exponentially more neural processing would need to take place. (In fact, I doubt it would even be possible!)

The author had a monumental problem with the exercise: "I cannot recall even one vivid memory of seeing the flag, except for the one in which it gets created... (yet the rest of the exercise was very, very easy!) OK, I suppose that I do recall, 40 years ago, drawing the flag with crayons in a spiral notebook (turned sideways). I know I must have been looking at a book full of flags; the problem is that right now I don't recall _seeing_ the book or any flags from it! The things I recall most about this flag are the notebook lines and how the circle seemed to be imperfect! Also, I can't replicate that imperfect circle again, though I claim that I can see it right now!... What gives?! What I _do_ recall well is recently seeing the South Korean flag and noticing how much it looks like the Japanese flag. Again, why was every other part of the exercise still so easy?"

 

Minimal required representation space

The first 5 years of my 34 years of spending every waking moment of my life studying human and animal cognition.. 

  Yes, in 1985 (as a young teen) I built my first neural network. I converted human speech into allophones, then used a 2-layer NN to map the allophones to text. I even had a secondary N-layer network that paid attention to an Augmented Transition Network (ATN) to help guide (heuristically divide) what it focused on; for example, it focused on nouns more once it completed the understanding of an adjective. (That secondary helper-network was trained, on its own, to predict which parts of speech followed other parts of speech.) It was so good it appeared to divide count nouns from mass nouns (develop its own vocabulary?!). But what I was most interested in was seeing whether it would find a secret equation by which particular allophones mapped to different parts of speech. For example, "crawled" and "walked" end with the same allophone, so it could construct "past tense verb" from that. That wasn't happening, so I switched from English to Esperanto. Theoretically I know it would all work, though; I just needed more than 20K of RAM. By the way, it did do some things well. I was well on the way to creating a computer to do my stupid English high-school homework.
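For readers who want a concrete picture of that setup, below is a minimal modern reconstruction in Python/NumPy of the core idea only: a 2-layer network trained to map an allophone triple to output letters. The allophone codes, letter encoding, toy lexicon, and layer sizes are placeholders I chose for illustration; none of this is the original 1985 program.

```python
# Minimal sketch (assumed reconstruction): a 2-layer network mapping an
# allophone triple (e.g. [A14, A34, A15]) to three letters (e.g. D, O, G).
# Sizes, codes, and training data are placeholders, not the 1985 originals.
import numpy as np

rng = np.random.default_rng(0)
N_ALLOPHONES, SEQ, N_LETTERS = 64, 3, 27      # 64 allophones, 3 slots, A..Z plus blank

def one_hot(indices, width):
    out = np.zeros(len(indices) * width)
    for slot, idx in enumerate(indices):
        out[slot * width + idx] = 1.0
    return out

# Toy "lexicon": allophone triples -> letter triples (indices into A..Z_).
lexicon = {
    (14, 34, 15): (3, 14, 6),    # "Dah-ha-guh" -> D O G
    (14, 34, 11): (3, 14, 11),   # hypothetical  -> D O L
}
X = np.array([one_hot(a, N_ALLOPHONES) for a in lexicon])
Y = np.array([one_hot(l, N_LETTERS) for l in lexicon.values()])

# Two weight layers with a sigmoid hidden layer, trained by plain gradient descent.
W1 = rng.normal(0, 0.1, (X.shape[1], 32))
W2 = rng.normal(0, 0.1, (32, Y.shape[1]))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(2000):
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    delta_out = (out - Y) * out * (1 - out)
    delta_h = delta_out @ W2.T * h * (1 - h)
    W2 -= 0.5 * h.T @ delta_out
    W1 -= 0.5 * X.T @ delta_h

pred = sigmoid(sigmoid(X @ W1) @ W2).reshape(len(lexicon), SEQ, N_LETTERS)
letters = "ABCDEFGHIJKLMNOPQRSTUVWXYZ_"
for triple, row in zip(lexicon, pred):
    print(triple, "->", "".join(letters[int(i)] for i in row.argmax(axis=1)))
```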

One issue, though: I made up the name "Minimal required representation space". I noticed that had I wanted to store a single word mapped to its allophones (like a hard-wired mapping of "Dah-ha-guh" (64 allophones x 3 time sequences) to DOG), I had to train and train and train, showing it both words at once, until eventually it could show me what it knew. Though I wanted to go in the reverse direction, I would have needed to train it with them switched (so I gave up on the reverse direction). Anyway, I finally computed a saturation point at which every new word it learned made it start to do worse on the other words. I took the RAM space I represented the NN in, and that was M; then I took the size of the space that made up the data, N (darned if I can remember the exact figures, I am sorry!). "Dah-ha-guh=DOG" was [A14,A34,A15]=[D,O,G]: 6+6+6+8+6+6+6 bits, 44 bits of data, vs. the NN, which was 20K.

I was lucky it could memorize 100 word mappings.

Results

      4,400 bits in Protolang

    163,840 bits in an NN

(Now imagine what AnimatoryLogic could do with the 159,440 extra bits!)
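A quick back-of-the-envelope check of the figures above, using only the numbers already stated in the text (44 bits per word mapping, 100 memorized words, 20K of RAM for the NN):

```python
# Rough arithmetic behind the "Results" figures above.
bits_per_mapping = 6 + 6 + 6 + 8 + 6 + 6 + 6     # [A14,A34,A15] = [D,O,G] -> 44 bits
words_memorized = 100

protolang_bits = bits_per_mapping * words_memorized   # 4,400 bits
nn_bits = 20 * 1024 * 8                                # 20K of RAM -> 163,840 bits

print(protolang_bits)             # 4400
print(nn_bits)                    # 163840
print(nn_bits - protolang_bits)   # 159440 "extra" bits left over
```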

Throughout those years I experimented with teaching it sequential memory (such as counting to 100). I must have read 40 books on psychology to figure out what I needed to teach it. Anyhow, the "Minimal required representation space" issue was interesting in that I assumed (like people are banking on nowadays) I'd hit a special amount of RAM where the NN would become the cheaper place to store data. It just needed lots of training; somehow the neurons would adapt into a highly condensed perfection like a person's. In order to reach that, they'd need to become self-generative. So (I won't go on about this) I started working on generative nets (which surely will soon be built by Google); this is where "just-in-time self-training" took place: the system relearned, out of a small short-term dictionary, the things needed to solve a problem, and it would self-train again and again until it read out the correct mappings. Long story short, I just couldn't see the "Minimal required representation space" problem ever being solved by NNs.

However, I do use NNs a little (I'll explain later).




 

   

(Un)realism: How do we compare two photographs?

We have to understand UnRealism, despite its being the hardest thing to accept, since it goes against everything we feel and see. Yet mathematically it makes complete sense. Unrealism is a codeword that points out the fact that nothing we imagine is any more unreal than anything else we imagined.

When two digital video cameras look at two photographs, they can do a bit-by-bit compare. If you have a vertically misaligned camera, everything will go to hell without the ability to at least detect the degree of misalignment.

When two analog "stream cameras" (patent pending!) look at two photographs, they can be vertically/horizontally misaligned.

In each camera, a vsync pulse is created to represent the brightest point in the v-loop (Z-scan raytraced). Images can also be horizontally misaligned, as 3 hsync pulses happen exactly after the first vsync. The camera's receptor lens is rotated at exactly the speed it would take to do 4 vsyncs. Not surprisingly, the created picture is not for human consumption. Interestingly, this camera arrangement, when used by automation, makes image classification so easy that a biological paramecium could do it! This is not the best news for computer vision. Why? It is that the camera's aperture is a huge mess to emulate digitally. Interestingly, the design of the camera does not require an exact frame rate. Also, the images captured can be very lossy, but what is special is that it is a predictable amount of loss. I designed it after seeing how old VCR tape heads used their rotating aperture to cover a wide magnetic tape, and how that creates a new problem for VCRs: the need for "tracking". As a seven-year-old child I was able to predict the exact positioning of the tracking knob just by looking at the picture, to make it make sense to me. What my camera does is spin the photoelectric receptor plane at the speed at which the rules encoded by what is required to make each frame (at least 4, anyway) start at the same time. In other words, if you move the camera slightly to the left, the speed of the rotations will spin up or down to create the least amount of sync dissonance. The key point here is that the receiver of the signal might not even notice!

What AnimatoryLogic ISN'T

Proto Language does not have to be this https://en.wikipedia.org/wiki/Universal_language

or one of these https://en.wikipedia.org/wiki/Proto-language / https://en.wikipedia.org/wiki/Proto-Human_language 

but it does have to be this …

What AnimatoryLogic IS/DOES

   Controls when sensory data is to be emulated.

   Controls Call/Response

   Controls Coarse motor functions

   Controls Fine motor movement

   Can be arranged to allow one Utterance to replace several (Animal Signaling)   

   Usually is Well formed

   Can be used to shunt/ignore Utterance

   Can be ambiguous

Call/Response:

  This term is borrowed from Music Theory; see definitions there.

   It means that for every action there _must_ be a punctuation.
 

The system evaluates information by the measure of the presence of:

  1. Call/Response
  2. Sequential Script Loop levels (actually, this is Rhythm)
  3. Existence of a known Narrative
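As a toy illustration of what such an evaluator might look like, here is a short Python sketch that scores an utterance stream on those three measures. The concrete tests (a response following every call, back-to-back phrase repetition as "rhythm", and substring matching against known narratives) are stand-ins I invented for the example, not the theory's actual criteria.

```python
# Toy scorer: rate an utterance stream by the three measures listed above.
# The concrete tests are illustrative stand-ins, not the theory's real criteria.

def has_call_response(stream):
    """Every 'call' should eventually be followed by some 'response'."""
    calls = [i for i, (kind, _) in enumerate(stream) if kind == "call"]
    responses = [i for i, (kind, _) in enumerate(stream) if kind == "response"]
    return all(any(r > c for r in responses) for c in calls)

def rhythm_depth(stream):
    """Count how many times the same short phrase loops back-to-back."""
    phrases = [text for _, text in stream]
    return sum(1 for a, b in zip(phrases, phrases[1:]) if a == b)

def matches_known_narrative(stream, known_narratives):
    """Check whether any known narrative appears as a contiguous run."""
    phrases = tuple(text for _, text in stream)
    return any(phrases[i:i + len(n)] == n
               for n in known_narratives
               for i in range(len(phrases)))

stream = [("call", "build a web"), ("response", "build a web"),
          ("call", "sit in the middle"), ("response", "sounds good")]
known = [("build a web", "build a web")]

print(has_call_response(stream), rhythm_depth(stream),
      matches_known_narrative(stream, known))
```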



 

Using a Call/Response system:

  • Self Call: "I like this Moosehead beer because ...X"
  • Self Response: "Sounds good"
     

The requirement to talk ourselves through things is later useful when defining systems like

 "long division as done on paper" or "p implies q".

 In fact, it has even tricked a few people (like Lenat) into believing we think using logic instead of merely poetically.

Sequential/Rhythm Recognition (this is also rhythm; flubbed in Douglas' first video)

  We _must_ recognise a sequence, or we must create a new one to recognise later.

  •  Sometimes just the cadence of the inner voice's thoughts is good enough to pass our smell test:
  •   "...and that is what she said" + <imagine the sound of a cymbal>

I think this is why music and poetry affect us: they help us win at Wittgenstein's language game without requiring the internal call/response mechanisms which we use to self-evaluate ("sounds good").

This internal requirement later becomes grammar recognition and cadence (except in cases of FOXP2 gene issues). And it is why we like to sing and speak, dance and drum.


 

Narrative 

   Everything _must_ happen for a reason

  (see Malcolm Gladwell's "Why sometimes it is not a good idea to ask people what they think" (The coke/pepsi problem))

  Why not?  Because they will try to make up something that "sounds correct" to them (even if untrue)

  We vet our ideas based on how well we internally describe them to ourselves.

 I was reading all of Schank/Abelson, and it took me about 30 years to finally realize the big secret. It was while reading one of my _least_ favorite Schank books ('Tell Me a Story') for the 3rd time: "You cannot remember anything about which you did not already create a narrative."

Adaptive behavior enables a child to get along in his or her environment with greatest success and least conflict with others. This is a term used in the areas of psychology and special education. Adaptive behavior allows learning of everyday skills or tasks

We make up narratives in order to make things begin to make sense. 

By filling in gaps with our internal dialog and re-spinning stories towards optimism or pessimism, we can make the world make sense. Regardless of how flawed (logically) that internal dialogue actually is, what matters most is that it "sounds like something we'd be used to hearing ourselves think." (We grow fond of this mechanism and claim it to be our thoughts.)

Example: Mice were solving mazes that had different "markers" to be seen along the path to cheese. Later, when they smelled cheese, the researchers noticed a sequence of brain activations. It was tested (against a copy of the maze markers), and the order of that sequence was the same order in which the markers appeared in the maze. A possible conclusion is that the mice created a "sentence" (in the same way bees dance to create "sentences" that convey locations). When confronted with the same mazes with missing markers, the mice filled in the gaps mentally (I think more for affirmation than habit). And when two markers were switched, they would start over and re-babble (mentally) the previous sequence, this time correcting it; but to pass the litmus test it was better to omit (pretend not to see) the out-of-order marker.
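Here is a small sketch of the bookkeeping that this reading suggests: the stored "sentence" is an ordered marker list, and recall either fills a gap from memory or quietly omits an out-of-order marker so the sentence still passes. The function and the "omit rather than reorder" rule are my paraphrase of the example above, not the study's actual model.

```python
# Sketch of the maze-marker "sentence" idea: replay the stored order,
# fill in missing markers, and omit (pretend not to see) out-of-order ones.

def recall(stored, observed):
    """Replay the stored marker sentence against what is actually seen."""
    replayed, skipped = [], []
    position = 0
    for marker in observed:
        if marker in stored[position:]:
            # advance to it, mentally filling in any markers that were missing
            index = stored.index(marker, position)
            replayed.extend(stored[position:index + 1])
            position = index + 1
        else:
            # an out-of-order (or unknown) marker: omit it to keep the sentence intact
            skipped.append(marker)
    replayed.extend(stored[position:])   # finish the sentence from memory
    return replayed, skipped

stored_sentence = ["stripes", "dot", "star", "cheese-smell"]

print(recall(stored_sentence, ["stripes", "star", "cheese-smell"]))
# -> fills in the missing "dot"
print(recall(stored_sentence, ["stripes", "star", "dot", "cheese-smell"]))
# -> keeps the stored order and omits the swapped-in "dot"
```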

It is not a far reach to believe that, historically, such primal shenanigans led to several aspects we are all very familiar with, such as prayer, or "positive thinking", or learning how to "shut off" this internal voice or "quiet the mind".

Exercise with Agencies

As a computer programmer, if someone asked me to write two programs that would drive robots to compete over some resources, and they needed to be equally matched to make it a long competition, but they asked me to make sure one robot was not self-aware... I'd look at them like they were crazy, not because they wrongly assumed I could make a "self-aware" robot, but rather the opposite: they would be expecting me to write extra logical structures in order to somehow abstract away important information about who is who. I would have to assume they must mean whatever it is they think humans are doing.

One of the older theories of cognition is that animals narrate mentally in a recallable proto-language. This is because it would be too unwieldy to put every experience into the exact same frame of reference. So, in other words, they also insert, as part of their narrative, what they were thinking about at each step, and then associate what goes on in the world with what they were doing. This doesn't sound much different from current theory? Well, the "proto-language" bit really is. Later thinkers on this topic, like Wittgenstein, claimed this was the basis of human natural language (I do not disagree), but unfortunately the rest of the world assumed this would mean that all animals were performing linguistic acts 24/7. And signals do not always act the same as linguistic acts, and sometimes the signaling creatures are just too simple.

Why it relates to language:

I can begin talking before I know exactly what I am going to say, so by the time I am done saying it I will have learned what I meant. Had something unintelligible been said, or something that violates sensibility, someone or something could point that out to me.

  Now replace that with:

I can begin thinking before I know exactly what I am going to think, so by the time I am done thinking it, I will have learned what I would conclude. Had something unimaginable been thought, or something that violates sensibility, someone or something in my mind could point that out to me.

Also, I can recite canned expressions I believe to be true, perhaps setting contexts to elicit better imagery, so that I can start describing that imagery in greater detail to myself, and thereby control my imagination. I am also using various "ears", like: "Is this who I am or not?", which might result in "Yes, that is exactly what I would think if I was me! Atta boy, keep going." Other times, despite passing the smell tests, it might come to "Oh dear, I wouldn't be thinking about it in that manner." So I am able to make some rule changes to ensure the survival of my imaginary mind. This meta-level is evaluating a consciousness, so that I even hear myself talk about my thinking. Also, by creating a separate narrative (in the same proto-language), I can force the justifications around each of my mental agenda transitions. Is that conscious? Very much so.

And I associate _those_ cognitive actions, the ones designed to create the narrative dialog, with what went on around me. Could I have used narrative monologue? No, because every mental action (call) was to elicit an imagined experience (response) that allowed me to compare my inner view with the world I was attempting to describe. The difference between that imagined experience (expectation) and the real world is what allowed me to focus on the differences. For example, when I am driving along the road, my imaginary road has no potholes and is a sort of grey, uninteresting thing. Yet when something appears, the narrative is out of sync, and I do [mental] damage control to add whatever it was into my current narrative. I will be able to recall the sighting of that pothole as a narrative event I avoided, and I would be able to tell you all about it. Yet had I been driving on a road with many, many potholes, I would have quickly added "many, many potholes" and would no longer be avoiding them, nor would I be remembering each one (each would create a weaker memory), despite having more extreme connections with my sensory life. Also shown in this example: I am not learning (remembering) based on constraints involving the sensory aspects so much as I am limited by the amount of information contained in my narrative. This is not because I am a complex organism; it is because I am a simple organism! That is, I cannot remember anything I didn't expend at least some energy attempting to think about.

My memories are often indexed around these "why would I have had that thought about that?" questions ("Because I just experienced that"). Unfortunately, this created a problem for behaviorists who were testing for associative memory. When I remember the inner-dialog narrations that talked me through experiences, I may appear to be reacting as if I am learning by associating experiences, instead of by associating internal dialogs. And indeed that was the data leading to the false conclusions that make up today's theories.

There is no reason to object to associative learning by observation and experience. The problem is only about where these associations are tied: falsely assuming they relate to the encoding of my sensory data instead of to the encoding of my narrative.

There were some people who showed that the associations appeared also to be linguistic, but sadly this was dismissed by assuming the linguistics are inductively associated with the sensory data.

Memory Exercises that Require Narrative Memory

 (#2) Recent Beach Trip: 

Imagine the last time you visited an ocean beach and answer questions like: Who were you with? Why did you go to the beach in the first place? How long did you stay?

Those questions you can very likely answer because you fit your experience into some sort of narrative (story) that fit into your life.

There are also questions whose answers you computed during the creation of that beach trip's narrative: Did you go into the water?

And some are likely going to be deductive: Was it cold or warm?

Learning from a narrative

 (#3) Lollygagging at the waterline

Pick a different time at the beach, the most vivid time, when you stood on the beach watching the ocean come right up to your feet but stopping short of reaching you. One where you can remember exactly the foam plowing its way towards where you stood. Remember a few times where you must have calculated exactly where to stand in relation to the water so that it stops just an inch away. Remember a minor incident when a ripple of ocean water traveling behind that foam, with a trajectory that was slightly faster and slightly diagonal, pushed the sea's frontier past where you stood.

By the way, how was the quality of the "level of visual detail" compared with the Japanese flag?

←----------------------------todo----------------------->

If we could do experiment #3 with a seagull

that was standing on a piece of driftwood (one who flew away to avoid being knocked off when the faster ripple hit): I posit that both I and the seagull would require an equivalent amount of storage, if we had at least the same narrative form of memory. The first time a second ripple or wave hit and she learned that not all ripples are alike? Does she even picture that the sea has a frontiered edge? Does she judge the speed of the wave as "moving a little too fast", whether it was one to fly away from or one to stand her ground on (the next time)? How detailed was her memory? Or was she using some conditioned associations instead? She and I have many sensory similarities; she and I could only judge the speed of the wave by the fact that moving objects create an elongated blur (we judged the speed by the size, neurally). So it was basically a "larger" object than it should have been, and of course it was moving. Eventually, when you study the neurophysiology that connects our brains to the environment, it becomes apparent that nature could not find very many viable changes. We both have two eyes that react to light by opening/closing the pupil (and so on down the list of sense organs and the nuances of their basic operations). We both use dopamine and norepinephrine. The point is just how much we are alike. Could we assume nature found a different animal brain that would still yield a viable animal to interface all this to?

There is a problem so far with the seagull idea (but only as I have so far described it).

Effectively I am saying: "We are all the same basic animal consciousness, with the same inner-voiced narrative dialogs, trapped inside different bodies and with different IQs." So why are animals like dogs, cats, horses, birds, etc. (if they think as much as we do) not learning to connect in increasingly advanced ways? Or beginning to ascend a Maslovian hierarchy? Is it for the same reason believed by connectionists (not enough neurons)? Actually no, but that is what I would have liked to think. Take these examples of some communication issues between a dog and a cat.

When the dog comes up to me looking like he is desperately missing something (his stuffed toy), I can glance over at the chair and he knows exactly what I meant: "It is under or behind the chair." If it is on the other side of the room, I glance there and he knows exactly what I meant. My dog knows other beings are all capable of this; he will even answer me this way. Me: "Where is your ball?" He'll glance over to show me.

The cat, on the other hand: if she wants something, we are totally stuck when it comes to language. I basically have to walk around the house presenting gifts until she says "Bingo, that is it!", even if it was the thing I first glanced at. She doesn't understand glancing, pointing, or fake throwing. She just waits for me to stop with my bizarre eye, hand, and arm seizures and go under the chair and find the catnip toy.

When the dog wants some of my dinner: eyes to plate, back to my eyes, back to plate, eyes to the floor, eyes to my plate, eyes to floor, drawing an imaginary path along which the food needs to travel. Depending on the imaginary path I draw with my eyes, the dog knows exactly whether he is getting any or not. If I pretend to misunderstand the request, thinking it involves the stuffed toy (say I glance back and forth between the floor and his stuffed toy and never look at the plate), he'll even calculate where to stand to block me from seeing the toy, to erase it from our language book.

The cat, on the other hand, when he wants my dinner, will stare directly at me, then swat my beard impatiently, saying "Move out of the way, it's my turn." (Not really any begging there.)

Are dogs more capable of learning? Despite these complex behaviours, they may have reached their pinnacle of success.

Whenever the neighbor kids in the apartment above me start hollering or wrestling, the dog and cat look at me (sometimes worried), trying to understand what the next move is. So for my cat I just "slow blink" at about the same intensity level as the noise, and by about the fifth time he understands "you are safe" (all cats are born knowing that slow blinking is the external way to say "no threat here" in their proto-language) and then goes and lies back down.

Cats even pay attention to which eye you blink with, as in a three-way standoff between myself and two other cats: the cats will blink only the eye that corresponds to my position in the circle, while the other eye symbolizes the other cat.

The dog, on the other hand, who also has an extensive vocabulary about these matters (packs watch each other for instructions) and who is very conscientious about eye movements, has not cracked the code of what I am saying to the cat. In dog-speak it should even have translated to "I am so not worried that I am closing my eyes." Even after seeing the cat lie down, this was not good enough. He still wants me to set some sort of policy about the sound, one that I seem to be unable to understand in his family's proto-language signals.

If we are all three the same animals, we humans might also have fully explored our repertoire and now be at our pinnacle. I know that must sound very unlikely, but might this be the case?

TI-bits

"Originally, TI was regarded as a hallmark of human cognition [1] and was thought to be based on logical deduction. More recently, there have been questions about whether TI requires higher-order reasoning or can be solved with associate processes like value transfer [22,23]. Subsequent work has shown that simple associative processes are not sufficient to account for TI performance [1]. As a result, animals that identify transitive relationships when trained with a five-element training procedure are commonly accepted to be capable of TI [24].  "

After reading from: https://royalsocietypublishing.org/doi/10.1098/rsbl.2019.0015 

I will attempt to quickly summarize the experiment:

They created a puzzle whose answers are not obvious using associative learning. Possibly, in fact, had purely associative learning been used, the answers given by the wasps might have even been worse than 50%. (I am assuming this; if not, it might be good to cook up an experiment showing them finding exactly the wrong answer. However, that would not have been easy to show, because of course we don't have a truly inhabitable control group that uses associative learning... lol)
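To make the setup concrete, here is a small sketch of the standard five-element TI arrangement (train on A>B, B>C, C>D, D>E; test the untrained B-vs-D pair), together with why the simplest "value transfer" bookkeeping (tallying how often each item was rewarded) cannot decide B vs. D, while an ordered-list ("sentence") representation can. This is my own toy illustration of the logic, not the paper's analysis code.

```python
# Five-element transitive inference: train on adjacent pairs, test the
# untrained B-vs-D pair. Toy illustration of the logic described above.
from collections import defaultdict

training_pairs = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "E")]  # left item wins

# 1) A purely associative "value transfer" tally: +1 when an item is
#    rewarded, -1 when it is not.
value = defaultdict(int)
for winner, loser in training_pairs:
    value[winner] += 1
    value[loser] -= 1
print(value["B"], value["D"])        # 0 and 0 -> B vs D is undecidable this way

# 2) A sequential / ordered-list ("sentence") representation: build one
#    ordering consistent with the adjacent training pairs, then just read it.
order = ["A"]
for winner, loser in training_pairs:
    if loser not in order:
        order.insert(order.index(winner) + 1, loser)
print(order)                                  # ['A', 'B', 'C', 'D', 'E']
print(order.index("B") < order.index("D"))    # True -> choose B over D
```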

The laugh here is that you are up against the over-inflated belief that all creatures, regardless, are built from flatworm neurons which, when a very, very small number of them are isolated (completely from the rest of the brain), appear to be trainable; thus all animal behaviour is assumed to be an outgrowth of that.

Burrhus Frederic Skinner program model of cognition vs a Ludwig Wittgenstein program model.

  1. The B.F. Skinner (associative learning/memory) model would require a spider's genome to have enough genetic material/space to at least organize and completely build up a brain from scratch into all the same neurological state "as if" a hermit crab had undergone a comprehensive regime of classical conditioning to compel it to build a spider web, and to do a good job of it as well. Not only that, it is already compelled to (by compelled, I mean a machine ready to "react" by building the web, and, when somehow the web gets torn down, to decide that it needs to build it again). By the way, this expectation, although drastic, is not completely unbelievable; in fact it is what the science currently believes. We are not too surprised, due to the fact that we don't tend to see genetic "losers" anymore after some millions of generations of handing out Darwin awards. Am I wrong that this is the current popular theory: that the animal, after it is born, acts only from an associative memory/training point of view? Implied in this theory is that everything it knows (instinctually) is evidently installed onto that same associative machinery.
     
  2. Now, "a" Wittgenstein-esque (proto-language) theory of animal behaviour is that the animal runs a "script" of pre-programmed behaviors, and that these scripts are only slightly altered after it is born. How this is different from the above is that it expects that, from the beginning, the animal is immersed in a sort of very simplistic and stereotypical "meta-inner-dialog." The spider is born singing to the script: "This is how we build a web, build a web, build a web; this is how we build the web, whenever it's torn down... repeat chorus, refrain; this is how we sit in the middle of the web, middle of the web, middle of the web; this is how we sit in the middle of the web, all day long... chorus, etc." Sounds strange? Not if you imagine that the phrases are very "canned" (the way birds are born with canned songs: "E-G-B-C#-B-F-A-A-A-G"). The minds of beasts silently carry out these phrases even if they are often out of place (basically on their own clock, ignorant of the surrounding tissue). This creature's brain is geared to work in "Call/Response" (a term from musical composition), with around 155 parts of the brain potentially able to chatter (often unintelligibly) at each other. A response is yet another canned mechanism. A response might be "E-G-B-B", which now re-excites the original "E-G-B-C#", which has now turned into what appears to be two different childish composers duking it out. There is another part of this virtual brain that is sort of "rocking out" (or "getting off") on the musical score "E-G-B-C#-E-G-B-B-E-G-B-C#-" and has no idea, nor even cares (as long as it's music!). Whenever there is too much silence, this part of the brain starts to jabber "A-A-B-B-D-D", hoping to incite the next musical riot (well, in its appraisal, the next masterpiece!). Realize that none of these parts are thinking on this meta-level; I am just anthropomorphizing a bit so you get a clearer picture. There are actually important evaluations/actions virtually going on: "Here is a call!" "Is there a response to my call?" "Here is the response." "Here is my response to the response (the next call)." "Will it be appropriate, so I can move on to the next sequence?" (Again, I anthropomorphize the "my".) Also, there are lonely mute parts that sort of enjoy this chaos. Other parts want to hear mostly the repeated scores they have grown comfortable hearing. Some non-mute parts (musically inept) produce calls and responses objectionable to the critics, but ignore any criticisms. Something like hunger might actually seem not to be very good at timing its announcements. A casual observer of the entire thing sees a cacophony of nonsense, whereas the participants are playing out what they think is a talented work of genius in "AnimatoryLogic". A few parts of the virtual brain here actually understand the language, and surprisingly "hear themselves think" (but oops, they are actually not thinking; they are merely observing), yet they take ownership of it all. Sort of like when, at a concert, some guy jumps up on the stage and dances: the crowd, annoyed by the guy's ineptitude, still cheers the band's music, but the dancer takes a bow. After the concert he has the gumption to request they hire him to dance at every show! That guy's name is "consciousness". What the band realizes is that he is the son of their producer, and unless they do hire him they won't get gigs anymore; yet if they do, their audience will think they are idiots.
No bother; at least there is no question as to the new pecking order or whom to blame. It turns out the guy actually possesses a talent for remembering every show, not in any sensical way, but at least he can rattle off a few city names. (I'll back out of this story before it corrupts too much information, since I might overstretch the importance of the guy's agency; he is, for you and me, just a gut feeling that we exist, that we have something going on and need to protect it.)

You might be thinking #2 is more complex than #1? That is probably due to my anthropomorphic descriptions.

The problem is that #1 requires more storage, as the animal has to have so many neurons (more computing power) to be able to recognise all the possible associative nuances of every single step. Say an animal had to lick clean all four feet: it would need a ton of associative data to program when *not* to be licking its feet (like when locomoting), a way to finish licking one foot and move on to the next rather than doing foot #1 some 100 times, and it would have to use associative inference to track when it was time to move to each new foot, so as to differentiate the current step from the previous one. It takes far more to encode Transitive Inference on a Burrhus Frederic Skinner software model of cognition than you would ever need on a Ludwig Wittgenstein software model of cognition. If a bee were an ultra-highly-upgraded tapeworm, its skull would need to be the size of a football! For a human to be an ultra-ultra-upgraded bee would require something exponentially bigger still. I am trying to say that the physics doesn't add up if you accept the modern theory.

"Well, the neuron is so highly specialized over millennia that each neuron is evidently capable of storing much more than was _originally_ thought possible. Frankly, how it all operates is some big mystery, but at least we are now at the beginning, and over the next decade we will be unraveling it all." Huh? What did we think was _originally_ possible? And what is the order-of-magnitude difference? Actually, as a geneticist you are uniquely aware that there is only so much detail that can be transcribed (even at the micro-microscopic level). You know there would at least need to be enough free space to ensure that a bird of a certain genus which has never met its own kind will still have the same call pattern as its immediate ancestors (I am talking about birds raised in laboratories to test this out), along with tons of other family- and genus-proprietary psychological behaviours and preferences. So common sense says there must be enough genomic data bandwidth.

The Wittgenstein model of cognition is the only thing that could make sense!
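As a rough illustration of that storage argument (a toy count only, with made-up situation labels): a Wittgenstein-style script stores one instruction per step, while a Skinner-style lookup has to store a reaction for every situation/step combination it might need to discriminate.

```python
# Toy storage comparison for the same behavior ("lick clean all four feet").
# Numbers and labels are illustrative only; the point is how the two encodings scale.

steps = ["foot_1", "foot_2", "foot_3", "foot_4"]

# Wittgenstein-style: a script is just the ordered steps plus begin/end.
script = ["begin"] + [f"lick {s}" for s in steps] + ["end"]
print(len(script), "script entries")                  # 6

# Skinner-style: an associative table must map (situation, current-step)
# pairs to the right reaction, including all the situations in which the
# step must NOT fire (locomoting, already-cleaned feet, etc.).
situations = ["resting", "locomoting", "eating", "grooming_started",
              "foot_already_clean", "threat_nearby"]
associative_table = {(sit, step): ("lick" if sit == "grooming_started" else "suppress")
                     for sit in situations for step in steps}
print(len(associative_table), "associative entries")  # 24, and it grows multiplicatively
```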

https://lh6.googleusercontent.com/PgJm1WVner6G8-PzAHHfTmijWXr6E0emvPBpDJ1sas1BdsQJdcrfqbeZzbhXOBdgVOdqTDYrS1c8hqWFCxxcB7IwyV_xCB8T_TNrUlgISXDlqKnzMOKg2eXfIRm7HiyF2k3E8ijZ






 

Laughter/MENTAL BLOGGING

On Sat, Jul 22, 2017 at 10:52 PM, John F Sowa <sowa@bestweb.net> wrote:

Suggestion:  Instead of debating politics, let's discuss Dilbert

cartoons.  Ask what issues about intentionality, collaboration,

ethics, deception, and reasoning are illustrated and/or violated

by the cartoons.  What ontological features make them funny?  Why?

Violation is often among the requirements for "funny": there is at least one (or more) very exact logical reason. For things to be funny immediately upon hearing or reading them, the being has already started to construct the story-line, so that when the part arrives that doesn't canonicalize appropriately to the expectations:

 

​​"I have had a perfectly wonderful evening, but this wasn't it."

In such non-wetware systems we call this unwind/unwind (in the DLPP Algorithm sense, rolling back things to start over). An animal's laughter, though, is closer to setting off an electromagnetic pulse (E.M.P.) to clear out the buggered story script.

So we use laughter as a neurological bomb to handle cases of cognitive discord created by a story. But we are also writing another narrative blog, one that describes how and why we are doing and thinking what we are doing and thinking.

This models *extremely* well in a Script Applier Mechanism and other Schankian constructs.

SAM, while recalling certain memories, can internally "blog" about its research. Mental blogs themselves become stories (it creates a story about the mental process of story creation it used). We get to reuse on them all the same mechanisms we use to understand the world, in order to understand the discord in ourselves. The funniest jokes are the ones that make fun of our self-narrative.

When G. Marx's 8-year-old daughter was barred from a club where her friend had brought her to swim, because they didn't allow Jews, he said: "She's only half Jewish. How about if she only goes in up to her waist?"

I might find dark humor funny, but not because of the content; instead, I am laughing disappointedly at our lack of the negative "normal" reaction.

When it comes to slapstick humor (or when a child laughs at another child for falling down): most [experiential] scripts have physiological information attached to them, like falling down ourselves and getting hurt, getting caught with our pants down, and other embarrassments. Those blog scripts get created during emotional duress or under physiological conditions we want to avoid (it can even be something that was exciting and not negative). These scripts happen to be mine-fields in which we must again drop E.M.P.s to clear them out when we activate them. (Also, our "blogger self" gets to re-spin the story (reactivation), this time making that previous script not so full of volatility.)

It is a very low-level Chinese Room.

--- DELETED --- DELETED --- DELETED --- DELETED --- DELETED 

https://lh3.googleusercontent.com/YOgbkke6fvU8bedrWokvR6AgKmNnadKeaLrLwkuMHCm_pM69inzX5kUeTlQqIP1Lm-kdxGBfwpTC10c4suh4GUCYJjgioHT60xNqUNtVYDSro0FYjCxBfp-gHBhiWieTFlnBaBnu

--- DELETED --- DELETED --- DELETED --- DELETED --- DELETED 

(Insects took a very different path, but as a preview... yes, even insects still use this same "mind model" that we chordates do. They are no more and no less machine than the human or the seagull. Sorry for this spoiler.)

You mentioned wasps are very likely not using the B.F. Skinner model of deduction in TI. Of course, neither do humans, as we don't have enough neurons to do such processing using "associative" deduction either. I agree, and had they done so, it would just be too expensive when compared to proto-language processing.


 

--- DELETED --- DELETED --- DELETED --- DELETED --- DELETED 

   but maybe useful content below:

  ...a difference, say, to perhaps make two species of bird have completely different mating calls (birds that have never been around their own species still end up somehow being born with exactly their species-specific call).

Imagine, though, a conclusion/conjecture that sounds unlikely to you:

 "Hurrah! We discovered that each bird species' personal species noise was not at all encoded in the genes that affect the psychology of birds; it turns out to be the diameter/tension of the vocal tract/chords!

"For example, the HornFetteredLark of South Australia, when angry about his territory being invaded, makes this tune: 'B-A-A-C#-B-F-A-A-A'. His cousins in Africa make only the sound 'A-B#-A'. For a while scientists thought the shorter duration was to send the message to back off without attracting predators. Well, evidently it is that their vocal chords are tighter! Or at least the birds with the tighter vocal chords were the ones that survived in Africa."

They are most definitely correct about needing a shorter tune in Africa. But obviously, whoever would say the above thinks vocal cord anatomy is complex enough to convert the 15 known vocal signals of birds into 400 genetically diverse yet "perfectly" reproducible bird noises. (Sorry to get simplistic here, but it will help with understanding something I am going to say later.) Nature might in fact place these genes in the same place it encodes what the bird "prefers to hear!", that is, in the genes that deal with "behavioural" traits. And it is transcribed on a part of the genome that is not very likely to get swapped out or mutated more often than every 3-5 generations. This allows situations where the difference between two species that are almost identical physiologically is that one had, by some genetic bad luck, an immune system susceptible to some common disease; yet by some happy accident only the birds that didn't like to listen to the same kind of music survived it. The immune-disease genetics most likely had nothing to do with the "song" genetics, yet this divisiveness caused survival.

So where am I going with all this by now? What would be the generic "level of detail" required to transcribe a neural imprint that provides behaviour detailed enough to build a web, and that already knows how to change her into a hunter if some genes elsewhere keep her from making strong enough web-forming materials? The answer, I am guessing, is "some fairly non-trivial level of detail." Let's next surmise that this "behavioral program" has to allow for at least some data loss. (Note: some data loss is fine, as such a program has to work with the inexactness of the possible environments anyway.) (We should allow "some" associative learning, but mostly of non-vital or simple case-by-case behaviors.)

          For a minute, think about the nuanced difference.

  But let's avoid either cop-out. So, basically, there are a few competing theories of animal intelligence.

  

Purely speculative here (you'd know if this argument I am about to make is not as sensical as I might think it is): imagine any of the mildly complex behaviours we see every day, such as a spider who is born knowing exactly how to set up a spider web (and near good food sources, too). Had we set out to create a small machine to build one spider web in one exact location, here is the pseudo-code: "begin-a-b-c-d-end". Now take that program and try to encode it associatively. This means that at any given point some associative description of its position has to be created, and the program has to decide which position it is most likely in, via step-by-step pictures that will be compared.

The representation would be quite complex. Backing up a step: if we now attempt to rebuild that same decision tree in specialized associative instructions, the size and complexity of the material will massively bloat! (If I am being confusing there, here is an analogy. Imagine two different instruction manuals for riding an escalator. The first manual shows a picture of a person 1) stepping onto, 2) standing on, and 3) getting off the escalator. The associative training manual, in contrast, would have to show not only those 3 pictures but also a hundred or a thousand extra pictures of every possible ill-conceived behaviour you might be tempted to do that would result in getting injured (such as doing a handstand while eating an ice cream cone), plus a few more to convince you to stand still the whole time during step 2. Maybe a better example: compare some sheet music on paper to the written instructions it would take to trace out everything the musician who reads that sheet music actually does.)

That is what it means to encode the behaviour Skinnerian-style instead of Wittgensteinian-style (Ludwig Wittgenstein).

Thus, for a number of years now, papers have been written to prove that associative conditioning is possible.

 

 Though despite being very "popular", simple associative processes are not sufficient to account for TI performance, or _really_ for any behavior or viable thinking model. I am quite a layman; though I have been in private (corporate-funded) research since 1999 in the area of Generalized Artificial Intelligence, please forgive my lack of scientific language. I'd like to bounce an idea off you that has kept me a little unsettled for the last 6 or so years, mostly because I have not found any obvious evidence to the contrary. But maybe you can help by providing contradictory evidence toward my hypothesis. First, though, let's set up a thought experiment:

 Wait, I can't remember what my feet looked like: maybe I had wet sand on the top of my bare feet that day, or maybe I was wearing flip-flops (it was decades ago)? How come I can't see my feet as vividly right now as I can see ocean water a half inch deep?

 I have a sister who we claim has a "photographic memory". We test her by showing her two photographs 3 minutes apart in which something minor changes: she can tell us the chair has been turned 45 degrees, the curtains are white instead of blue now, the license plate of the car has one digit changed. So when comparing with a photograph she saw three minutes ago, is she comparing a 3-minute-old optic-nerve echo of this 2-D photo with a current optic-nerve 2-D encoding? Wow, imagine the neuro bandwidth/processing power of my sister! Not really; I'll get to this shortly.

 She claims it is not any harder than me comparing "012345678_" to "0123_56789".
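The digit-string comparison she describes is cheap precisely because it is a position-by-position diff over a tiny symbol string, something like this sketch:

```python
# Comparing "012345678_" to "0123_56789" the way the text describes:
# a position-by-position diff over a short symbol string.
a = "012345678_"
b = "0123_56789"

differences = [(i, x, y) for i, (x, y) in enumerate(zip(a, b)) if x != y]
print(differences)   # [(4, '4', '_'), (9, '_', '9')]
```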

 But what if I had not seen numerical digits before?  

So anyway, can my sister go back and remember my footwear?! (She admits she can't.) But is it at all stored in her brain right now? (In other words, is the optic nerve's output that she supposedly buffers now somehow just scattered and mis-weighted, such that we just need the right un-scattering + re-weighting procedure to put it back together?)

The current POPTHEORY belief is that, indeed, the "sensory" data is there.

 So how much storage capacity do you need to record the output of an optic nerve for some arbitrary span of time (say 100 milliseconds)?
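A rough order-of-magnitude answer, using commonly cited ballpark figures and a deliberately generous simplification that I am assuming for the sketch (roughly a million axons per optic nerve, each treated as carrying about one bit per millisecond):

```python
# Back-of-the-envelope: raw storage to buffer an optic nerve for 100 ms.
# The axon count and per-axon rate are rough assumed ballpark figures.
axons_per_optic_nerve = 1_000_000      # ~1e6 retinal ganglion cell axons
bits_per_axon_per_ms = 1               # generous simplification: ~1 bit / ms
duration_ms = 100

bits = axons_per_optic_nerve * bits_per_axon_per_ms * duration_ms
print(bits)                  # 100,000,000 bits
print(bits / 8 / 1_000_000)  # ~12.5 megabytes, for one eye, for a tenth of a second
```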

LOGICMOO

So how many milliseconds of video footage did you watch while doing exercise #2? I watched (visualized) at least 3 seconds of total footage during exercise #3.

How much neuro-sensory bandwidth (raw neuro-mimicking space)  did I dedicate to encoding my beach trip decades ago?!  What about you?

Instead, I accept that you and I just utilized a "Movie Making" facility, like a theater.

Current science even finds that answer acceptable. But they add...

POPTHEORY:

"Yes, that theater seems to exist; we found it is even connected to the optic nerve in order to access known visual processing sites. See these papers [...] showing the scans that reveal the activation of the vision-processing parts of the neocortex. So that theater obviously also activates templates from the recurrent mimics [written when the sensory input was first indexed/multiplexed in awesome space-saving ways]."

 Me...

LOGICMOO

Makes sense. So you're saying that sensory input neuro-mimics upon arrival and is then later recalled in order to make me feel like I saw it all again. But really, do you think it is efficient that I re-submit it through the same interpretive process?

POPTHEORY Ok, maybe you are right, the re-processing is probably not required. But the initial processing hardware had to already be present so there is nothing gained by avoiding this step.

LOGICMOO Yes, I see your point. Not only that, we still need to create the *now* experience in order to _realize_ we are remembering.

 All of that is dandy, and it extends and supports other current theories.

POPTHEORY Over the period of many coming years, everything we think is complex behavior right now, we will discover is mere "conditioning" of neural pathways. This is so system-wide that we can always trace high-level behaviors back to some obscure set of preferences and associations.

 Even that theater, as complex as it seems, comes down to a set of associations used to find the "right" visual memories. Your human brain is special in that evolutionary/survival pressures selected the genetic mutation that allows you to recall things.

LOGICMOO Wow, we humans sure got lucky with that throw of the dice! That seems like shaking a box of Legos and having the horseless carriage invent itself.

POPTHEORY Sarcasm not lost. I'd say not a horseless carriage, but at least a bobsled. Understand that associative-structure machines can be very good at adapting and at emerging intelligent behaviors. For example, your bobsled will restructure and become a kayak once evolutionary pressure requires only waterproof bobsleds. (Take the smallest, simplest critters as the proof!) So simple, yet the emerged behaviors appear intelligent.

LOGICMOO OK, Let's go there

POPTHEORY See, most animals (take the brain of a spider) don't even have the hardware to go back and think about yesterday.

LOGICMOO Why do you believe that?

POPTHEORY  .. Because… 




 

https://lh4.googleusercontent.com/hp2yyWCg4e7UNKu2SDI0yFn8LDVhl2pmJObWsZjQwDJtle33k0KMsIiOWj82k_pMoY01ET8sFsIrby0L_ISGyjP8y0q3peMjbf75AGp6OdsupS829zJaAGAdPmKWbpHjcCy-kspn


 

https://www.youtube.com/watch?v=B3yNPBHR1Z8&feature=youtu.be&t=235


















 

[04:38] <dmiles>  i needed to create  a logical language that could make generalizations

[04:38] <tonyLo2[m]> you mean frequency and Confidence in truth?

[04:39] <dmiles>  right i didn't want to use frequency and Confidence  as a means/bandaid to cover up the non expressiveness 

[04:39] <dmiles>  well wanted to see at least what i was up against

[04:39] <tonyLo2[m]> synapses still have weight. How is this different?

[04:40] <dmiles>  they do have weight ..but their weights are actually to allow termination beyond the next neuron

[04:40] <tonyLo2[m]> I would argue that frequency and confidence are practically equivalent to spike frequency and synaptic weight

[04:41] <dmiles>  so would i had i thought synaptic weight had anything to do with confidence and how much it was reinforced by experience

[04:42] <tonyLo2[m]> Confidence is the weight of evidence - i.e. the past experience (in evidence)

[04:43] <dmiles> it makes that false appearance based on the fact it needs more juice to make it to the endpoints

[04:43] <tonyLo2[m]> I use a spiking neural network approach in my implementation. So (f, c) is equivalent to (spike rate, synaptic weight)

[04:43] <dmiles>  some weight juice is to jump over its neighbors completely (some frequency is used to cheat on knowing they just stressed the neighbor neuron, so the timing is to "sneak past" the recovering neurons)

[04:44] <tonyLo2[m]> That's a different connection/link but quite possible

[04:45] <dmiles>  if a neighbor senses too much weight/connectivity it tries to shuffle responsibility to a different neighbour

[04:45] <dmiles>  (this is how the system wires itself)

[04:46] <dmiles>  from the outside (to us) we think it is getting more important

[04:46] <tonyLo2[m]> If I understand your statement - I don't agree with it :)

[04:46] <dmiles>  and all evidence of training of animals supports both our hypothesis.. "see look! it got more connected!"

[04:47] <tonyLo2[m]> Oh - 'more connected' can be interpreted as more useful/relevant for sure

[04:48] <dmiles>  the catch is its not about weight it is about finding the right x,y,z neighbour

[04:48] <tonyLo2[m]> Although the NARS approach is that connection is dynamic and not fixed because of AIKR



 

Re-Introduction (Current Science Of AI)

 

Tags:
     
Copyright © 2020 LOGICMOO (unless otherwise credited in page)