The autism scenarios

Autism is traditionally analyzed in terms of the specific behavior of an autistic agent interacting with other agents. In this setting, autism can be characterized as a property of impaired communication with agents that share the same world knowledge and reasoning machinery (excluding the specific believing and pretending machinery). We pose the main autism question for the multi-agent scenario: how does the lack of proper believing and pretending in autistic agents lead to general miscommunication problems among the agents? This is a deductive link of the microscopic/macroscopic type: how deviations in the properties of individual agents affect collective behavior.

We highlight the following five steps towards building a logical model of autism and its applications.

  1. Building the belief and pretending model of autism:
    a) a logical program with an autistic/control switch, which understands a set of scenarios and comprehends (responds to) them like an autistic/control subject;
    b) a stand-alone logical program of pretending;
    c) the most adequate representation of belief.
  2. Increasing the estimation accuracy of the reasoning locus of autism by interactive generation of more specific experimental scenarios (What kinds of beliefs? Which ways of pretending?).
  3. Development of an autism revealing system. It will assist psychologists, physicians and parents in setting up a proper testing environment to detect autistic reasoning properties as early as possible.
  4. Design of an autism training system, which implicitly forms adequate concepts of believing and pretending in autistic children.
  5. Explanation of the miscommunication phenomena, given the formalization of autistic and normal agents.

This study focuses on tasks 1a) and 1b). The distinguishing property of the autism scenarios is that the K scenario component of autistic agents has reduced, "straightforward" definitions of pretending and believing. We show the adequacy of the MS (Metalanguage Support) scenario approach for representing a series of psychological experiments and for explaining the children's responses, given the autism/control "switch" in the scenario component K.

1. Axioms for knowledge

To extend the concept of knowledge for scenario representation, we consider the following meanings:

  1. An agent knows that some Object possesses some Feature.
  2. An agent knows some object (can identify it and distinguish it from other objects).
  3. An agent knows what a concept Feature means.

We highlight that the second argument of know is a meta-variable. The semantics of know depends on the syntactic properties of the formulas this meta-variable ranges over. Below are three cases of the definition of know(Agent, Formula). Clearly, none of them can be expressed through the other definitions of know: there are three distinct knowledge modalities.

1) know(Agent, Feature(Object)) :- const(Feature), const(Object), … This is the standard knowledge of a fact (S5). The fully uninstantiated case fails:

know(Agent, Feature(Object)) :- var(Feature), var(Object), fail.

If an Agent knows neither a Feature that an Object could possess, nor an Object that could satisfy a Feature, then the statement says nothing: the bare claim that there exist a Feature and an Object such that Feature(Object) holds is assumed to be non-informative.

2) know(Agent, Object) :- know(Agent, Feature(Object)),
    const(Feature), var(Object),
    for all AnotherObject there exists AnotherFeature
        know(Agent, (AnotherFeature(AnotherObject), Feature \= AnotherFeature,
                     Object \= AnotherObject, not AnotherFeature(Object))).

For any Object, an Agent knows it if he can identify it in the following way: the Agent knows that there exists a Feature such that Feature(Object) holds, and for any AnotherObject there exists an AnotherFeature that is satisfied by AnotherObject and is not satisfied by the Object.

There is another way to identify an Object by a unique Feature:

know(Agent, Object) :- know(Agent, Feature(Object)),
    const(Feature), var(Object),
    for all AnotherObject know(Agent, not Feature(AnotherObject)).

3) know(Agent, Feature) :- know(Agent, Feature(Object)),
    var(Feature), const(Object),
    know(Agent, exists AnotherObject Feature(AnotherObject)).

For any Feature, an Agent knows this Feature when he knows that Feature(Object) holds, and the Agent knows an AnotherObject which satisfies Feature.
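As a minimal illustration, the three modalities can be prototyped in standard Prolog. This is a sketch, not the MS system itself: the const/var guards become nonvar/1 checks, known negation is approximated by negation as failure, dispatch is done by tagging the meta-argument, and an agent's factual knowledge is stored in a hypothetical base predicate know_fact(Agent, Feature, Object) with illustrative facts.

% Hypothetical base of facts the agent knows: know_fact(Agent, Feature, Object).
know_fact(kid, full,  cup2).
know_fact(kid, empty, cup1).
know_fact(kid, red,   cup1).
know_fact(kid, red,   cup2).

% 1) Knowledge of a fact: both Feature and Object are given.
know(Agent, fact(Feature, Object)) :-
    nonvar(Feature), nonvar(Object),
    know_fact(Agent, Feature, Object).

% 2) Knowledge of an object, by the "unique Feature" variant: some known
%    Feature of Object holds for no other known object.
know(Agent, object(Object)) :-
    know_fact(Agent, Feature, Object),
    \+ ( know_fact(Agent, Feature, AnotherObject),
         AnotherObject \= Object ).

% 3) Knowledge of a concept: the agent knows at least one other object
%    satisfying the Feature.
know(Agent, feature(Feature)) :-
    know_fact(Agent, Feature, Object),
    know_fact(Agent, Feature, AnotherObject),
    AnotherObject \= Object.

Here know(kid, object(cup2)) succeeds (only cup2 is known to be full), know(kid, feature(red)) succeeds (two red objects are known), and know(kid, feature(full)) fails.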

Positive and negative introspection and the Barcan property seem to be satisfied by all three types of knowledge. The knowledge axiom is not applicable to the cases of knowledge of a concept or of an object.

To be more precise in presenting these definitions, we note that each is a clause scheme rather than a clause. The latter clause, for example, is derived from the clause scheme by instantiating Feature → feature and AnotherFeature → anotherFeature. Then the rest of the variables are instantiated in the traditional way to obtain the fact that a particular agent knows a particular object. In contrast to the first-order interpretation (the standard logic program), the latter definition requires two steps of variable instantiation and can be interpreted by a meta-reasoning system.
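As a minimal illustration of the two instantiation steps on a standard Prolog engine, the meta-variable Feature can first be bound to a predicate symbol and only then turned into an ordinary first-order goal; the predicate holds/2 and the facts below are hypothetical.

% Hypothetical facts.
red(ball1).
red(ball2).

% Two-step instantiation of the scheme Feature(Object).
holds(Feature, Object) :-
    Goal =.. [Feature, Object],   % step 1: bind the meta-variable, build the goal
    call(Goal).                   % step 2: ordinary first-order resolution

The query holds(red, X) then behaves like the instantiated clause red(X), returning X = ball1 and X = ball2.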

Now we present the definitions of how agents can acquire these three kinds of knowledge with the help of other agents.

1) To make an Agent know a Fact, another agent Kagent informs it that the Fact holds (the naive version):

know(Agent, Fact) :- not know(Agent, Fact), inform(Kagent, Agent, Fact).

In terms of reasoning about action, it looks better with the metapredicate cause:

know(Agent, Fact) :- cause(not know(Agent, Fact), inform(Kagent, Agent, Fact)).

Being more specific, we have to use the modality want(Agent, Formula) explicitly:

know(Agent, Fact) :- know(Kagent, not know(Agent, Fact)),
    want(Kagent, know(Agent, Fact)),
    inform(Kagent, Agent, Fact).
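Note that, read declaratively, the naive rule in 1) is not executable: the clause body calls the negation of its own head. A minimal operational reading (a sketch; the predicate name knows/2 and the message format are hypothetical) treats inform as an event that updates a dynamic knowledge store:

:- dynamic knows/2.

% inform(Kagent, Agent, Fact): Kagent tells Agent that Fact holds.
inform(Kagent, Agent, Fact) :-
    \+ knows(Agent, Fact),                 % the hearer did not know the Fact
    assertz(knows(Agent, Fact)),           % now the hearer knows it
    format("~w informs ~w that ~w~n", [Kagent, Agent, Fact]).
inform(_, Agent, Fact) :-
    knows(Agent, Fact).                    % informing a knower changes nothing

After the query inform(doug, bill, cheap(computer1)), the fact knows(bill, cheap(computer1)) holds in the store.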

2) To make an Agent identify an Object, another agent Kagent has to inform it of some distinguishing Feature of this Object and to convince it that every AnotherObject is distinguished by some AnotherFeature which the Object lacks:

know(Agent, Object) :- know(Kagent, not know(Agent, Object)),
    want(Kagent, know(Agent, Object)),
    inform(Kagent, Agent, Feature(Object)),
    inform(Kagent, Agent,
        (for all AnotherObject there exists AnotherFeature
            (AnotherFeature(AnotherObject),
             Feature \= AnotherFeature, Object \= AnotherObject,
             not AnotherFeature(Object)))).

The only way to make an Agent know a Feature is to explain it by example

know(Agent, Feature) :- know(Kagent, not know(Agent, Feature)),
    want(Kagent, know(Agent, Feature)),
    inform(Kagent, Agent, Feature(Object)),
    inform(Kagent, Agent,
        (there exists AnotherObject (Feature(AnotherObject),
         Object \= AnotherObject))).

Once we have the definition of what it means not to know a concept versus not to know a fact, we are able to present the following scenario.

Bill's Dilemma

There was a guy named Bill who was in quite the dilemma. His wife had disappeared and he was desperately looking for her.

He came upon a policeman and explained his situation. The policeman said, "So, can you describe your wife?"

Bill said, "I don't understand - what do you mean describe her?"

The policeman said, "You know, her features and what she looks like. For example, my wife has long blonde hair, blue eyes and she's thin."

Bill said, "Well, my wife has dark hair, dark eyes and is rather large." With a pause, Bill said, "Actually, how about we look for your wife?"

 

Bill does not know which computer to buy (he knows that he wants to buy a computer, but does not know which one):

know(bill, buy(bill, UnknownComputer)),
not know(bill, UnknownComputer).

Further, Bill is not familiar with a property of the computer he wants to buy:

know(bill, (computer(UnknownComputer),
    Feature(UnknownComputer))),
not know(bill, Feature).

Finally, Bill does not know whether Doug knows which computer to buy:

know(bill, (know(doug, buy(bill, UnknownComputer)),
    not know(doug, UnknownComputer))).

2. Axioms for pretending and believing

We first consider a simple axiom system for pretending as a modal operator, based on the traditional concept of knowledge presented elsewhere. With the modal operator of knowledge alone, one is unable to express concepts like "good will" versus "deceptive" pretending. We need the knowledge metapredicate used in the examples above to give explicit definitions of these types of pretending, in addition to the other modalities.

We denote by Pi F agent i's pretending that the fact F holds. We do not use special symbols here to express that agent i pretends for another agent j; rather, we show it indirectly.

1) General definition: an agent i pretends to an agent j that the fact F holds if he knows that j will understand the pretending: a) i knows that j knows that i pretends; b) i knows that F does not hold and that j knows that F does not hold; c) i assumes this pretend will be accepted.

Ki Kj Pi F & Ki not F & Ki Kj not F → Pi F.

2) The addressee of the pretending either accepts it (pretends that he knows the fact) or rejects it (does not know it).

Pi F → Pj Kj F v not Kj F.

3) If an agent i pretends to agent j that F1 holds and pretends to agent m that F2 holds, and j can inform m about some fact G (in particular, it could be that G = F1), then i has to keep pretending that the conjunction of F1 and F2 holds:

Kj Pi F1 & Km Pi F2 & (Kj G → Km G) → Pi F, where F = F1 & F2.

4) If an agent i pretends that F1 holds and an agent j pretends that F2 holds, and this pretending has been accepted by both of them, then both of them are aware that F1 and F2 cannot hold together:

Pi F1 & Pj F2 & Pj Kj F1 & Pi Ki F2 → Ki not (F1 & F2) & Kj not (F1 & F2).

5) An agent can pretend only about his own knowledge:

Pi Kj F → i = j.

To make the pretending axioms adequate for representing autism, we need the (modal) concept inform (or say, write, etc.). Then the general definition looks like:

1') pretend(Who, Fact) → inform(Who, Whom, Fact),
    know(Who, know(Whom, pretend(Who, Fact))),
    know(Who, not Fact), know(Who, know(Whom, not Fact)).

2') pretend(Who, Fact) → inform(Who, Whom, Fact),
    (pretend(Whom, know(Whom, Fact)); not know(Whom, Fact)).

3') inform(Agent, Agent1, F1), pretend(Agent, F1),
    inform(Agent, Agent2, F2), pretend(Agent, F2),
    (inform(Agent1, Agent2, F1); inform(Agent2, Agent1, F2)) →
    pretend(Agent, F1 & F2).

6) The definition of pretending that preserves consistency: if Who has started to pretend that SmthBefore held, and he knows that Whom knows that SmthBefore implies Smth, then Who has to continue by pretending that Smth holds.

pretend(Who, Whom, Smth) :- pretend(Who, Whom, SmthBefore),
    know(Who, know(Whom, SmthBefore -> Smth)).
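A minimal executable sketch of this consistency-preserving closure, where pretend_base/3 stands for the initially declared pretence and knows_implies/4 approximates know(Who, know(Whom, SmthBefore -> Smth)); both names, and the facts, are hypothetical:

% Hypothetical initial pretence and shared implication.
pretend_base(kid, exper, full(cup1)).
knows_implies(kid, exper, full(cup1), wet_inside(cup1)).

pretend(Who, Whom, Smth) :- pretend_base(Who, Whom, Smth).
pretend(Who, Whom, Smth) :-
    pretend(Who, Whom, SmthBefore),
    knows_implies(Who, Whom, SmthBefore, Smth).

The query pretend(kid, exper, wet_inside(cup1)) succeeds: once the kid pretends the cup is full, he must also pretend its consequences.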

The MS definitions of pretending introduce multiple meanings by means of know and inform, which is essential for expressing autistic pretending.

Now we approach the definitions for believe:

believe(Who, Whom, Matter) :- not know(Who, Matter),
    not know(Who, not Matter),
    inform(WhoKnows, Who, Matter),
    know(Who, know(Whom, Matter)).

 This is the change of belief:

(trust(Agent, AnySource); not believe(Agent, not Fact)),
    inform(AnySource, Agent, Fact) →
    change(Agent, believe(Agent, WrongFact), believe(Agent, Fact)),
    believe(Agent, Fact).

We need to define the companion concept trust: there is (was) no Matter such that Who believed Whom that it held and then discovered that it does not hold:

trust(Who, Whom) :- not ( inform(Whom, Who, Matter),
    believe(Who, Whom, Matter),
    know(Who, not Matter) ).
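Under negation as failure this definition runs directly over event logs. In the sketch below, informed/3, believed/3 and knows_false/2 are hypothetical records of what was reported, believed and later refuted:

informed(doug, bill, cheap(computer1)).
believed(bill, doug, cheap(computer1)).
knows_false(bill, cheap(computer1)).     % bill discovered the report was false

trust(Who, Whom) :-
    \+ ( informed(Whom, Who, Matter),
         believed(Who, Whom, Matter),
         knows_false(Who, Matter) ).

Here trust(bill, doug) fails, while trust(bill, mary) succeeds for lack of counter-evidence.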

Important hierarchical features of belief have recently been raised. The fact that an agent a prefers a belief B1 to a belief B2 at time t is expressed by means of the special predicate pref(a, B1, B2, t), for which the properties of an antisymmetric binary relation hold. Extending the Metalanguage Support style, we consider prefer as a regular metapredicate:

for all Fact (trust(Agent, Agent1, Fact), not trust(Agent, Agent2, Fact)) →
    believe(Agent, prefer(Agent, believe(Agent1, Fact1), believe(Agent2, Fact2))).

We explicitly specify that the Agent changes an old belief into a new one in case it is about the same object:

prefer(Agent, believe(Agent, Feature1(Object)), believe(Agent, Feature2(Object))).
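A minimal executable reading of this same-object belief change, combining the change-of-belief rule above with the same-object constraint; "about the same object" is approximated by matching the predicate symbol and its first argument, and all names are hypothetical:

:- dynamic believes/2.

trusts(agent, source).
believes(agent, locate(ball, table)).    % the old, possibly wrong belief

revise(Agent, Source, Fact) :-
    trusts(Agent, Source),
    Fact =.. [Feature, Object, _],
    Old  =.. [Feature, Object, _],       % any old belief about this Object
    retractall(believes(Agent, Old)),
    assertz(believes(Agent, Fact)).

After the query revise(agent, source, locate(ball, box)), the store contains believes(agent, locate(ball, box)) and the old belief is gone.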

 

We proceed to the definition of "good will" pretending: Who wants Whom to perform an Action, and Who knows that if Whom knows that Who possesses (a property) Matter, then Whom performs that Action when Who informs Whom about Matter.

pretend(Who, Whom, Matter(Who)) :-
    want(Who, inform(Who, Whom, Matter(Who))),
    know(Who, not Matter(Who)),
    know(Who, know(Whom, not Matter(Who))),
    inform(Who, Whom, Matter(Who)).

This type of pretending is required for modeling autism.

Intentional pretending turns into deceiving when Whom does not want to perform this Action and Who knows that:

pretend(Who, Whom, Matter(Who)) :- want(Who, action(Whom, Action)),
    know(Who, not Matter(Who)),
    know(Who, not want(Whom, action(Whom, Action))),
    know(Who, (know(Whom, Matter(Who)) -> action(Whom, Action))),
    inform(Who, Whom, Matter(Who)).

Compare this definition of "cheating by means of pretending" with the Cheat definition from Section 3.3: one needs to distinguish pretending from deceiving.

We show what it means to perceive (understand) somebody else's pretending:

perceive(Whom, pretend(Who, Whom, Matter)) :-
    pretend(Who, Whom, Matter),
    inform(Who, Whom, Matter(Who)),
    know(Whom, not Matter(Who)),
    know(Whom, want(Who, action(Whom, Action))).

And what it means to believe somebody's pretending (i.e., not to understand it):

believe(Whom, pretend(Who, Whom, Matter)) :- pretend(Who, Whom, Matter),
    inform(Who, Whom, Matter(Who)),
    believe(Whom, Matter(Who)),
    not know(Whom, want(Who, action(Whom, Action))).
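The contrast between the two outcomes can be prototyped as follows; this is a sketch in which the event records informed/3 and knows_false/2 are hypothetical, and the want component is omitted for brevity:

informed(mary, john, rich(mary)).        % mary pretends to john that she is rich
knows_false(john, rich(mary)).           % john knows that she is not

perceives_pretending(Whom, Who, Matter) :-
    informed(Who, Whom, Matter),
    knows_false(Whom, Matter).           % the addressee can refute the Matter

believes_pretending(Whom, Who, Matter) :-
    informed(Who, Whom, Matter),
    \+ knows_false(Whom, Matter).        % the addressee cannot refute it

Here perceives_pretending(john, mary, rich(mary)) succeeds; without the knows_false fact, believes_pretending would succeed instead: the addressee takes the pretence at face value.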

3. Autism revealing scenarios

Below are the texts of the autism testing scenarios and their MS representations. These scenarios contain a description of what happened to some agents, together with questions addressed to the autistic/control agent. We consider these questions as "reaction stimulation" and do not present them explicitly in the formal scenario.

The child is encouraged to "fill" two toy cups with "juice" or "tea" or whatever the child designated the pretend contents of the bottle to be. The experimenter then says, "Watch this!", picks up one of the cups, turns it upside down, shakes it for a second, then replaces it alongside the other cup. The child is then asked to point at the "full cup" and at the "empty cup". (Both cups are, of course, really empty throughout.)

The knowledge of the concept turn_upside_down:

(KU) know(Kid, (turn_upside_down(Exper, Cup) → not full(Cup))).

(H1) not full(cup1), not full(cup2).

(H2) pretend(kid, _, (full(cup1), full(cup2))).

(H3) turn_upside_down(exper, cup1).

(R) pretend(kid, _, (not full(cup1), full(cup2))).
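A sketch of how the response (R) follows from (H2), (H3) and (KU), with hypothetical predicate names:

pretend_full(kid, cup1).                 % (H2) the pretend contents
pretend_full(kid, cup2).
happened(turn_upside_down(exper, cup1)). % (H3)

% (KU): turning a cup upside down makes it not full, also inside the pretence.
pretends(kid, not(full(Cup))) :-
    pretend_full(kid, Cup),
    happened(turn_upside_down(_, Cup)).
pretends(kid, full(Cup)) :-
    pretend_full(kid, Cup),
    \+ happened(turn_upside_down(_, Cup)).

The query pretends(kid, R) returns R = not(full(cup1)) and R = full(cup2), i.e. exactly the expected response (R).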

The child was introduced to a doll character, Billy, and three pieces of toy furniture: a bed, a dressing table and a toy-box. A story was enacted for the child in which Billy had a ball. Billy put his ball on the dressing table, then went downstairs for breakfast. While Billy was away, his mother came into his room, picked up the ball and put it in the toy-box. The child was then asked four questions: control question 1, "Where did Billy leave his ball?"; the know question, "Does Billy know where his ball is?"; the think question, "Where does Billy think his ball is?". We used a test question rather than a prediction-of-behavior question, to be closer to asking directly about a mental representation. Finally, control question 2, "Where is the ball really?"

(KUcontrol) (trust(Agent, AnySource); not believe(Agent, not Fact)),
    inform(AnySource, Agent, Fact) →
    change(Agent, believe(Agent, WrongFact), believe(Agent, Fact)),
    believe(Agent, Fact).

(KUautistic) believe(Agent, Fact) → believe(AnotherAgent, Fact).

(KU) see(Agent, Fact) → inform(AnySource, Agent, Fact).

 

(H.1) put(billy, ball, table).

(H.2) see(agent, put(mother, ball, box)).

(R?) believe(agent, believe(billy, locate(ball, Where))).

(Rcontrol) believe(agent, believe(billy, locate(ball, table))).

(Rautistic) believe(agent, believe(billy, locate(ball, box))).
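A sketch of how the two knowledge units produce the two answers; mode/1 and the remaining predicate names are hypothetical:

:- dynamic mode/1.

saw(billy, put(billy, ball, table)).     % (H.1): Billy saw only his own action
really_at(ball, box).                    % after (H.2) the ball is really in the box

% (KUcontrol): Billy's belief tracks what Billy was informed of (here, saw).
believes_location(billy, ball, Place) :-
    mode(control),
    saw(billy, put(_, ball, Place)).
% (KUautistic): any agent's belief equals the real state of the world.
believes_location(billy, ball, Place) :-
    mode(autistic),
    really_at(ball, Place).

After assertz(mode(control)) the query believes_location(billy, ball, P) returns P = table, matching (Rcontrol); under mode(autistic) it returns P = box, matching (Rautistic).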

The autism phenomenon seems to be important for choosing a formalism adequate for human intelligence. A model of the brain and its specific reasoning, and, in particular, a model of the autistic brain, is hardly feasible to build. But the difference between these two models fits into the limited formalism of just two concepts, believing and pretending, so this restricted component of brain activity can be subjected to logical modeling. The experiments on agent scenario understanding show that various autistic children lack (formally) the same meanings of believing and pretending.