Chapter 9: Future Work and Conclusions

Capturing daydreaming computationally is a difficult task. Nonetheless, this book has shown how behavior resembling protocols of human daydreaming may be produced using a few simple but powerful mechanisms.

In response to external and internal circumstances, the program activates, processes, and terminates multiple concerns. Each concern is an instance of a personal goal or ongoing life objective of the program.

Concerns are motivated by data structures called emotions which determine which concern to process at any given time. The strengths of these emotions, while initially set according to the intrinsic importances of particular personal goals, are subject to dynamic modification as unexpected consequences of a concern are recognized.
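
To make the mechanism concrete, here is a minimal Python sketch of emotion-driven concern selection; the class, goal names, and strength values are hypothetical illustrations, not the program's actual data structures:

```python
from dataclasses import dataclass

@dataclass
class Concern:
    """An active instance of a personal goal, motivated by an emotion."""
    goal: str
    strength: float  # current motivating emotional strength

    def adjust(self, delta: float) -> None:
        # Strengths start at the intrinsic importance of the personal goal,
        # then are modified as unexpected consequences are recognized.
        self.strength = max(0.0, self.strength + delta)

def select_concern(concerns):
    """Process the concern whose motivating emotion is currently strongest."""
    return max(concerns, key=lambda c: c.strength) if concerns else None

concerns = [Concern("employment", 0.4), Concern("lovers", 0.7), Concern("revenge", 0.5)]
top = select_concern(concerns)            # initially, "lovers" dominates
concerns[0].adjust(+0.5)                  # a surprising consequence reorders priorities
```

In this toy version, selection is a simple maximum over strengths; the point is only that which concern gets processed at any moment falls out of dynamically modified emotional strengths rather than a fixed agenda.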

The sequences of both fanciful and realistic events which make up daydream scenarios are produced using the building blocks of planning and inference rules and episodes. Planning rules specify methods of varying degrees of plausibility for breaking down a subgoal into further subgoals, while inference rules specify consequences of various situations.

Daydream scenarios are generated by the planning mechanism which repeatedly applies planning and inference rules to a selected concern. The planning mechanism is employed to generate possible behaviors of the daydreamer as well as the possible behaviors of others.
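
The interplay of the two rule types might be sketched as follows: a toy depth-first decomposer over invented rule tables. The actual planning mechanism is considerably more elaborate (rule choice, contexts, backtracking), so treat every name here as a hypothetical illustration:

```python
# Hypothetical rule tables. A planning rule decomposes a goal into
# subgoals with some plausibility; an inference rule asserts a
# consequence of a situation.
PLANNING_RULES = {
    "revenge": [(["attain-prestige", "reject-date"], 0.3)],
    "attain-prestige": [(["become-movie-star"], 0.2)],
}
INFERENCE_RULES = {
    "become-movie-star": ["star-has-positive-attitude"],
}

def expand(goal, facts):
    """Depth-first decomposition of a goal into primitive actions,
    applying inference rules to accumulate consequences along the way."""
    for consequence in INFERENCE_RULES.get(goal, []):
        facts.add(consequence)
    rules = PLANNING_RULES.get(goal)
    if not rules:
        return [goal]                       # primitive: no further decomposition
    subgoals, _plausibility = rules[0]      # a real planner would choose among rules
    plan = []
    for sub in subgoals:
        plan.extend(expand(sub, facts))
    return plan

facts = set()
plan = expand("revenge", facts)
```

Low-plausibility planning rules are what admit fanciful scenarios alongside realistic ones; here the plausibility is carried but unused, since the sketch always takes the first rule.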

A mechanism for modifying existing daydream scenarios is the mutation of the objectives of unsuccessful action subgoals. Episodes are aggregates of rule instances applied in a concrete real or imagined situation. Some episodes are hand-coded and provided to the program as input, while others are generated as the program daydreams and interacts in the simulated real world. Episodes are indexed in episodic memory under subgoal objectives, emotions, persons, and other features. Once retrieved, past episodes may be applied, in whole or in part, to a new concern by the analogical planning mechanism. Episodes reduce search in planning and enable scenario details to be filled in.
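
Indexing and retrieval of episodes could be sketched roughly as follows; the feature scheme, the episode names, and the overlap-count scoring are illustrative assumptions rather than the book's actual memory organization:

```python
from collections import defaultdict

class EpisodicMemory:
    """Episodes indexed under features such as subgoal objectives,
    emotions, and persons."""
    def __init__(self):
        self.index = defaultdict(list)

    def store(self, episode, features):
        # Index the episode under every one of its features.
        for f in features:
            self.index[f].append(episode)

    def retrieve(self, cues):
        # Rank episodes by how many of the cue features they share.
        scores = defaultdict(int)
        for f in cues:
            for ep in self.index[f]:
                scores[ep] += 1
        return max(scores, key=scores.get) if scores else None

memory = EpisodicMemory()
memory.store("EMPLOYMENT1", ["goal:employment", "emotion:fear"])
memory.store("LOVERS1", ["goal:lovers", "emotion:anger", "person:movie-star"])
best = memory.retrieve(["goal:lovers", "person:movie-star"])
```

The retrieved episode would then be handed to the analogical planning mechanism, which applies it in whole or in part to the new concern.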

The serendipity mechanism recognizes the unexpected applicability of some possibility (external or internal situation or retrieved episode) related to one concern, to another active concern. The serendipity mechanism conducts an intersection search, from a point associated with a new possibility to a point associated with an active concern, through the space of currently accessible planning rules. Found paths are then verified by progressive unification and employed through analogical planning. Once a serendipity occurs, the resulting plan is stored in episodic memory so that in the future a similar plan can be generated without having to chance upon a similar serendipity.
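
The intersection search at the heart of the serendipity mechanism can be sketched as a breadth-first search over a hypothetical rule graph; verification by progressive unification and the subsequent analogical planning are omitted, and the states named here are invented for illustration:

```python
from collections import deque

# Hypothetical rule graph: an edge means some currently accessible
# planning rule connects the two states (achieving one is a method
# for achieving the other).
RULE_LINKS = {
    "lamp": ["light"],
    "light": ["see"],
    "see": ["find-earring"],
}

def intersection_search(possibility, concern_goal):
    """Breadth-first search from a point associated with a new
    possibility toward a point associated with an active concern,
    through the space of planning rules. Returns the found path,
    which would then be verified by progressive unification."""
    frontier = deque([[possibility]])
    while frontier:
        path = frontier.popleft()
        if path[-1] == concern_goal:
            return path
        for nxt in RULE_LINKS.get(path[-1], []):
            if nxt not in path:
                frontier.append(path + [nxt])
    return None

path = intersection_search("lamp", "find-earring")
```

Once such a path is verified and employed, the sketch's analogue of the book's mechanism would store the resulting plan in episodic memory, so that a similar plan can later be generated without chancing upon a similar serendipity.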

A collection of daydreaming goals augments the program’s personal goals and initiates useful daydreaming activity. The daydreaming goals of rationalization (generating scenarios to rationalize a failure), roving (shifting attention away from an unpleasant failure), and revenge (generating scenarios in which revenge is attained) model the daydreaming which humans perform in order to reduce negative emotional states. The daydreaming goals of reversal (generating scenarios in which a past or imagined future failure is avoided), recovery (generating scenarios in which a goal which failed in the past succeeds in the future), rehearsal (generating possible scenarios for achieving an active goal), and repercussions (exploring and planning for hypothetical future situations) enable the program to improve its future external behavior—to learn.

Learning through daydreaming is accomplished by adding daydreamed episodes to memory: Various alternative scenarios involving a given situation are generated, evaluated as to their realism and desirability, and stored as episodes. When a similar situation arises in the future, the best and most similar retrieved episode is applied to that situation through analogical planning. Generation of hypothetical future scenarios improves the behavior of the program since negative consequences of various courses of action can be detected in advance and thus avoided.
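
Selecting the best and most similar stored scenario might be sketched as follows; the evaluation weights, scenario names, and scores are arbitrary illustrations:

```python
def evaluate(scenario):
    """Score a daydreamed scenario on realism and desirability
    (equal weights here are an arbitrary illustration)."""
    return 0.5 * scenario["realism"] + 0.5 * scenario["desirability"]

def best_episode(stored, similarity_to_new):
    # Prefer episodes that are both highly rated and similar to the
    # current situation.
    return max(stored, key=lambda s: evaluate(s) + similarity_to_new(s))

scenarios = [
    {"name": "REHEARSAL1", "realism": 0.9, "desirability": 0.6},
    {"name": "REVENGE1",   "realism": 0.2, "desirability": 0.9},
]
# With no similarity signal, the more realistic-and-desirable scenario wins.
choice = best_episode(scenarios, lambda s: 0.0)
```
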

Learning is also accomplished by adding new planning and inference rules: In response to a real or imagined failure, the reversal daydreaming goal determines what actions might have been taken in order to avoid that failure. Rules are then created to anticipate similar failures in the future and to carry out appropriate actions to prevent those failures.

In this chapter, we present: (a) possible extensions to, and applications for, DAYDREAMER, (b) the limitations of DAYDREAMER and how they arise, (c) how these limitations might be overcome in the future, and (d) final closing remarks.

    1. Eventual Applications and Extensions


The components of DAYDREAMER described above are largely independent of the particular interpersonal domain of the program. Thus DAYDREAMER provides a framework for the construction of daydreaming systems in other domains. The components of DAYDREAMER which are dependent upon the domain are: (a) planning and inference rules, (b) personal goals, and (c) episodes. For other domains it may be necessary to add new daydreaming goals to the system. However, the existing set enables the system to daydream—which provides useful functions such as learning from past mistakes and anticipation of future experiences.

In this section, we consider the following domains for the potential future application of DAYDREAMER: autonomous robots, operating systems, creative writing, conversation, art and music, psychotherapy and psychiatry, and education and games.

      1. Autonomous Robots

Currently, DAYDREAMER interacts in a simulated world whose behavior is determined in part by a human user who types in English phrases describing actions and states in that world. Eventually, however, DAYDREAMER could be employed as the controlling program for an autonomous robot interacting in the real world (or some part of the real world). A self-contained robot needs to decide what to do next. Multiple personal goals and motivating emotions provide this function. A household robot, for example, might have such ongoing objectives as: recharging batteries, pleasing owner, preparing meals, keeping house clean, watering plants, and so on. Emotions would be tuned to the intrinsic importance as well as dynamic urgency of a given objective or need. Such an autonomous robot would have to cope with situations, and improve at coping with various situations, without explicit human assistance. If the owner requested Shao Mai for breakfast the following morning, the robot might start daydreaming about preparing them and realize that some ingredient is missing and needs to be purchased. If the owner were about to leave for a week, the robot might daydream about one of its batteries failing and as a result tell the owner to leave it a spare. Although such a household robot is beyond what can be achieved with current technology, a rudimentary daydreaming robot with simple motion and object sensors and a limited set of spoken input commands could be constructed today.

      2. Operating Systems

The notion of spending free time planning for future events can be applied to computer operating systems. When otherwise unoccupied with processing, the system could daydream about future operations of the user in order to perform those operations in part or otherwise prepare for those operations. For example, while unoccupied with processing in the early morning hours, the system might anticipate two of its heavy users creating large data files the following day whose size would exceed the remaining space in a certain file system. It might then relocate one of the users to another file system, or send messages to the users informing them of the potential problem. Even during normal daytime processing, the system might anticipate the use of a particular file (such as a mail file upon logging in) and start loading that file. Any operations with side effects (other than those involving performance) carried out during daydreaming would be tentative and subject to rollback using a mechanism such as nested transactions (E. T. Mueller, 1983). In general, maintaining consistency of the system in the face of daydreaming is an interesting topic for future research.
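
The rollback of tentative side effects might be sketched as follows: a toy single-level write-log transaction, standing in for (but much simpler than) the nested-transaction mechanism cited above. The store layout and keys are invented for illustration:

```python
class Transaction:
    """Tentative side effects performed while 'daydreaming' are logged
    and can be rolled back if the anticipated operation never happens."""
    def __init__(self, store):
        self.store = store
        self.log = []

    def write(self, key, value):
        # Remember the old value (None if the key was absent) before writing.
        self.log.append((key, self.store.get(key)))
        self.store[key] = value

    def rollback(self):
        # Undo writes in reverse order.
        for key, old in reversed(self.log):
            if old is None:
                self.store.pop(key, None)
            else:
                self.store[key] = old
        self.log.clear()

fs = {"/u/alice": 40}           # hypothetical per-user disk usage, in MB
tx = Transaction(fs)
tx.write("/u/alice", 90)        # speculatively perform the anticipated operation
tx.rollback()                   # the anticipated operation never happened
```

A real mechanism would also need nesting and concurrency control, which is precisely the consistency question flagged as future research.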

      3. Creative Writing

Since human daydreaming often involves the generation of creative possibilities, daydreaming will be a useful component of a creative computer program. Constructing a program which assists the user in solving a problem by suggesting and organizing the application of various general brainstorming (Osborn, 1953), lateral thinking (de Bono, 1970), or conceptual blockbusting (J. L. Adams, 1974) techniques is relatively easy. However, a version of DAYDREAMER equipped with suitable representations of the problem domain would be able to suggest specific solutions to a problem.

The plot-oriented aspect of daydreaming (Klinger, 1971, p. 142; S. Freud, 1908/1962) suggests the potential application of DAYDREAMER to the creative domain of story invention.1 The plots of RATIONALIZATION1 and REVENGE1, for example, resemble those of several recent motion pictures. DAYDREAMER could provide initial ideas for stories, animations, or live-action films, and once begun, suggest continuations, modifications, or further possibilities.

The program would have to be extended with appropriate personal and daydreaming goals for story invention. As input, the program would take any constraints of the user, such as characters and their personality traits, initial situations, desired outcome situations, points to convey, and the like. As output, the program would produce several stories satisfying—or even not satisfying—those constraints (and a modified episodic memory of the program for use in generating future stories). The output stories could be generated in English or even fed to a visual invention system for graphical animation.

DAYDREAMER could also be used at an entirely different level in story invention: as a means of generating accounts of the internal thought sequences and daydreams of story characters.

1 Metz (1982), however, likens daydreaming to the viewing of a film.

      4. Conversation

J. L. Singer (1981) has suggested that daydreaming allows more creative conversation and social interaction. An example of this may be found in a protocol of a conversation recorded by Hobbs and D. A. Evans (1980) (although such creative aspects were not the point of their work). The subject, discussing her daughter’s entry to an art contest, interjects the following daydream-like scenario into the conversation:

You send it to a P.O. box. What happens if you have dishonest mailmen and they see all these things going to an art contest, so they open it up and change it so that it’s being sent from them? (p. 357)

Daydreaming, both prior to and during a conversation, will be useful in conversational computer systems to liven up the quality of the interaction. Furthermore, it has been observed (Schank, 1977) that what one may say in a conversation is a subset of the things that come to mind. Reik (1948) conversely observes that “thoughts are speeches not made, or condemned to silence” (p. 208).

Although modeling “what comes to mind” in conversational contexts involves processes which are not addressed by DAYDREAMER, a system which is able to daydream in conversational situations may indeed provide one component of a conversational computer system.

      5. Art and Music

Artists, composers, choreographers, novelists, and poets daydream in images of their respective media. For example, Housman (1952, p. 91) reports new lines of verse flowing through his mind while taking a walk. According to Copland (1980), composers have “the ability to imagine sounds in advance of their being heard” (p. 24). Shapero (1946/1952) writes that “if [a composer] focuses his attention on a definite key and beats mentally in a chosen meter, musical images will be set in motion in his mind, and the entire musical texture generated in this way” (p. 52).

DAYDREAMER may be adapted for applications in various art forms by the provision of an appropriate set of personal goals, daydreaming goals, planning and inference rules, and episodes. This task is naturally accomplished by the artist, who would probably start by providing the program with representations of the artist’s own style and techniques, including appropriate visual (Arnheim, 1974), musical, or other principles, works of past artists, and so on. The artist would run the program, examine its products, and revise the rules as appropriate. Of course, in constructing and revising representations, the artist need not be limited to the existing style and technique of that artist. Rather, the artist may experiment with new ideas whose expression or execution in conventional media would be impractical or impossible. Thus the artist might evolve a new style possible only in the computational medium.

A version of DAYDREAMER equipped with suitable representations might be applied in the following ways: First, the program could function as an artist’s apprentice: It could be used to generate ideas which the artist might not otherwise have thought of, or to execute ideas which the artist might not otherwise have been able to execute. These ideas could then be incorporated into artistic works or discarded as the artist saw fit. Second, if the program evolved to a point where the artist considered its output as valid works of art, the program could function as an artist’s proxy: Works produced by the program would be fully authorized by the artist. In this case, one might adopt the view that it is not the output of the program which is the work of art, but the program itself and its entire range of potential output. Thus such a program might function as an art object.

Is it possible to create art in the medium of computation? Can one internalize and habituate oneself to this medium to the extent necessary for artistic expression? There are certainly obstacles to algorithmic art at present: The languages available for expressing computational ideas are generally too rigid, too concerned with details, too fragile in the face of incompleteness and inconsistency. Not every artist will wish to set aside years of training in traditional media in order to pursue computation. Perhaps we will have to wait until future generations of artists who as children learned about frames, production rules, and unification along with the three Rs.

Many artists would probably choose to withhold their programs from distribution in order to prevent others from learning how their programs operate, constructing derivative versions, or producing further works using the program. Nonetheless, there is a need to avoid duplication of effort; some artists and programmers would probably choose to make their basic representations of art and music theory and history available to others. A potential danger, of course, with the wildfire-like spread of artistic computer programs, is that everyone’s work would become homogeneous (not to mention the problems of copyrights and royalties). Artists would therefore avoid basing their work on previous programs—except for a basic set of tools sufficiently rich to enable many artists to construct unique and diverse works (just as the tools of conventional media enable the production of new and different works).

If an artist’s program were distributed to others, that program and its outputs would be more vulnerable to analysis than are conventional works of art. After all, the program provides an explicit, complete model of how its outputs are generated. Would examination of such a program reduce the apparent creativity of its outputs? First of all, the program might be of such a level of complexity that it would not be possible to understand exactly how it produces a given output. But even if one were able to understand how the program worked, one would have to realize, as with any creative product, that it is much easier to understand a product after the fact than to have thought of it in the first place.

If a new artistic program is derived from a program written by another artist, is that program then a joint work? When, if ever, does a derivative work cease to be the work of the original artist? Is computational art culture best developed through the sharing of programs, or should interaction among artists be restricted to observation of program output behavior?2

Although the productions of an artistic program should not consist of mere “variations,” there would nonetheless appear to be fundamental limits on the quantity of creative products which could be generated. That is, the more we ask a program to generate outputs, the less creative those outputs are likely to seem. Does generating a large quantity of outputs automatically negate the value of those outputs (or might the outputs taken as a collection still be considered creative)? Can it be that artistic value is subject to some sort of law of conservation—that only a certain number of objects or “chunked” classes of objects may be considered creative at a time? We leave questions such as these to art theorists and to future experience with artistic computer programs.

Actually, artists and composers have already begun to experiment with such programs: Hiller and Isaacson (1959) constructed a program which was used to compose the Illiac Suite for String Quartet; Xenakis (1971) employed computers to assist in his stochastic compositions; H. Cohen (H. Cohen, B. Cohen, & Nii, 1984) wrote a program able to create its own drawings based on heuristic rules; Whitney (1980) has used computers to execute a form of animated visual art based on an analogy with harmonic relationships in music; and so on (see, for example, the reviews and discussions of Hiller, 1984; R. E. Mueller, 1983; Sofer, 1981; Leavitt, 1976; Cope, 1976, pp. 77-114). What DAYDREAMER adds to this previous work is a framework for the construction of creative programs; this framework consists of the following components: multiple creative tasks directed by emotions and daydreaming goals, recognition and exploitation of serendipitous relationships among tasks, fanciful possibility generation through arbitrary mutation and other methods, storage and later application of previous experiences, learning from mistakes, and knowledge representation in the form of both generic rules and episodes. Whereas in DAYDREAMER, episodes are employed in generating new daydreams, in an advanced artistic program, episodes might function in a contrary fashion: The system might actually try to avoid creating new works similar to episodes representing previous works.

      6. Psychotherapy and Psychiatry

DAYDREAMER could eventually be employed as a model to test various strategies for treating depression (see, for example, Beck, 1967) and for investigating the processes that might lead to depression. In addition, certain modifications to DAYDREAMER might lead to behavior resembling various pathological conditions of humans.

2 R. E. Mueller (1967, pp. 273-274) discusses similar issues.

What is the long-term impact on the behavior of DAYDREAMER of various sets of daydreaming strategies? Which combinations are more advantageous and which are less so? The RATIONALIZATION, ROVING, and REVENGE daydreaming goals serve the function of reducing negative emotional states. What would happen to the behavior of DAYDREAMER in the long run if one or more of these goals were removed? What would happen if these strategies were applied excessively? If, for example, RATIONALIZATION were removed, but ROVING were retained, the program would reduce negative emotions but never alter the negative emotions associated with episodes. It might, therefore, remain depressed as a result of “repressing” its negative emotions—immediately diverting attention from negative emotions and not coping with them through long-term modifications such as those provided by RATIONALIZATION.


If all daydreaming goals were removed, DAYDREAMER would be unable to improve its behavior in response to performance mode experiences, plan for future ones, or reduce negative emotional states. It would most likely end up in a negative state. If performance mode were removed, DAYDREAMER would never be able to achieve its personal goals and would therefore also end up in a negative state.

Perhaps it is when certain daydreaming strategies get out of hand that personality disorders such as paranoia begin to develop, as suggested by the following anecdote (reported by Dyer, 1983c):

A man was driving along a lonely road late at night when his car ran out of gas. He remembered seeing a farm house a mile back, and so started walking toward it. As he walked along he thought: “It’s pretty late. If someone were to awaken me at this hour, I might be pretty annoyed.” He kept on thinking along these lines: “The farmer will have to get dressed and siphon gas out of his tractor for me. He may find that very inconvenient. . . ” As he approached the farmhouse he became more and more annoyed.

The farmer was awakened by loud knocking at his door. When he went down to answer it, there stood a man who barked: “Who needs your stupid gas anyway!” and then stomped off into the night. (Dyer, 1983c, p. 76)

Daydreaming can result in the excessive raising or lowering of expectations; without appropriate controls, daydreaming can cause one to lose touch with reality. In DAYDREAMER, such an effect could be achieved through the removal of reality assessment: The program would then generate external behavior in accordance with unrealistic daydreams; its expectations of the world’s behavior in response to its own would be too high, and it would fail at achieving its personal goals in the short and long term.


Although worrying has its disadvantages, as one can see from the above story, worrying also serves the useful function of anticipating possible future events and planning for them. There is a subtle line between adaptive and maladaptive instances of daydreaming. J. L. Singer (1975, pp. 180-204) discusses the relationship between daydreaming and psychopathology in some detail.

      7. Education and Games

There are potential applications of DAYDREAMER in education and games for children and adults. Just as LOGO (Papert, 1980) enables one to experiment with the consequences of a few graphics primitives, a suitably simplified DAYDREAMER (perhaps using a representation based on English phrasal patterns instead of slot-filler objects) would enable one to enter a few daydreaming rules, and run the program to see what it generates. DAYDREAMER could eventually be used as part of a shared daydreaming game in which the user and computer would take turns in expanding out possible scenarios in some domain.

    2. Shortcomings of the Program

DAYDREAMER cannot daydream for a long time and cannot generate many novel sequences. Unexpected behaviors of the program were more useful in understanding possible daydreaming pathologies and flaws in the details of the mechanism, than they were for generating truly novel solutions to interpersonal problem situations.

One unexpected behavior was an infinite daydreaming loop. Before a mechanism was added to inhibit the activation of multiple top-level goals with the same objective, the following cycle occurred: The REVERSAL daydreaming goal would generate an alternative scenario in which the first of two personal goal failures resulting from an earthquake was avoided. (REVERSAL is only able to plan to avoid one failure at a time in the current version of DAYDREAMER.) Since the second personal goal still failed in this scenario, a new REVERSAL daydreaming goal would be activated to avoid the second failure. This in turn resulted in a scenario in which failure of the second personal goal was avoided, but failure of the first still occurred. Thus another REVERSAL daydreaming goal would be activated for the first personal goal, and the cycle would repeat indefinitely.
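
The fix eventually adopted, inhibiting activation of multiple top-level goals with the same objective, amounts to a guard of roughly this form; the function and objective names are hypothetical illustrations:

```python
active_objectives = set()

def activate_goal(objective, queue):
    """Inhibit activation of a second top-level goal with the same
    objective, breaking the REVERSAL cycle described above."""
    if objective in active_objectives:
        return False              # already active: inhibited
    active_objectives.add(objective)
    queue.append(objective)
    return True

queue = []
activate_goal("reversal:avoid-failure-1", queue)
activate_goal("reversal:avoid-failure-2", queue)
repeated = activate_goal("reversal:avoid-failure-1", queue)  # inhibited
```

With the guard in place, the second attempt to reactivate REVERSAL for the first failure is simply refused, so the loop cannot restart.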

A flaw that was discovered in the system was its failure to terminate a REHEARSAL daydreaming goal once the activity to be rehearsed was being carried out in reality. Thus the system might continue to rehearse an action after it had already been completed. This flaw in fact led to an unanticipated, but valid, serendipity: When “the waiter serves Guy” was received as input during an actual M-DATE experience, this action was detected as serendipitously applicable to a still active REHEARSAL concern for that M-DATE.

Whether the program’s various mechanisms will be sufficient for the generation of truly novel daydreams as well as long-term learning and emotion regulation remains to be seen. The length of continuous nonredundant daydreaming and its degree of novelty are limited in the current program by several interacting factors:

  • Set of rules and episodes: This set is less than some “critical mass” (if there is such a thing) necessary for the generation of an endless variety of daydreams.

  • Susceptibility to minor bugs: If a rule or episode is coded incorrectly, daydreams which could be generated by that rule or episode are not generated, or the future behavior of the program is adversely affected. Unification, the basis of most of the program’s operations, is very sensitive to the exact form of representations.
  • Speed of the program: By its very nature, daydreaming consumes processing time, and a supercomputer was not available. Although many optimizations have been employed, such as rule chaining and indexing of facts in contexts, the program is still quite slow. It takes many hours to find out the results of a given addition to the program. In conjunction with the above problem involving minor bugs, this makes it difficult to add rules and episodes.
  • Degree of program tailoring: Whenever the program fails to produce a desired daydream, an incorrect rule or mechanism is fixed.

Although it is important for the program mechanisms to function properly, it is less clear to what extent one should be permitted to modify the set of rules. As one continually adjusts rules to attain some desired behavior of the program, one gets the feeling that it is the programmer, rather than the program, who is carrying out the process of search.

When program execution is slow, this advance search by the programmer speeds debugging of the essential components of the program. Unfortunately, the more the program is tailored in advance by the programmer, the less likely it is to generate novel products.

The speed of the program could be improved to some degree through various modifications. For example, contexts never to be used again in the future could be destroyed and the storage reclaimed. Such contexts consume time during garbage collection and may also slow the program down through increased paging because the contexts are scattered in virtual memory. Ultimately, DAYDREAMER must be reimplemented on a machine with at least the power of a Connection Machine (Hillis, 1985).

The program’s susceptibility to minor bugs could be lessened through the use of relaxed unification. Techniques for relaxed unification include: ignoring variable type restrictions, ignoring a small number of slots which fail to unify, ignoring minor differences of type, and so on. Relaxed unification could also be used as a component of relaxed analogical planning—the application of an episode to goals which match the goal of the episode only in a relaxed fashion. In exchange for a reduced number of rules failing to fire when they should (errors of omission), however, this will increase the number of rules which fire when they should not (errors of commission). Thus “incorrect” daydreams might result, and strategies for pruning incorrect sequences would have to be devised. In addition, the system would become even slower as a result of having to generate and test these additional possibilities.
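
Relaxed unification over slot-filler structures might be sketched as follows; the slot names, the `?`-prefix variable convention, and the miss threshold are illustrative assumptions rather than the program's actual representation:

```python
def relaxed_unify(pattern, fact, max_misses=1):
    """Unify two slot-filler structures, tolerating up to `max_misses`
    slots that fail to match (the threshold is an illustrative choice)."""
    bindings, misses = {}, 0
    for slot, value in pattern.items():
        if isinstance(value, str) and value.startswith("?"):
            bindings[value] = fact.get(slot)   # variable: bind, never a miss
        elif fact.get(slot) != value:
            misses += 1                        # constant mismatch: tolerated up to the limit
            if misses > max_misses:
                return None
    return bindings

pattern = {"type": "m-date", "actor": "?who", "location": "restaurant"}
fact    = {"type": "m-date", "actor": "guy", "location": "cafe"}
result = relaxed_unify(pattern, fact)   # one mismatched slot is tolerated
```

As the surrounding text notes, loosening the match in this way trades errors of omission for errors of commission: the rule fires here despite the location mismatch, which is exactly the behavior that would require pruning strategies.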


Techniques will have to be devised for automatically acquiring the rules and hand-coded episodes3 necessary to generate novel daydream scenarios. This would ease the task of extending the program and might at the same time provide a solution to the problems of program tailoring and bug susceptibility. A future DAYDREAMER should be able not only to learn through daydreaming, but also to learn how to daydream.

In the following section, we present an alternative architecture for daydreaming which may help overcome the above shortcomings of the current system. This architecture is presented as one possible topic for future research in machine daydreaming.

    3. Overdetermination of Daydreaming

What factors are responsible for the production of the daydream REVENGE1 (see page 4)? In the current version of DAYDREAMER, this daydream results from a REVENGE daydreaming goal activated in response to a failed LOVERS goal in LOVERS1: Achieving a position of prestige, that of being a famous movie star, results in the movie star having a positive attitude toward DAYDREAMER so that he will ask her out on a date, so that she can turn him down, so that she can obtain revenge. On the other hand, this daydream can also be seen to fulfill a RECOVERY daydreaming goal resulting from the SOCIAL ESTEEM goal which also failed in LOVERS1: Achieving a position of prestige results in the star having a positive attitude toward DAYDREAMER, resulting in success of the same social esteem goal which previously failed.

How might the above daydream have been generated in a human? There are three logical possibilities (if our assumption that daydreaming goals are the source of daydreams is correct): First, it may be that the daydream is generated in response to the REVENGE daydreaming goal and the daydream simply happens to satisfy the RECOVERY daydreaming goal. Alternatively, it may be that the daydream is generated in response to the RECOVERY daydreaming goal and the daydream simply happens to satisfy the REVENGE daydreaming goal. The third possibility is that the daydream may have been the result of combined efforts to satisfy both the REVENGE and RECOVERY daydreaming goals simultaneously.

3 Of course, DAYDREAMER already acquires episodes through interactions in performance mode and also through storage of previous daydreams.

The latter case is an example of the principle of overdetermination introduced by Breuer and S. Freud (1895/1937, pp. 156, 219). This principle states that a behavior (such as a symptom or dream element) generally stems, not from a single cause, but from a combination of causes. That is, while any single cause is insufficient to result in the given behavior, several causes taken together are sufficient to result in that behavior.

There are numerous examples of overdetermination in the products of cognition: S. Freud (1900/1965, pp. 311-339) shows how events represented in a dream may be traced to multiple underlying causes. Reik (1948, p. 35) provides examples of overdetermination in a daydream protocol. A single conversational utterance may be overdetermined in the sense that it achieves several goals of the speaker simultaneously (Hobbs & D. A. Evans, 1980; P. N. Johnson & Robertson, 1981). Sentences with double meanings are another instance of overdetermination: In some contexts, an idiom may be employed and understood in both its literal and figurative sense. Readers are able, for example, to recognize both meanings of “Kathy always lands on both feet” in the context of whether or not she will succeed at a risky parachute jump (R. A. G. Mueller & Gibbs, 1987).

The psychoanalytic conception of overdetermination emphasizes unconscious sources of behavior. In general, one may or may not be conscious of the various elements which contributed to a given behavior. This section considers the possibility that the stream of consciousness might result from the merged products of many processes, rather than from any single process as in DAYDREAMER. We propose a basic model for the generation of overdetermined behavior, describe some associated problems and possible solutions, and present some initial ideas toward a system for the overdetermined production of daydreams.

      1. An Overdetermination Mechanism

We propose a mechanism for the production of overdetermined behavior consisting of the following components: (a) a large collection of nonconscious, concurrent generative processes which provide the underlying source material for behavior, and (b) a collection of nonconscious, concurrent merging processes which produce the final behavior of the system through merging of the results of several generative processes.
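These two components can be sketched as a toy simulation (a speculative illustration, not DAYDREAMER code; the goal-name vocabulary, the number of processes, and the threshold are all invented for the example):

```python
import random

def generative_process(seed, n=20):
    """A nonconscious generative process: emits a stream of candidate
    concepts (here just tokens drawn from a shared vocabulary)."""
    rng = random.Random(seed)
    vocabulary = ["revenge", "recovery", "rationalization", "roving", "reversal"]
    return [rng.choice(vocabulary) for _ in range(n)]

def merge(streams, threshold):
    """A merging process: only concepts produced by several generative
    processes (i.e., multiply determined) rise to the conscious surface."""
    counts = {}
    for stream in streams:
        for concept in set(stream):  # each process contributes at most one vote
            counts[concept] = counts.get(concept, 0) + 1
    return {c for c, n in counts.items() if n >= threshold}

streams = [generative_process(seed) for seed in range(10)]
conscious = merge(streams, threshold=8)  # the multiply determined concepts
```

Here merging is reduced to counting votes; the mechanism as proposed would also have to combine the contents of the streams, not merely select among them.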

Thus the only elements which rise to the conscious surface are those which are multiply determined—participants in several nonconscious generative processes. (The term nonconscious is used to avoid confusion with S. Freud’s [1900/1965] constructs of the “unconscious,” which encompasses thoughts which have a high resistance to becoming conscious, and the “preconscious,” which encompasses thoughts which can become conscious at any moment.)

In one way, this model parallels Chomsky’s (1965) formulation of generative grammar: Generative processes correspond to the phrase structure grammar which generates a deep structure; merging processes correspond to the transformations which map the deep structure into the surface structure; the surface structure corresponds to conscious elements. In other ways, however, it is quite different: Chomsky does not postulate a large number of generative processes. Therefore, there is not much opportunity for overdetermination on a large scale. (A case, however, could be made for such a grammar introducing minor overdeterminations into the surface structure—one surface structure element could very well correspond to more than one deep structure element.) In addition, whereas the merging processes might introduce distortions or new elements of meaning, transformations in Chomsky’s standard theory “cannot introduce meaning-bearing elements . . .” (p. 132).

What psychological evidence is there for nonconscious, semantic processing? Habitual behaviors, such as driving to work, become automatic and hence performed with little or no conscious awareness (see, for example, Norman, 1982; J. R. Anderson, 1983). There is some evidence for perception without awareness and the nonconscious priming of concepts (Pötzl, 1917/1960; Dixon, 1981; Marcel, 1980). What are apparently conscious, voluntary behaviors may be influenced by nonconscious factors: In experiments conducted by Libet (1985) and colleagues, electrophysiological “readiness potentials” (negative shifts in electric potential recorded on the scalp) were found to precede by 400 milliseconds the conscious intention to perform a spontaneous and voluntary motor act (as measured by subjects’ recall of the position of a revolving spot upon the first awareness of intention to act, and corrected by a control experiment involving the timing of subjects’ reported awareness of skin stimuli). Libet concludes that initiation of such acts begins nonconsciously.

      2. Potential Benefits

If an overdetermination mechanism could be made to work, it would provide solutions to two instances of the problem of elusiveness of concepts. First, overdetermination provides a potential solution to the cognitive modeler’s dilemma: The processes and data entities hypothesized by the cognitive modeler to account for one or more human protocols are merely one set of processes and data entities which might have resulted in those protocols. In fact, the modeler often discovers more than one potential model. While choices can often be made in accordance with experimental data, theoretical assumptions, or principles of parsimony, at other times the modeler is forced to make an arbitrary choice.

Within the limits of the given theoretical framework, overdetermination would enable the modeler, when confronted with several possible sources of a behavior, to incorporate each of those sources into the model. This does not free the modeler, however, from: (a) finding each of these possible sources in the first place, and (b) eventually showing that these sources are valid (either that they have analogues in humans or that they are useful from an engineering or other standpoint).

Second, overdetermination provides a potential solution to the scaling problem in artificial intelligence programs: Once a given rule has been constructed, the programmer will frequently discover exceptions which render that rule incorrect in certain situations (see, for example, Michalski & Winston, 1986). Typically, the rule is then modified or the rule is discarded and a new rule is constructed which operates properly.

However, when the system consists of a large number of rules, this process has a tendency to diverge: Whenever one problem is fixed, other problems are created; it also becomes very difficult to add further rules to the system without introducing new problems. (See the related discussion by D. E. Smith, Genesereth, and Ginsberg, 1986, pp. 347-.)
Overdetermination would enable many conflicting rules to exist simultaneously. Exceptions would be handled, not by modifying existing rules, but by adding new rules. The behavior of the system in a given situation would be defined not by the products of single rules, but by the merged products of many rules. The rules would in effect vote on the final behavior. As a simple example, if, in a particular situation, a rule of attitudinal congruity (Heider, 1958) indicated a positive attitude toward some person while two defense mechanism rules (A. Freud, 1937/1946) indicated a negative attitude toward that person, the negative attitude would win out.
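As a minimal sketch of such voting (the rule names and the tie-breaking convention are invented; votes are simply summed, with ties arbitrarily resolved as negative):

```python
def vote_on_attitude(rule_outputs):
    """rule_outputs: list of (rule_name, vote) pairs, where vote is +1 for a
    positive attitude and -1 for a negative one. The merged behavior is the
    sign of the summed votes; a tie is arbitrarily resolved as negative."""
    total = sum(vote for _, vote in rule_outputs)
    return "positive" if total > 0 else "negative"

# One attitudinal-congruity rule against two defense-mechanism rules:
firings = [("attitudinal-congruity", +1),
           ("defense-mechanism-1", -1),
           ("defense-mechanism-2", -1)]
attitude = vote_on_attitude(firings)  # -> "negative"
```

A weighted version could let each rule’s vote carry its plausibility, so that a single highly plausible rule could outvote several implausible ones.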

Overdetermination would thus enable the construction of larger and more complete systems, because the inconsistency problems associated with large collections of rules in classical systems would no longer apply; it would no longer be necessary to ensure that each rule is wholly correct.

Overdetermination thus adopts the “scruffy” approach to cognitive science (Abelson, 1981).

      3. Problems for the Mechanism

There are many problems which arise in attempting to construct an overdetermination model of the stream of thought: If conscious sequences of imagined events are overdetermined by several nonconscious sequences of events, then how are those sequences of events merged together?

Interlacing events from the two sequences is not likely to result in a coherent sequence. Yet daydreams are coherent. How are coherent and continuous merged sequences generated? If two sequences are merged only if they are identical, there is the risk that two processes would never happen to generate identical sequences of events and that no conscious scenarios would ever be produced. If two sequences were merged when they differed only in, say, the persons and physical objects involved, similar sequences might still be rare occurrences.

In general, it is difficult to merge two scenarios since those scenarios may involve orthogonal world states. When the world states of scenarios do not clash entirely, how can they be reconciled? Merging might require alteration of one or both of the event sequences. How much alteration is permitted? Freud’s dreamwork mechanisms, for example, permit a very distortive transformation from nonconscious to conscious elements. The task of merging might be facilitated if processes were allowed to interact as events were being generated.

4 S. Freud (1900/1965) proposed a process of “secondary revision” to render the manifest content of a dream more coherent and “like a day-dream” (p. 530).

How might emotions be modeled in an overdetermined processing scheme? Are there nonconscious emotions? Emotional arousal (Schachter & J. E. Singer, 1962), at least, is associated with conscious thought. Need emotions be handled any differently than other concepts?

An overdetermination mechanism has no executive process for guiding the many nonconscious processes. How, then, does the “voluntary” or “directed” aspect of the stream of consciousness arise? How is it that we can “consciously direct” our thoughts, as when one employs the technique of “counting sheep” in order to fall asleep?

Overdetermination assumes that this aspect of thought is an emergent property of the many nonconscious processes. It is not at all clear how this property might be achieved. This problem presents an apparent paradox: Conscious control arises out of nonconscious processes, and yet it seems able to direct those very processes.

To what extent are the multiple causes of a behavior accessible to consciousness? S. Freud (1901/1960) argues, for example, that slips of the tongue originate in repressed (unconscious) thoughts. On the other hand, the utterance in which such a slip is embedded may have been consciously planned. Sometimes one utters a sentence with full knowledge of a double meaning, while other times one is not conscious of any second meaning. What beliefs do people have about the causes of their own behavior? To what extent are these beliefs accurate?

If conscious elements are overdetermined by nonconscious elements, then nonconscious elements might be underdetermined by conscious elements. (This is not necessarily so: Multiple conscious elements might share single nonconscious elements.) That is, it may be difficult or impossible for the modeler to figure out what the nonconscious elements are, given the conscious elements. Whenever more remote potential sources for a given behavior are found in addition to more direct sources, how does the modeler know whether these more remote sources actually had any influence? Is it necessary to consider when the influence comes or how much influence there is from a particular source?

If the stream of consciousness merely arises out of nonconscious processes, then why is daydreaming—or the stream of consciousness—an interesting object of study? Should we not be studying nonconscious processes instead? First, the surface level of consciousness (as reported to others and as reflected in other actions) corresponds to human behavior, and that is a primary object of study in the field of artificial intelligence. Second, consciousness is a window into nonconscious processes. By examining the results of nonconscious processes we may be able to figure out what those nonconscious processes are.

5 Even given further information, such as free associations, the nonconscious elements are not fully determined and the interpreter can hypothesize arbitrary nonconscious elements to suit the occasion. This has been one of the major criticisms of S. Freud’s (1900/1965) wish-fulfillment theory of dreams (see, for example, Grünbaum, 1984, pp. 235-239; Foulkes, 1978, pp. 45-46).
      4. Initial Solutions

We now present initial solutions to some of the problems described above. In production systems (see, for example, Langley & Neches, 1981), conflicts are resolved through techniques such as selecting rules whose antecedent concepts have the highest activation, giving priority to more specific rules, and others. In an overdetermination mechanism, however, conflicting rules are permitted to fire. The outputs of those rules must then be merged into an appropriate result.

One way to merge identical concepts is to compile each new concept into a discrimination net (Feigenbaum, 1963; Charniak, Riesbeck, & McDermott, 1980). Concepts are merged whenever a new concept is compiled into the same location in the network as an existing concept. A number is associated with each unique concept indicating its degree of determination, or the number of times a concept was compiled into that location. Concepts with a degree of determination above a certain threshold would be the conscious results of the nonconscious processes.
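A dictionary of canonical forms can stand in for the discrimination net in a sketch (the slot-filler concept representation and the threshold are invented, and a true discrimination net would index concepts by discriminating tests rather than by hashing):

```python
class DeterminationNet:
    """Concepts compiling to the same location (canonical form) are merged,
    and a degree-of-determination count is kept per location."""
    def __init__(self):
        self.locations = {}  # canonical form -> degree of determination

    def _canonical(self, concept):
        # concept: dict of slot -> filler; nested concepts are canonicalized too
        return tuple(sorted(
            (slot, self._canonical(f) if isinstance(f, dict) else f)
            for slot, f in concept.items()))

    def compile(self, concept):
        key = self._canonical(concept)
        self.locations[key] = self.locations.get(key, 0) + 1

    def conscious(self, threshold):
        """Concepts whose degree of determination meets the threshold."""
        return [dict(key) for key, n in self.locations.items() if n >= threshold]

net = DeterminationNet()
net.compile({"type": "kiss", "actor": "daydreamer", "to": "movie-star"})
net.compile({"actor": "daydreamer", "type": "kiss", "to": "movie-star"})  # same location
net.compile({"type": "call", "actor": "daydreamer"})
recalled = net.conscious(threshold=2)  # only the twice-determined concept surfaces
```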

In order to merge similar concepts, a simple similarity metric based on the syntactic form of slot-filler objects and embedded objects could be used. This would in fact be a general way of implementing the mechanism of “composition” (S. Freud, 1900/1965) in which several persons are represented in the surface content of a dream as a single person.
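One guess at such a metric scores the fraction of slots on which two slot-filler objects recursively agree (the 0.5 cutoff and the example fillers are invented for illustration):

```python
def similarity(x, y):
    """Crude syntactic similarity of two slot-filler objects: the average
    agreement of their fillers over the union of their slots."""
    if not isinstance(x, dict) or not isinstance(y, dict):
        return 1.0 if x == y else 0.0
    slots = set(x) | set(y)
    if not slots:
        return 1.0
    return sum(similarity(x.get(s), y.get(s)) for s in slots) / len(slots)

person_a = {"type": "person", "name": "Karen", "occupation": "lawyer"}
person_b = {"type": "person", "name": "Kim", "occupation": "lawyer"}
composable = similarity(person_a, person_b) > 0.5  # similar enough to compose
```

Two person concepts scoring above the cutoff would then be merged into a single surface person, as in Freudian composition.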

The content of daydreaming, however, is not single concepts but sequences of events. In order to merge two event sequences, each intended to achieve a different goal, transformational rules or optimization techniques such as the “critics” proposed by Sacerdoti (1977) could be employed. These transformations would find and eliminate redundancies in a plan consisting of the simple composition of the two plans (performing one sequence and then the other). For example, if one plan is to go to the grocery store to get groceries, and the other is to go to a newsstand to get a newspaper, the combined plan would consist of getting both the groceries and newspaper at the grocery store, paying not twice but once, and so on.
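A crude version of the redundancy-eliminating pass might look as follows (the step representation is invented, and it is assumed that a prior substitution has already mapped the newsstand to the grocery store):

```python
def compose_and_optimize(plan_a, plan_b):
    """Compose two plans naively (do plan_a, then plan_b), then apply a
    critic-like pass that drops steps duplicating an earlier step."""
    combined, seen = [], set()
    for step in plan_a + plan_b:
        key = (step["action"], step.get("item"), step.get("place"))
        if key in seen:
            continue  # redundant: this exact step was already performed
        seen.add(key)
        combined.append(step)
    return combined

groceries = [{"action": "go", "place": "store"},
             {"action": "get", "item": "groceries", "place": "store"},
             {"action": "pay", "place": "store"}]
newspaper = [{"action": "go", "place": "store"},
             {"action": "get", "item": "newspaper", "place": "store"},
             {"action": "pay", "place": "store"}]
merged = compose_and_optimize(groceries, newspaper)
# one trip, both items, and paying not twice but once
```

A real set of critics in the style of Sacerdoti would also reorder steps; here the single payment keeps its position from the first plan rather than moving after both acquisitions.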

Alternatively, transformations could be applied as the plans are constructed, rather than afterwards.

Another event merging process is suggested by the serendipity detection mechanism of DAYDREAMER: Whenever a subgoal of one plan under construction is recognized as applicable—directly or through the application of further rules—to some subgoal of another plan under construction, those two plans may be merged.

Some techniques have been developed for constructing overdetermined conversational utterances. The MAGPIE program (P. N. Johnson & Robertson, 1981) constructs utterances which achieve multiple goals as follows: An utterance is produced to achieve the first goal; the remaining goals are then achieved through incremental modification of the utterance. For example, MAGPIE models the generation of an utterance by a wife who has three goals: to obtain information about her husband’s location the previous night, to express anger toward her husband, and to regain control over the relationship. Depending on the ordering of goals, the system is able to generate “Damn you, you were out too late last night. Where were you?” and “You were out so late last night, why?” (Anger, for example, is expressed in the former via the initial utterance “Damn you!” and in the latter through modification of the existing question utterance in order to emphasize the source of anger.)

Hobbs and D. A. Evans (1980) suggest a similar approach in which goals compete to fill in “slots” of an utterance; material may be added to a given slot provided that (a) a stronger goal has not already filled that slot, and (b) the material is suitable given the contents of other slots.
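The slot-competition idea can be loosely sketched as follows (the slot names, goal strengths, and utterance material are invented, and the compatibility condition (b) is omitted; this is not MAGPIE’s or Hobbs and Evans’s actual algorithm):

```python
def fill_slots(goals, slots=("preface", "body", "tag")):
    """Goals compete to fill utterance slots: stronger goals fill slots
    first, and material is added only if the slot is still free."""
    utterance = {}
    for goal in sorted(goals, key=lambda g: -g["strength"]):
        for slot, material in goal["material"].items():
            utterance.setdefault(slot, material)
    return " ".join(utterance[s] for s in slots if s in utterance)

goals = [
    {"strength": 3, "material": {"body": "Where were you last night?"}},
    {"strength": 2, "material": {"preface": "Damn you,"}},
    {"strength": 1, "material": {"body": "You were out late, why?"}},  # loses the body slot
]
utterance = fill_slots(goals)  # -> "Damn you, Where were you last night?"
```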

      5. DAYDREAMER*

One possible future direction for research is to construct a system for the overdetermined production of daydreams, called DAYDREAMER*. To handle the large number of independent processes, DAYDREAMER* would ideally be implemented on a massively parallel machine. In addition, several components of the current version of DAYDREAMER are suited to parallel implementation: The rule intersection search performed by the serendipity mechanism and investigation of alternative possibilities resulting from generic rules and episodes.

Instead of the stream of consciousness resulting from a shifting of processing from concern to concern, as in DAYDREAMER, the stream of consciousness in DAYDREAMER* would result from the merged results of processing on behalf of multiple concerns. Specifically, concerns would start out as independent processes executing simultaneously. Then two processes would be merged upon detection of applicability of a combining transformation, upon detection of a common subgoal, or upon detection of a potentially common subgoal (as in the serendipity mechanism of DAYDREAMER). This process could be applied repeatedly, resulting in the merging of what were once three or more independent processes. Processes would split again if the merged process was unable to achieve all (or some percentage) of its concern goals (or those processes would simply terminate).
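The splitting-and-merging cycle could be skeletonized as follows (the concern and subgoal names are invented, and merge detection is reduced to literal set intersection; DAYDREAMER*’s serendipity mechanism would instead search for rule chains connecting subgoals):

```python
def find_mergeable(processes):
    """Return the indices of two processes sharing a subgoal, if any."""
    for i in range(len(processes)):
        for j in range(i + 1, len(processes)):
            if processes[i]["subgoals"] & processes[j]["subgoals"]:
                return i, j
    return None

def run(processes):
    """Repeatedly merge processes upon detection of a common subgoal."""
    processes = list(processes)
    while (pair := find_mergeable(processes)) is not None:
        i, j = pair
        merged = {
            "concerns": processes[i]["concerns"] | processes[j]["concerns"],
            "subgoals": processes[i]["subgoals"] | processes[j]["subgoals"],
        }
        processes = [p for k, p in enumerate(processes) if k not in (i, j)]
        processes.append(merged)
    return processes

procs = [{"concerns": {"REVENGE"}, "subgoals": {"harm-boss", "know-plan"}},
         {"concerns": {"RECOVERY"}, "subgoals": {"new-job", "know-plan"}},
         {"concerns": {"ROVING"}, "subgoals": {"visit-paris"}}]
result = run(procs)  # REVENGE and RECOVERY merge; ROVING remains separate
```

Splitting on failure would then remove a merged process and reinstate (or terminate) its constituents.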

Furthermore, why limit daydreaming on behalf of a given concern to a single event stream at a time? While alternative plans for achieving a given subgoal are investigated in serial order in DAYDREAMER, n alternative plans would result in a process being split into n parallel processes in DAYDREAMER*. Thus many alternative processes for each concern would be subject to merging with many processes for other concerns.

6 This discussion applies only to daydreaming goal concerns. Personal goal concerns would be carried out in a single process corresponding to the simulated real world.


Figure 11.1: Overdetermined Stream of Consciousness

Processing in DAYDREAMER* can thus be viewed as a continuous activity of process splitting and merging. How is the stream of consciousness extracted from this activity?

Perhaps it could be based on a “good” or “best” path through the sequence of merged processes. Figure 11.1 shows on a small scale what this might look like. (These mechanisms could be implemented on a serial system, but at a much slower rate.) Mechanisms for the “voting” of multiple conflicting planning and inference rules, and for pruning many possibilities (especially those generated through relaxed unification), will also have to be devised. It will be interesting to see to what degree the stream of consciousness resulting from such a system resembles the human stream of consciousness.

      6. Relationship to Previous Work

J. L. Singer (1974, pp. 247-248) has previously proposed that daydreaming makes use of active long-term memory processes which take place just outside our awareness. Minsky (1977, 1981) has proposed a “society of mind” theory in which the mind is composed of many communicating and sometimes conflicting agents; in this theory, also, self-awareness or consciousness arises out of the complex pattern of agents rather than being contained in any one executive agent. A related idea is Hewitt’s (1977) model in which computations are performed by agents called “actors” which send and receive messages to and from other actors and create new ones.

Several holistic connectionist researchers have proposed to model human intelligence using large networks inspired by the physiology of the brain (Rumelhart et al., 1986; Hofstadter, 1983; J. A. Feldman & Ballard, 1982; Hinton & J. A. Anderson, 1981). Connectionist networks provide a natural implementation for overdetermined behavior, since, as Norman (1986, p. 546) points out, there is no one reason for any given state of such a network; any state results from a conjunction of many factors. However, at present, connectionist networks lack an ability which is fundamental to daydream construction: The ability to instantiate and combine concepts. The problem is that a concept is represented in a connectionist network as a stable pattern of activation of the nodes in the net. As a result, only one concept can be represented at a time by a single network (Rumelhart, Smolensky, McClelland, & Hinton, 1986, p. 38), making it difficult to combine several concepts into a new one. Some researchers have therefore proposed keeping several connected copies of a network, one for each concept (Hinton, 1981). What is instead needed is some way, in a single network, to compose together previous substates of the network to form a new state. Minsky (1981) provides a suggestion as to how this might be accomplished via his mechanism of “K-lines” and the “K-recursion principle.” Otherwise, the problem of constructing network architectures becomes equivalent to the problem of representing concepts by conventional, knowledge-level means. We look forward to future experimentation with connectionist systems.

How does the view of overdetermined processing presented here compare to (nonholistic) spreading activation architectures for cognition such as that proposed by J. R. Anderson (1983)? Such an architecture consists of nodes connected by links, a process for spreading an analog level of activation from one node to connected nodes, and production rules which operate only on nodes above a certain level of activation.

If it is assumed that nodes above a certain threshold level of activation are conscious, then this mechanism can describe the production of a conscious result by several nonconscious sources: Nodes whose activation is below the threshold of consciousness may spread activation to a node causing its level to rise above the threshold of consciousness. However, nonconscious processing in addition to this automatic process of spreading activation could not be described, because production rules may only access conscious nodes.
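The single-threshold case can be sketched as follows (the network, weights, uniform decay factor, and threshold value are all invented for illustration):

```python
def spread(links, activation, decay=0.5):
    """One round of spreading activation: each node passes a decayed share of
    its (old) activation to each of its neighbors."""
    new = dict(activation)
    for src, dsts in links.items():
        for dst in dsts:
            new[dst] = new.get(dst, 0.0) + decay * activation.get(src, 0.0)
    return new

CONSCIOUS = 1.0  # threshold of consciousness
links = {"a": ["target"], "b": ["target"], "c": ["target"]}
activation = {"a": 0.7, "b": 0.7, "c": 0.7, "target": 0.0}  # all nonconscious
activation = spread(links, activation)
# three subthreshold sources jointly push "target" over the threshold
conscious_nodes = [n for n, a in activation.items() if a >= CONSCIOUS]
```

No single source would have sufficed (each contributes only 0.35): the conscious result is multiply determined in exactly the sense discussed above.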

Alternatively, one threshold could be equated with consciousness, while another, lower, threshold could be used to determine whether processing might occur for a given node. This would enable modeling of nonconscious processing as well as the production of overdetermined elements. However, this would not enable overdetermination on a massive scale: Because activation level is intended to model the limited capacity of human short-term memory (Miller, 1956), only a relatively small number of nodes are permitted to be active, and accessible to processing, at any time. While in a spreading activation architecture only a small portion of the network is involved in processing, massive overdetermination requires that a large component of the network be involved in processing.

Thus in contrast to the view of spreading activation architectures that processing occurs only on short-term memory elements, overdetermination takes the view that processing occurs on many elements not present in short-term memory. In fact, overdetermination suggests that short-term memory (and consciousness) may arise out of a large number of nonconscious processes. It remains to be determined whether and how this might be the case.

7 See Chapter 10 for a more detailed discussion of this point.

8 Although J. R. Anderson (1983) equates nodes above a certain activation level with human short-term memory, he does not assume such nodes correspond to consciousness (note, p. 309).

    1. Was It Worth It?

Why was such a large problem as daydreaming chosen? Why not a small, circumscribed topic instead? There are several advantages to taking on a large problem: One is less likely to form a peephole view of things. One can integrate what were once seen as different problems. Work on one component often leads to progress on other components. For example, at the time the serendipity mechanism in DAYDREAMER was first designed, two other mechanisms were being used to recognize the applicability of input states to an active concern and to recognize the utility of a mutated action. It was then realized that the serendipity mechanism could accomplish both of these tasks. Similarly, the analogical planning mechanism has proved far more useful and general than was originally expected: For example, it is used equally for carrying out previously daydreamed plans in the simulated real world, for generating daydream scenarios from past experiences, and for fleshing out a serendipity. If any one of the components of daydreaming had been studied alone, its relationship to other components might never have been noticed.

Taking on a large problem has its disadvantages as well: There is the danger of one’s energy being spread too thin on too many problems; it is easy to become overwhelmed with the magnitude of the task; one might be so intent on doing everything that one ends up doing nothing. But any true artificial intelligence problem is by its very nature a large problem: How can we study natural language independently of other aspects of cognition? The same could be said for vision, problem solving, creativity, and so on. Daydreaming is but one more AI-complete problem: If we could solve any one artificial intelligence problem, we could solve all the others. We can benefit in the future from both comprehensive and more restricted approaches to these problems.

Despite its limitations, DAYDREAMER provides an explicit, running model of the seemingly elusive, but pervasive, phenomenon of daydreaming. The program demonstrates how the generation of various creative past, present, and future events enables beneficial modification of future behavior. Daydreaming is a useful capability for any computer system, and a necessary capability for a truly intelligent one.


Abelson, R. P. (1963). Computer simulation of “hot” cognition. In S. S. Tomkins & S. Messick (Eds.), Computer simulation of personality. New York: Wiley.

Abelson, R. P. (1981). Constraint, construal, and cognitive science. In Proceedings of the Third Annual Conference of the Cognitive Science Society (pp. 1-9), Berkeley, CA.

Adams, J. L. (1974). Conceptual blockbusting: A guide to better ideas. New York: W. W. Norton.

Anderson, J. R. (1983). The architecture of cognition. Cambridge, MA: Harvard University Press.

Antrobus, J. S. (1977). The dream as metaphor: An information-processing and learning model. Journal of Mental Imagery, 2, 327-337.

Antrobus, J. S., & Singer, J. L. (1964). Visual signal detection as a function of sequential variability of simultaneous speech. Journal of Experimental Psychology, 68, 603-610.

Antrobus, J. S., Singer, J. L., & Greenberg, S. (1966). Studies in the stream of consciousness: Experimental enhancement and suppression of spontaneous cognitive process. Perceptual and Motor Skills, 23,


Arnheim, R. (1974). Art and visual perception: A psychology of the creative eye (the new version). Berkeley, CA: University of California Press.

Asimov, I. (1963). The human brain: Its capacities and functions. New York: Signet.

Barber, G. (1983). Supporting organizational problem solving with a work station. ACM Transactions on Office Information Systems, 1(1), 45-67.

Bartlett, F. C. (1932). Remembering: A study in experimental and social psychology. Cambridge: Cambridge University Press.

Baylor, G. W., & Deslauriers, D. (1986). Dreams as problem solving: A method of study—Part I: Background and theory. Imagination, Cognition, and Personality, 6(2), 105-118.

Beare, J. I. (1931). De memoria et reminiscentia. In W. D. Ross (Ed.), The works of Aristotle (vol. 3). Oxford: Clarendon Press.

Beck, A. (1967). Depression: Clinical, experimental, and theoretical aspects. New York: Harper & Row.

Ben-Aaron, M. (1975). The Poetzel effect: Corroboration of a cybernetic hypothesis. In R. Trappl & F. Pichler (Eds.), Progress in cybernetics and systems research (Vol. 1, pp. 247-252). Washington, DC: Hemisphere Publishing.

Bexton, W. H., Heron, W., & Scott, T. H. (1954). Effects of decreased variation in the sensory environment. Canadian Journal of Psychology, 8(2), 70-76.

Bleuler, E. (1951). Autistic thinking. In D. Rapaport (Ed.), Organization and pathology of thought. New York: Columbia University Press. (Original work published 1912)

Bloch, G. (1985). Body and self: Elements of human biology, behavior, and health. Los Altos, CA: William Kaufmann.

Bower, G. H., & Cohen, P. R. (1982). Emotional influences in memory and thinking: Data and theory. In M. S. Clark & S. T. Fiske (Eds.), Affect and cognition: The 17th Annual Carnegie Symposium on Cognition. Hillsdale, NJ: Lawrence Erlbaum.

Bower, G. H., Monteiro, K. P., & Gilligan, S. G. (1978). Emotional mood as a context for learning and recall. Journal of Verbal Learning and Verbal Behavior, 17, 573-585.

Breuer, J., & Freud, S. (1937). Studies in hysteria. Boston: Beacon. (Original work published 1895)

Bundy, A., Silver, B., & Plummer, D. (1985). An analytical comparison of some rule-learning programs. Artificial Intelligence, 27, 137-181.

Cannon, W. B. (1927). The James-Lange theory of emotions: A critical examination and an alternative theory. American Journal of Psychology, 39, 106-124.

Carbonell, J. (1980). Towards a process model of human personality traits. Artificial Intelligence, 15, 49-74.

Carbonell, J. G. (1983). Learning by analogy: Formulating and generalizing plans from past experience. In R. S. Michalski, J. G. Carbonell, & T. M. Mitchell (Eds.), Machine learning (pp. 137-161). Palo Alto, CA: Tioga.

Charniak, E. (1983a). Passing markers: A theory of contextual influence in language comprehension. Cognitive Science, 7, 171-


Charniak, E. (1983b). The Bayesian basis of common-sense medical diagnosis. In Proceedings of the National Conference on Artificial Intelligence (pp. 70-73). Los Altos, CA: Morgan Kaufmann.

Charniak, E., & McDermott, D. V. (1985). Introduction to artificial intelligence. Reading, MA: Addison-Wesley.

Charniak, E., Riesbeck, C. K., & McDermott, D. V. (1980). Artificial intelligence programming. Hillsdale, NJ: Lawrence Erlbaum.

Chomsky, N. (1965). Aspects of the theory of syntax. Cambridge, MA: MIT Press.

Churchland, P. M. (1984). Matter and consciousness: A contemporary introduction to the philosophy of mind. Cambridge, MA: MIT Press.

Clark, M. S., & Isen, A. M. (1982). Toward understanding the relationship between feeling states and social behavior. In A. Hastorf & A. M. Isen (Eds.), Cognitive social psychology. Amsterdam: Elsevier.

Clocksin, W. F., & Mellish, C. S. (1981). Programming in Prolog. Berlin: Springer-Verlag.

Cohen, H., Cohen, B., & Nii, P. (1984). The first artificial intelligence coloring book. Los Altos, CA: William Kaufmann.

Colby, K. M. (1963). Computer simulation of a neurotic process. In S. S. Tomkins & S. Messick (Eds.), Computer simulation of personality. New York: Wiley.

Colby, K. M. (1973). Simulations of belief systems. In R. C. Schank & K. M. Colby (Eds.), Computer models of thought and language. San Francisco: W. H. Freeman.

Colby, K. M. (1975). Artificial paranoia: A computer simulation of paranoid processes. New York: Pergamon Press.

Colby, K. M. (1981). Modeling a paranoid mind. The Behavioral and Brain Sciences, 4(4), 515-560.

Collins, A. M., & Loftus, E. F. (1975). A spreading activation theory of semantic processing. Psychological Review, 82, 407-


Cope, D. H. (1976). New directions in music (2nd Edition). Dubuque, IA: Wm. C. Brown.

Copland, A. (1980). Music and imagination. Cambridge, MA: Harvard University Press.

Crook, J. H. (1983). On attributing consciousness to animals. Nature, 303(5), 11-14.

Cundiff, G., & Gold, S. R. (1979). Daydreaming: A measurable concept. Perceptual and Motor Skills, 49, 347-353.

Dallin, L. (1974). Techniques of twentieth century composition: A guide to the materials of modern music (3rd Edition). Dubuque, IA: Wm. C. Brown.

Darwin, C. (1872). The expression of the emotions in man and animals. Chicago: University of Chicago Press.

de Bono, E. (1970). Lateral thinking: Creativity step by step. New York: Harper & Row.

DeJong, G. (1981). Generalizations based on explanations. In Proceedings of the Seventh International Joint Conference on Artificial Intelligence (pp. 67-69). Los Altos, CA: Morgan Kaufmann.

Dement, W. C. (1976). Some must watch while some must sleep. San Francisco: San Francisco Book Company.

Dement, W., & Kleitman, N. (1957). Cyclic variations in EEG during sleep and their relation to eye movements, body motility, and dreaming. Electroencephalography and Clinical Neurophysiology, 9, 673-690.

DeMillo, R. A., Lipton, R. J., & Perlis, A. J. (1978). Social processes and proofs of theorems and programs (Revised version, GIT-ICS-78/04). Atlanta: Georgia Institute of Technology.

Dennett, D. C. (1978). Brainstorms. Cambridge, MA: MIT Press.

Dennett, D. C. (1984). Elbow room: The varieties of free will worth wanting. Cambridge, MA: MIT Press.

Desoille, R. (1966). The directed daydream. New York: Psychosynthesis Research Foundation.

Dietterich, T. G., & Michalski, R. S. (1983). A comparative review of selected methods for learning from examples. In R. S. Michalski, J. G. Carbonell, & T. M. Mitchell (Eds.), Machine learning (pp. 41-81). Palo Alto, CA: Tioga.

Dixon, N. F. (1981). Preconscious processing. New York: Wiley.

Doyle, J. (1980). A model for deliberation, action, and introspection (Technical Report 581). Cambridge, MA: Massachusetts Institute of Technology, Artificial Intelligence Laboratory.

Duda, R. O., Hart, P. E., and Nilsson, N. J. (1976). Subjective Bayesian methods for rule-based inference systems. In Proceedings of the 1976 National Computer Conference. Montvale, NJ: AFIPS Press.

Dyer, M. G. (1983a). In-depth understanding. Cambridge, MA: MIT Press.

Dyer, M. G. (1983b). The role of affect in narratives. Cognitive Science, 7, 211-242.

Dyer, M. G. (1983c). Understanding stories through morals and remindings. In Proceedings of the Eighth International Joint Conference on Artificial Intelligence. Los Altos, CA: Morgan Kaufmann.

Ericsson, K. A., & Simon, H. A. (1984). Protocol analysis: Verbal reports as data. Cambridge, MA: MIT Press.

Ernst, G., & Newell, A. (1969). GPS: A case study in generality and problem solving. New York: Academic Press.

Evans, C. (1983). Landscapes of the night: How and why we dream. New York: Viking.

Evans, C. R., & Newman, E. A. (1964). Dreaming: An analogy from computers. New Scientist, 419, 577-579.

Faletti, J. (1982). PANDORA—A program for doing commonsense planning in complex situations. In Proceedings of the National Conference on Artificial Intelligence (pp. 185-188). Los Altos, CA: Morgan Kaufmann.

Farrell, B. A. (1950). Experience. Mind, 54, 170-198.

Feigenbaum, E. A. (1963). Simulation of verbal learning behavior. In E. A. Feigenbaum & J. Feldman (Eds.), Computers and thought. New York: McGraw-Hill.

Feigenbaum, E. A., & Feldman, J. (Eds.). (1963). Computers and thought. New York: McGraw-Hill.

Feigenbaum, E. A., & Simon, H. A. (1984). EPAM-like Models of Recognition and Learning. Cognitive Science, 8, 305-336.

Feldman, J. A., & Ballard, D. H. (1982). Connectionist models and their properties. Cognitive Science, 6, 205-254.

Feshbach, S. (1956). The catharsis hypothesis and some consequences of interaction with aggressive and neutral play objects. Journal of Personality, 24, 449-462.

Festinger, L. (1957). A theory of cognitive dissonance. Stanford, CA: Stanford University Press.

Fikes, R. E., Hart, P. E., & Nilsson, N. J. (1972). Learning and executing generalized robot plans. Artificial Intelligence, 3, 251-288.


Fikes, R. E., & Nilsson, N. J. (1971). STRIPS: A new approach to the application of theorem proving to problem solving. Artificial Intelligence, 2, 189-208.

Flavell, J. H. (1979). Metacognition and cognitive monitoring: A new area of cognitive developmental inquiry. American Psychologist, 34, 906-911.

Fodor, J. (1981). The mind-body problem. Scientific American, 244, 114-123.

Foulkes, D. (1966). The psychology of sleep. New York: Scribner’s.

Foulkes, D. (1978). A grammar of dreams. New York: Basic Books.

Foulkes, D. (1985). Dreaming: A cognitive- psychological analysis. Hillsdale, NJ: Lawrence Erlbaum.

Foulkes, D., & Fleisher, S. (1975). Mental activity in relaxed wakefulness. Journal of Abnormal Psychology, 84, 66-75.

French, T. M. (1952). The integration of behavior, volume I: Basic postulates. Chicago: University of Chicago Press.

French, T. M. (1953). The integration of behavior, volume II: The integrative process in dreams. Chicago: University of Chicago Press.

Freud, A. (1946). The ego and the mechanisms of defence. New York: International Universities Press. (Original work published 1937)

Freud, S. (1936). The problem of anxiety. New York: W. W. Norton. (Original work published 1926)

Freud, S. (1960). The psychopathology of everyday life. New York: W. W. Norton. (Original work published 1901).

Freud, S. (1961). Beyond the pleasure principle. New York: W. W. Norton. (Original work published 1920)

Freud, S. (1962). Creative writers and day- dreaming. In J. Strachey (Ed. and Trans.), The standard edition of the complete psychological works of Sigmund Freud (Vol. 9). London: Hogarth Press. (Original work published 1908)

Freud, S. (1965). The interpretation of dreams. New York: Avon. (Original work published 1900)

Friday, N. (1973). My secret garden. New York: Pocket Books.

Gackenbach, J. I. (1985). A survey of considerations for inducing conscious awareness of dreaming while dreaming. Imagination, Cognition, and Personality, 5(1), 41-55.

Garfield, P. (1974). Creative dreaming. New York: Ballantine.

Giambra, L. M. (1977). Adult male daydreaming across the life span: A replication, further analyses, and tentative norms based upon retrospective reports. International Journal of Aging and Human Development, 8(3), 197-228.

Giambra, L. M. (1980). A factor analysis of the items of the Imaginal Processes Inventory. Journal of Clinical Psychology, 36, 383-409.

Gibbs, R. W., & Mueller, R. A. G. (1988). Conversational sequences and preference for indirect speech acts. Discourse Processes, 11, 101-116.

Goffman, E. (1959). The presentation of self in everyday life. Garden City, NY: Doubleday.

Goldman, N. (1975). Conceptual generation. In R. C. Schank (Ed.), Conceptual information processing. New York: American Elsevier.

Goldman, N. (1982). AP3 reference manual. Unpublished report. Marina del Rey, CA: USC/Information Sciences Institute.

Gouaux, C. (1971). Induced affective states and interpersonal attraction. Journal of Personality and Social Psychology, 20, 37-43.

Green, C. E. (1968). Lucid dreams. Oxford: Institute of Psychophysical Research.

Green, G. H. (1923). The daydream: A study in development. London: University of London Press.

Griffin, D. R. (1981). The question of animal awareness (Revised and Enlarged Edition). Los Altos, CA: William Kaufmann.

Grossberg, S. (1976). Adaptive pattern classification and universal recoding: Part I. Parallel development and coding of neural feature detectors. Biological Cybernetics, 23, 121-134.

Grünbaum, A. (1984). The foundations of psychoanalysis: A philosophical critique. Berkeley, CA: University of California Press.

Guilford, J. P. (1967). The nature of human intelligence. New York: McGraw-Hill.

Hariton, E. B., & Singer, J. L. (1974). Women’s fantasies during sexual intercourse: Normative and theoretical implications. Journal of Consulting and Clinical Psychology, 42, 313-322.

Hayes-Roth, B., & Hayes-Roth, F. (1979). A cognitive model of planning. Cognitive Science, 3(4), 275-310.

Heider, F. (1958). The psychology of interpersonal relations. Hillsdale, NJ: Lawrence Erlbaum.

Heiser, J. F., Colby, K. M., Faught, W. S., & Parkison, R. C. (1980). Can psychiatrists distinguish a computer simulation of paranoia from the real thing? The limitations of Turing-like tests as measures of the adequacy of simulation. Journal of Psychiatric Research, 15, 149-162.

Hewitt, C. (1975). How to use what you know. In Advance papers of the Fourth International Joint Conference on Artificial Intelligence (Vol. 1). Los Altos, CA: Morgan Kaufmann.

Hewitt, C. (1977). Viewing control structures as patterns of passing messages. Artificial Intelligence, 8(3), 323-364.

Hiller, L. (1984). The composer and the computer. Abacus, 1(4), 9-31.

Hiller, L. A., & Isaacson, L. M. (1959). Experimental music. Westport, CT: Greenwood Press.

Hillis, W. D. (1985). The connection machine. Cambridge, MA: MIT Press.

Hinton, G. E. (1981). Implementing semantic networks in parallel hardware. In G. E. Hinton & J. A. Anderson (Eds.), Parallel models of associative memory (pp. 161-188). Hillsdale, NJ: Lawrence Erlbaum.

Hinton, G. E., & Anderson, J. A. (Eds.). (1981). Parallel models of associative memory. Hillsdale, NJ: Lawrence Erlbaum.

Hinton, G. E., McClelland, J. L., & Rumelhart, D. E. (1986). Distributed representations. In D. E. Rumelhart, J. L. McClelland, & PDP Research Group (Eds.), Parallel distributed processing: Explorations in the microstructure of cognition. Volume 1: Foundations (pp. 77-109). Cambridge, MA: MIT Press.

Hobbes, T. (1968). Leviathan. Harmondsworth, England: Penguin. (Original work published 1651)

Hobbs, J. R., & Evans, D. A. (1980). Conversation as planned behavior. Cognitive Science, 4, 349-377.

Hofstadter, D. R. (1983). The architecture of Jumbo. In Proceedings of the Second Machine Learning Workshop, Urbana, IL.

Hofstadter, D. R. (1985). Metamagical themas: Questing for the essence of mind and pattern. New York: Basic Books.

Hofstadter, D. R., & Dennett, D. C. (1981). The mind’s I: Fantasies and reflections on self and soul. Toronto: Bantam.

Housman, A. E. (1952). The name and nature of poetry. In B. Ghiselin (Ed.), The creative process. New York: Mentor.

Huba, G. J., Segal, B., & Singer, J. L. (1977). Consistency of daydreaming styles across samples of college male and female drug and alcohol users. Journal of Abnormal Psychology, 86, 99-102.

Izard, C. E. (1977). Human emotions. New York: Plenum.

Jakobson, R. (1978). Six lectures on sound and meaning. Cambridge, MA: MIT Press.

James, W. (1890a). The principles of psychology (Vol. 1). New York: Dover.

James, W. (1890b). The principles of psychology (Vol. 2). New York: Dover.

Janis, I., & Mann, L. (1977). Decision-making. New York: Free Press.

Johnson, M. K., & Raye, C. L. (1981). Reality monitoring. Psychological Review, 88(1), 67-85.

Johnson, P. N., & Robertson, S. P. (1981). MAGPIE: A goal-based model of conversation (Research Report 206). New Haven, CT: Yale University, Computer Science Department.

Johnson-Laird, P. N. (1983). Mental models: Towards a cognitive science of language, inference, and consciousness. Cambridge, MA: Harvard University Press.

Jones, E. (1908). Rationalisation in everyday life. Journal of Abnormal and Social Psychology, 3, 161-169.

Jung, C. (1916). Psychology of the unconscious. New York: Dodd, Mead, and Company.

Kahn, K. M. (1978). Director guide (AI Memo 482). Cambridge, MA: Massachusetts Institute of Technology, Artificial Intelligence Laboratory.

Kahneman, D., & Tversky, A. (1982). The simulation heuristic. In D. Kahneman, P. Slovic, & A. Tversky (Eds.), Judgment under uncertainty: Heuristics and biases. New York: Cambridge University Press.

Kedar-Cabelli, S. T. (1985). Purpose-directed analogy. In The Seventh Annual Conference of the Cognitive Science Society (pp. 150- 159), Irvine, CA.

Kelley, H. H. (1967). Attribution theory in social psychology. In D. Levin (Ed.), Nebraska Symposium on Motivation. Lincoln, NB: University of Nebraska Press.

Kernighan, B. W., & Plauger, P. J. (1976). Software tools. Reading, MA: Addison-Wesley.

Klahr, P. (1978). Planning techniques for rule selection in deductive question-answering. In D. A. Waterman & F. Hayes-Roth (Eds.), Pattern-directed inference systems (pp. 223-239). New York: Academic Press.

Klinger, E. (1971). The structure and function of fantasy. New York: John Wiley.

Klinger, E. (1978). Modes of normal conscious flow. In K. S. Pope & J. L. Singer (Eds.), The stream of consciousness. New York: Plenum.

Koestler, A. (1964). The act of creation: A study of the conscious and unconscious in science and art. New York: Macmillan.

Kolodner, J. L. (1984). Retrieval and organizational strategies in conceptual memory: A computer model. Hillsdale, NJ: Lawrence Erlbaum.

Kolodner, J. L., Simpson, R. L., & Sycara-Cyranski, K. (1985). A process model of case-based reasoning in problem solving. In Proceedings of the Ninth International Joint Conference on Artificial Intelligence (pp. 284-290). Los Altos, CA: Morgan Kaufmann.

Kornfeld, W. A., & Hewitt, C. (1981). The scientific community metaphor. IEEE Transactions on Systems, Man, and Cybernetics, SMC-11(1).

Kowalski, R. (1975). A proof procedure using connection graphs. Journal of the ACM, 22, 572-595.

Kris, E. (1952). Psychoanalytic explorations in art. New York: International Universities Press.

Kugel, P. (1986). Thinking may be more than computing. Cognition, 22, 137-198.

LaBerge, S. (1985). Lucid dreaming: The power of being awake and aware in your dreams. Los Angeles: Jeremy P. Tarcher.

Laird, J. E. (1984). Universal subgoaling. Doctoral dissertation, Department of Computer Science, Carnegie-Mellon University, Pittsburgh, PA.

Langley, P., Bradshaw, G. L., & Simon, H. A. (1983). Rediscovering chemistry with the Bacon system. In R. S. Michalski, J. G. Carbonell, & T. M. Mitchell (Eds.), Machine learning (pp. 307-329). Palo Alto, CA: Tioga.

Langley, P., & Neches, R. (1981). PRISM user’s manual (Technical report). Pittsburgh, PA: Carnegie-Mellon University, Department of Psychology.

Lazarus, R. S. (1968). Emotions and adaptation: Conceptual and empirical relations. In W. J. Arnold (Ed.), Nebraska Symposium on Motivation. Lincoln, NB: University of Nebraska.

Leavitt, R. (1976). Artist and computer. New York: Harmony Books.

Lebowitz, M. (1980). Generalization and memory in an integrated understanding system (Research Report 186). New Haven, CT: Yale University, Computer Science Department.

Lehnert, W. G. (1982). Plot units: A narrative summarization strategy. In W. G. Lehnert & M. H. Ringle (Eds.), Strategies for natural language processing. Hillsdale, NJ: Lawrence Erlbaum.

Lenat, D. B. (1976). AM: An artificial intelligence approach to discovery in mathematics as heuristic search (Report No. STAN-CS-76-570). Stanford, CA: Stanford University, Computer Science Department.

Lenat, D. B. (1983). The role of heuristics in learning by discovery: Three case studies. In R. S. Michalski, J. G. Carbonell, & T. M. Mitchell (Eds.), Machine learning (pp. 243-306). Palo Alto, CA: Tioga.

Lenat, D. B., & Brown, J. S. (1984). Why AM and EURISKO appear to work. Artificial Intelligence, 23(3), 269-294.

Leuner, H. (1969). Guided affective imagery: A method of intensive psychotherapy. American Journal of Psychotherapy, 23, 4-22.

Lewin, K. (1951). Intention, will and need. In D. Rapaport (Ed.), Organization and pathology of thought. New York: Columbia University Press. (Original work published 1926)

Lewis, H. R., & Papadimitriou, C. H. (1981). Elements of the theory of computation. Englewood Cliffs, NJ: Prentice-Hall.

Libet, B. (1985). Unconscious cerebral initiative and the role of conscious will in voluntary action. The Behavioral and Brain Sciences, 8, 529-566.

Linton, M. (1982). Transformations of memory in everyday life. In U. Neisser (Ed.), Memory observed: Remembering in natural contexts. San Francisco: W. H. Freeman.

Loeb, A., Beck, A. T., & Diggory, J. (1971). Differential effects of success and failure on depressed and nondepressed patients. The Journal of Nervous and Mental Disease, 152, 106-114.

Loftus, E. F. (1975). Leading questions and the eyewitness report. Cognitive Psychology, 7, 560-572.

Luthe, W. (1969). Autogenic training: Method, research, and application in medicine. In C. T. Tart (Ed.), Altered states of consciousness. New York: John Wiley.

Mandler, G. (1975). Mind and emotion. New York: John Wiley.

Marcel, A. J. (1980). Conscious and preconscious recognition of polysemous words: Locating the selective effect of prior verbal context. In R. S. Nickerson (Ed.), Attention and performance (vol. 8). Hillsdale, NJ: Lawrence Erlbaum.

Maslow, A. H. (1954). Motivation and personality (2nd Edition). New York: Harper & Row.

Maury, L. F. A. (1878). Le sommeil et les rêves. Paris: Didier.

McCarthy, J., Abrahams, P. W., Edwards, D. J., Hart, T. P., & Levin, M. I. (1965). LISP 1.5 programmer’s manual. Cambridge, MA: MIT Press.

McCarthy, J., & Hayes, P. J. (1969). Some philosophical problems from the standpoint of artificial intelligence. In B. Meltzer & D. Michie (Eds.), Machine intelligence 4 (pp. 463-502). Edinburgh, Scotland: Edinburgh University Press.

McClelland, J. L. (1986). The programmable blackboard model of reading. In J. L. McClelland, D. E. Rumelhart, & PDP Research Group (Eds.), Parallel distributed processing: Explorations in the microstructure of cognition. Volume 2: Psychological and biological models (pp. 122-169). Cambridge, MA: MIT Press.

McClelland, J. L., & Rumelhart, D. E. (1986). A distributed model of human learning and memory. In J. L. McClelland, D. E. Rumelhart, & PDP Research Group (Eds.), Parallel distributed processing: Explorations in the microstructure of cognition. Volume 2: Psychological and biological models (pp. 170-215). Cambridge, MA: MIT Press.

McClelland, J. L., Rumelhart, D. E., & Hinton, G. E. (1986). The appeal of parallel distributed processing. In D. E. Rumelhart, J. L. McClelland, & PDP Research Group (Eds.), Parallel distributed processing: Explorations in the microstructure of cognition. Volume 1: Foundations (pp. 3-44). Cambridge, MA: MIT Press.

McClelland, J. L., Rumelhart, D. E., & PDP Research Group (Eds.). (1986). Parallel distributed processing: Explorations in the microstructure of cognition. Volume 2: Psychological and biological models. Cambridge, MA: MIT Press.

McDermott, D. V. (1976). Artificial intelligence meets natural stupidity. SIGART Newsletter, 57. New York: Association for Computing Machinery.

McDougall, W. (1923). Outline of psychology. New York: Scribner’s.

McKellar, P. (1957). Imagination and thinking: A psychological analysis. New York: Basic Books.

McNeil, J. (1981). [Daydream diaries (retrospective reports) collected from subjects]. Unpublished raw data.

McNeil, J. (1985). Interpersonal problem resolution in narrated imagery and verbal thought. Unpublished doctoral dissertation, Department of Psychology, University of California, Los Angeles.

Meacham, J. A., & Leiman, B. (1982) Remembering to perform future actions. In U. Neisser (Ed.), Memory observed: Remembering in natural contexts. San Francisco: W. H. Freeman.

Mednick, S. (1962). The associative basis of the creative process. Psychological Review, 69, 220-232.

Meehan, J. (1976). The metanovel: Writing stories by computer (Research Report 74). New Haven, CT: Yale University, Computer Science Department.

Metz, C. (1982). The imaginary signifier. Bloomington, IN: Indiana University Press.

Meyer, L. B. (1956). Emotion and meaning in music. Chicago: University of Chicago Press.

Michalski, R. S., & Winston, P. H. (1986). Variable precision logic. Artificial Intelligence, 29, 121-146.

Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63, 81-97.

Miller, G. A., Galanter, E., & Pribram, K. H. (1960). Plans and the structure of behavior. New York: Holt, Rinehart, and Winston.

Minsky, M. (1968). Descriptive languages and problem solving. In M. Minsky (Ed.), Semantic information processing (pp. 419-424). Cambridge, MA: MIT Press.

Minsky, M. (1975). A framework for representing knowledge. In P. H. Winston (Ed.), The psychology of computer vision. New York: McGraw-Hill.

Minsky, M. L. (1977). Plain talk about neurodevelopmental epistemology. In Proceedings of the Fifth International Joint Conference on Artificial Intelligence (pp. 1083-1092). Los Altos, CA: Morgan Kaufmann.

Minsky, M. L. (1981). K-lines: A theory of memory. In D. A. Norman (Ed.), Perspectives on cognitive science (pp. 87-104). Norwood, NJ: Ablex.

Mischel, W. (1979). On the interface of cognition and personality: Beyond the personsituation debate. American Psychologist, 34, 740-754.

Mitchell, T. M., Keller, R. M., & Kedar-Cabelli, S. T. (1986). Explanation-based generalization: A unifying view. Machine Learning, 1, 47-80.

Moser, U., Pfeifer, R., Schneider, W., Von Zeppelin, I., & Schneider, H. (1980). Computer simulation of dream processes (Technical Report 6). Zürich: Soziologisches Institut der Universität Zürich, Interdisziplinären Konfliktforschungsstelle.

Moser, U., Pfeifer, R., Schneider, W., Von Zeppelin, I., & Schneider, H. (1982). Experiences with computer simulation of dream processes. In Sleep 1982: 6th European Congress on Sleep Research (pp. 30-44). Basel, Switzerland: S. Karger.

Moss, J. E. B. (1981). Nested transactions: An approach to reliable distributed computing (Technical Report 260). Cambridge, MA: Massachusetts Institute of Technology, Laboratory for Computer Science.

Mostow, D. J. (1981). Mechanical transformation of task heuristics into operational procedures. Unpublished doctoral dissertation, Department of Computer Science, Carnegie-Mellon University, Pittsburgh, PA.

Mostow, J. (1983). Program transformations for VLSI. In Proceedings of the Eighth International Joint Conference on Artificial Intelligence (pp. 40-43). Los Altos, CA: Morgan Kaufmann.

Mueller, E. T. (1983). Implementation of nested transactions in a distributed system (Technical Report CSD-831115). Master’s thesis, Computer Science Department, University of California, Los Angeles.

Mueller, E. T. (1987a). GATE user’s manual (2nd edition, Technical Report UCLA-AI-87-6). Los Angeles: University of California, Artificial Intelligence Laboratory.

Mueller, E. T. (1987b). Daydreaming and computation: A computer model of everyday creativity, learning, and emotions in the human stream of thought (Technical Report UCLA-AI-87-8). Doctoral dissertation, Computer Science Department, University of California, Los Angeles, CA.

Mueller, E. T., Moore, J. D., & Popek, G. J. (1983). A nested transaction mechanism for LOCUS. In Proceedings of the Ninth ACM Symposium on Operating Systems Principles (pp. 71-89). New York: Association for Computing Machinery.

Mueller, E. T., & Zernik, U. (1984). GATE reference manual (Technical Report UCLA-AI-84-5). Los Angeles: University of California, Artificial Intelligence Laboratory.

Mueller, E. T., & Dyer, M. G. (1985a). Towards a computational theory of human daydreaming. In The Seventh Annual Conference of the Cognitive Science Society (pp. 120-129), Irvine, CA.

Mueller, E. T., & Dyer, M. G. (1985b). Daydreaming in humans and computers. In Proceedings of the Ninth International Joint Conference on Artificial Intelligence (pp. 278-280). Los Altos, CA: Morgan Kaufmann.

Mueller, R. A. G., & Gibbs, R. W. (1987). Processing idioms with multiple meanings. Journal of Psycholinguistic Research, 16(1).


Mueller, R. E. (1963). Inventivity: How man creates in art and science. New York: John Day.

Mueller, R. E. (1967). The science of art: The cybernetics of creative communication. New York: John Day.

Mueller, R. E. (1983, January). When is Computer Art Art? Creative Computing, pp. 136-144.

Nagel, T. (1974). What is it like to be a bat? Philosophical Review, 83, 435-445.

Narayanan, A. (1983). What is it like to be a machine? (Research Report R-116). Exeter, England: University of Exeter, Department of Computer Science.

Neisser, U. (1963). The imitation of man by machine. Science, 139, 193-197.

Neisser, U. (1967). Cognitive psychology. New York: Appleton.

Neisser, U. (1982a). John Dean’s memory: A case study. In U. Neisser (Ed.), Memory observed: Remembering in natural contexts. San Francisco: W. H. Freeman.

Neisser, U. (Ed.). (1982b). Memory observed: Remembering in natural contexts. San Francisco: W. H. Freeman.

Neisser, U. (1982c). Memory: What are the important questions? In U. Neisser (Ed.), Memory observed: Remembering in natural contexts. San Francisco: W. H. Freeman.

Newell, A. (1982). The knowledge level. Artificial Intelligence, 18(1), 87-127.

Newell, A., & Simon, H. A. (1972). Human problem solving. Englewood Cliffs, NJ: Prentice-Hall.

Newell, A., Shaw, J. C., & Simon, H. A. (1957). Empirical explorations of the logic theory machine. In Proceedings of the Western Joint Computer Conference (vol. 15, pp. 218-239).

Nilsson, N. J. (1980). Principles of artificial intelligence. Los Altos, CA: Morgan Kaufmann.

Nisbett, R. E., & Wilson, T. D. (1977). Telling more than we can know: Verbal reports on mental processes. Psychological Review, 84, 231-259.

Norman, D. A. (1981). Twelve issues for cognitive science. In D. A. Norman (Ed.), Perspectives on cognitive science (pp. 265-295). Norwood, NJ: Ablex.

Norman, D. A. (1982). Learning and memory. San Francisco: W. H. Freeman.

Norman, D. A. (1986). Reflections on cognition and parallel distributed processing. In J. L. McClelland, D. E. Rumelhart, & PDP Research Group (Eds.), Parallel distributed processing: Explorations in the microstructure of cognition. Volume 2: Psychological and biological models (pp. 531-546). Cambridge, MA: MIT Press.

Ornstein, R. E. (Ed.). (1973). The nature of human consciousness. New York: Viking.

Osborn, A. F. (1953). Applied imagination. New York: Scribner’s.

Osgood, C. E., & Tannenbaum, P. (1955). The principle of congruity and the prediction of attitude change. Psychological Review, 62, 42-55.

Papert, S. (1980). Mindstorms: Children, computers, and powerful ideas. New York: Basic Books.

Paton, R. (1972). Fantasy content, daydreaming frequency and the reduction of aggression. Unpublished doctoral dissertation, City University of New York.

Pearl, J. (1982). Reverend Bayes on inference engines: A distributed hierarchical approach. In Proceedings of the National Conference on Artificial Intelligence (pp. 133-136). Los Altos, CA: Morgan Kaufmann.

Pekala, R. J., & Levine, R. L. (1981). Mapping consciousness: Development of an empirical-phenomenological approach. Imagination, Cognition and Personality, 1(1), 29-47.

Pfeifer, R. (1982). Cognition and emotion: An information processing approach (CIP Working Paper 436). Pittsburgh, PA: Carnegie-Mellon University, Department of Psychology.


Plutchik, R. (1980). A general psychoevolutionary theory of emotion. In R. Plutchik (Ed.), Emotion: Theory, research, and experience. Volume 1: Theories of emotion (pp. 3-33). New York: Academic Press.

Pohl, I. (1971). Bi-directional search. In B. Meltzer & D. Michie (Eds.), Machine intelligence 6 (pp. 127-140). Edinburgh, Scotland: Edinburgh University Press.

Poincaré, H. (1952). Mathematical creation. In B. Ghiselin (Ed.), The creative process. New York: Mentor. (Original work published 1908)


Polya, G. (1945). How to solve it. Princeton, NJ: Princeton University Press.

Pope, K. S. (1978). How gender, solitude, and posture influence the stream of consciousness. In K. S. Pope & J. L. Singer (Eds.), The stream of consciousness. New York: Plenum.

Pope, K. S., & Singer, J. L. (1978a). Regulation of the stream of consciousness: Toward a theory of ongoing thought. In G. E. Schwartz & D. Shapiro (Eds.), Consciousness and self regulation: Advances in research (Vol. 2). New York: Plenum.

Pope, K. S., & Singer, J. L. (Eds.). (1978b). The stream of consciousness. New York: Plenum.

Posey, T. B., & Losch, M. E. (1983). Auditory hallucinations of hearing voices in 375 normal subjects. Imagination, Cognition and Personality, 3(2), 99-113.

Pötzl, O. (1960). The relationship between experimentally induced dream images and indirect vision. Psychological Issues, 3, Monograph 7, 41-120. New York: International Universities Press. (Original work published 1917)

Pylyshyn, Z. W. (1984). Computation and cognition: Toward a foundation for cognitive science. Cambridge, MA: MIT Press.

Quillian, M. R. (1968). Semantic memory. In M. Minsky (Ed.), Semantic information processing (pp. 227-270). Cambridge, MA: MIT Press.

Quinn, N. (1981). Marriage is a do-it-yourself project: The organization of marital goals. In Proceedings of the Third Annual Conference of the Cognitive Science Society (pp. 31-40), Berkeley, CA.

Randell, B. (1975). System structure for software fault tolerance. IEEE Transactions on Software Engineering, SE-1(2), 220-232.

Rapaport, D. (Ed.). (1951). Organization and pathology of thought. New York: Columbia University Press.

Rapaport, D. (1960). The structure of psychoanalytic theory: A systematizing attempt. Psychological Issues, 2, Monograph 6. New York: International Universities Press.

Rapaport, D. (1974). The history of the concept of association of ideas. New York: International Universities Press.

Reed, D. P. (1978). Naming and synchronization in a decentralized computer system (Technical Report 205). Cambridge, MA: Massachusetts Institute of Technology, Laboratory for Computer Science.

Rees, J. A., Adams, N. I., & Meehan, J. R. (1984). The T manual (4th ed.). New Haven, CT: Yale University, Computer Science Department.

Reik, T. (1948). Listening with the third ear: The inner experience of a psychoanalyst. New York: Farrar, Straus, and Giroux.

Reiser, B. J. (1983). Contexts and indices in autobiographical memory (Technical Report 24). New Haven, CT: Yale University, Cognitive Science Program.

Ritchie, D. M., & Thompson, K. (1974). The UNIX time-sharing system. Communications of the ACM, 17(7), 365-375.

Robinson, J. A. (1965). A machine-oriented logic based on the resolution principle. Journal of the Association for Computing Machinery, 12(1), 23-41.

Rosenblatt, F. (1962). Principles of neurodynamics. New York: Spartan.

Rosenbloom, P. S. (1983). The chunking of goal hierarchies: A model of practice and stimulus-response compatibility (Technical Report 83-148). Doctoral dissertation, Department of Computer Science, Carnegie- Mellon University, Pittsburgh, PA.

Rothenberg, A. (1979). The emerging goddess: The creative process in art, science, and other fields. Chicago: University of Chicago Press.

Rulifson, J., Derksen, J., & Waldinger, R. (1972). QA4: A procedural calculus for intuitive reasoning (Technical Note 73). Stanford, CA: Stanford Research Institute, Artificial Intelligence Center.

Rumelhart, D. E., Hinton, G. E., & McClelland, J. L. (1986). A general framework for parallel distributed processing. In D. E. Rumelhart, J. L. McClelland, & PDP Research Group (Eds.), Parallel distributed processing: Explorations in the microstructure of cognition. Volume 1: Foundations (pp. 45-76). Cambridge, MA: MIT Press.

Rumelhart, D. E., McClelland, J. L., & PDP Research Group (Eds.). (1986). Parallel distributed processing: Explorations in the microstructure of cognition. Volume 1: Foundations. Cambridge, MA: MIT Press.

Rumelhart, D. E., & Norman, D. A. (1982). Simulating a skilled typist: A study of skilled cognitive-motor performance. Cognitive Science, 6, 1-36.

Rumelhart, D. E., Smolensky, P., McClelland, J. L., & Hinton, G. E. (1986). Schemata and sequential thought processes in PDP models. In J. L. McClelland, D. E. Rumelhart, & PDP Research Group (Eds.), Parallel distributed processing: Explorations in the microstructure of cognition. Volume 2: Psychological and biological models (pp. 7- 57). Cambridge, MA: MIT Press.

Rumelhart, D. E., & Zipser, D. (1986). Feature discovery by competitive learning. In D. E. Rumelhart, J. L. McClelland, & PDP Research Group (Eds.), Parallel distributed processing: Explorations in the microstructure of cognition. Volume 1: Foundations (pp. 151-193). Cambridge, MA: MIT Press.

Russell, B. (1945). A history of western philosophy. New York: Simon and Schuster.

Sacerdoti, E. D. (1974). Planning in a hierarchy of abstraction spaces. Artificial Intelligence, 5, 115-135.

Sacerdoti, E. D. (1977). A structure for plans and behavior. New York: Elsevier.

Salaman, E. (1970). A collection of moments: A study of involuntary memories. London: Longman.

Schank, R. C. (1975). Conceptual information processing. New York: American Elsevier.

Schank, R. C. (1977). Rules and topics in conversation. Cognitive Science, 1(4), 421-441.


Schank, R. C. (1982). Dynamic memory. Cambridge: Cambridge University Press.

Schank, R. C. (1986). Explanation patterns: Understanding mechanically and creatively. Hillsdale, NJ: Lawrence Erlbaum.

Schank, R. C., & Abelson, R. P. (1977). Scripts, plans, goals, and understanding. Hillsdale, NJ: Lawrence Erlbaum.

Schank, R. C., & Riesbeck, C. K. (1981). Inside computer understanding: Five programs plus miniatures. Hillsdale, NJ: Lawrence Erlbaum.

Schank, R. C., Wilensky, R., Carbonell, J. G., Kolodner, J. L., & Hendler, J. A. (1978). Representing attitudes: Some primitive states (Research Report 128). New Haven, CT: Yale University, Computer Science.


Schachter, S., & Singer, J. E. (1962). Cognitive, social and physiological determinants of emotional state. Psychological Review, 69, 379-399.

Segal, B., Huba, G., & Singer, J. L. (1980). Drugs, daydreaming, and personality: A study of college youth. Hillsdale, NJ: Lawrence Erlbaum.

Selfridge, O. G. (1959). Pandemonium: A paradigm for learning. In Proceedings of the Symposium on the Mechanization of Thought Processes (vol. 1). London: H. M. Stationery Office.

Shanon, B. (1981). Thought sequences and the language of consciousness. In Proceedings of the Third Annual Conference of the Cognitive Science Society (pp. 234-235), Berkeley, CA.

Shepard, R. N., & Metzler, J. (1971). Mental rotation of three-dimensional objects. Science, 171, 701-703.

Shapero, H. (1952). The musical mind. In B. Ghiselin (Ed.), The creative process. New York: Mentor. (Original work published 1946)

Shortliffe, E. H. (1976). Computer-based medical consultations: MYCIN. New York: American Elsevier.

Shortliffe, E. H., & Buchanan, B. G. (1975). A model of inexact reasoning in medicine. Mathematical Biosciences, 23, 351-379.

Silberer, H. (1951). Report on a method of eliciting and observing certain symbolic hallucination-phenomena. In D. Rapaport (Ed.), Organization and pathology of thought. New York: Columbia University Press. (Original work published 1909).

Simon, H. (1967). Motivational and emotional controls of cognition. Psychological Review, 74(1), 29-39.

Simon, H. (1974). How big is a chunk? Science, 183, 482-488.

Singer, J. L. (1966). Daydreaming. New York: Random House.

Singer, J. L. (1974). Daydreaming and the stream of thought. American Scientist, 62, 244-252.

Singer, J. L. (1975). The inner world of daydreaming. New York: Harper & Row.

Singer, J. L. (1978). Experimental studies of daydreaming and the stream of consciousness. In K. S. Pope & J. L. Singer (Eds.), The stream of consciousness. New York: Plenum.

Singer, J. L. (1981). Towards the scientific study of imagination. Imagination, Cognition, and Personality, 1(1), 5-28.

Singer, J. L., & Antrobus, J. S. (1963). A factor analysis of daydreaming and conceptually related cognitive and personality variables. Perceptual and Motor Skills, Monograph supplement, 3-V17.

Singer, J. L., & Antrobus, J. S. (1972). Daydreaming, imaginal processes and personality: A normative study. In P. W. Sheehan (Ed.), The function and nature of imagery. New York: Academic Press.

Singer, J. L., & McCraven, V. (1961). Some characteristics of adult daydreaming. Journal of Psychology, 51, 151-164.

Singer, J. L., & Pope, K. S. (1978). The power of human imagination. New York: Plenum.

Skinner, B. F. (1935). Two types of conditioned reflex and a pseudo type. Journal of General Psychology, 12, 66-77.

Sloman, A., & Croucher, M. (1981). Why robots will have emotions. In Proceedings of the Seventh International Joint Conference on Artificial Intelligence (pp. 197-202). Los Altos, CA: Morgan Kaufmann.

Small, S., & Rieger, C. (1982). Parsing and comprehending with word experts (a theory and its realization). In W. G. Lehnert & M. H. Ringle (Eds.), Strategies for natural language processing. Hillsdale, NJ: Lawrence Erlbaum.

Smirnov, A. A. (1973). Problems of the psychology of memory. New York: Plenum.

Smith, B. (1982). Reflection and semantics in a procedural language (Technical Report 272). Cambridge, MA: Massachusetts Institute of Technology, Laboratory for Computer Science.

Smith, D. E., Genesereth, M. R., & Ginsberg, M. L. (1986). Controlling recursive inference. Artificial Intelligence, 30, 343-389.

Smolensky, P. (1986a). Information processing in dynamical systems: Foundations of harmony theory. In D. E. Rumelhart, J. L. McClelland, & PDP Research Group (Eds.), Parallel distributed processing: Explorations in the microstructure of cognition. Volume 1: Foundations (pp. 194-281). Cambridge, MA: MIT Press.

Smolensky, P. (1986b). Neural and conceptual interpretation of PDP models. In J. L. McClelland, D. E. Rumelhart, & PDP Research Group (Eds.), Parallel distributed processing: Explorations in the microstructure of cognition. Volume 2: Psychological and biological models (pp. 390-431). Cambridge, MA: MIT Press.

Sofer, K. (1981, October). Art? Or Not Art? Datamation, pp. 118-127.

Starker, S. (1982). Fantastic thought: All about dreams, daydreams, hallucinations, and hypnosis. Englewood Cliffs, NJ: Prentice-Hall.

Stoller, R. J. (1979). Sexual excitement: Dynamics of erotic life. New York: Simon and Schuster.

Stoy, J. E. (1977). Denotational semantics: The Scott-Strachey approach to programming language theory. Cambridge, MA: MIT Press.

Suinn, R. M. (1984). Visual motor behavior rehearsal: The basic technique. Scandinavian Journal of Behaviour Therapy, 13(3), 131-


Suppes, P., & Warren, H. (1975). On the generation and classification of defence mechanisms. International Journal of Psycho-Analysis, 56, 405-414.

Sussman, G. J. (1975). A computer model of skill acquisition. New York: American Elsevier.

Sussman, G. J., & Steele, G. L., Jr. (1975). SCHEME: An interpreter for extended lambda calculus (AI Memo 349). Cambridge, MA: Massachusetts Institute of Technology, Artificial Intelligence Laboratory.

Tarnopolsky, Y. (1986). Spontaneous thinking as natural selection. Unpublished manuscript.

Tart, C. T. (Ed.). (1969). Altered states of consciousness. New York: John Wiley.

Thompson, K. (1978). UNIX implementation. The Bell System Technical Journal, 57(6), Part 2, 1931-1946.

Titchener, E. B. (1912). The schema of introspection. American Journal of Psychology, 23, 485-508.

Tomkins, S. S. (1962). Affect, imagery, consciousness. Vol. I. The positive affects. New York: Springer.

Tomkins, S. S. (1963). Affect, imagery, consciousness. Vol. II. The negative affects. New York: Springer.

Touretzky, D. S. (1986). Boltzcons. In The Eighth Annual Conference of the Cognitive Science Society.

Tulving, E. (1972). Episodic and semantic memory. In E. Tulving & W. Donaldson (Eds.), Organization of memory. New York: Academic Press.

Tulving, E. (1983). Elements of episodic memory. New York: Oxford University Press.

Turing, A. M. (1936). On computable numbers, with an application to the Entscheidungsproblem. In Proceedings of the London Mathematical Society, 2(42), 230-265, and 2(43), 544-546.

Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59(236), 433-460.

Varendonck, J. (1921). The psychology of day-dreams. London: George Allen & Unwin.

Wallas, G. (1926). The art of thought. New York: Harcourt, Brace.

Waltz, D. L., & Boggess, L. (1979). Visual analog representations for natural language understanding. In Proceedings of the Sixth International Joint Conference on Artificial Intelligence (pp. 926-934). Los Altos, CA: Morgan Kaufmann.

Waltz, D. L., & Pollack, J. B. (1985). Massively parallel parsing: A strongly interactive model of natural language interpretation. Cognitive Science, 9(1), 75-


Warren, H. C. (1921). A history of the association psychology. New York: Scribner's.

Watkins, M. M. (1976). Waking dreams. New York: Gordon and Breach.

Watson, J. B. (1924). Behaviorism. New York: W. W. Norton.

Weiner, B. (1980a). A cognitive (attribution)-emotion-action model of motivated behavior: An analysis of judgments of help-giving. Journal of Personality and Social Psychology, 39, 186-200.

Weiner, B. (1980b). Human motivation. New York: Holt, Rinehart, & Winston.

Weiner, B. (1982). The emotional consequences of causal attributions. In M. S. Clark & S. T. Fiske (Eds.), Affect and cognition: The 17th Annual Carnegie Symposium on Cognition. Hillsdale, NJ: Lawrence Erlbaum.

Weizenbaum, J. (1966). ELIZA—A computer program for the study of natural language communication between man and machine. Communications of the ACM, 9(1), 36-45.

Weizenbaum, J. (1974). Automating psychotherapy [Letter to the editor]. Communications of the ACM, 17(7), 543.

Wertheimer, M. (1945). Productive thinking. Chicago: University of Chicago Press.

Whitney, J. (1980). Digital harmony: On the complementarity of music and visual art. Peterborough, NH: Byte Books.

Wilensky, R. (1983). Planning and understanding: A computational approach to human reasoning. Reading, MA: Addison-Wesley.

Williams, M. D., & Hollan, J. D. (1981). The process of retrieval from very long term memory. Cognitive Science, 5, 87-119.

Winston, P. H. (1984). Artificial intelligence. Reading, MA: Addison-Wesley.

Winston, P. H., & Horn, B. K. P. (1981). LISP. Reading, MA: Addison-Wesley.

Wirth, N. (1971). The programming language PASCAL. Acta Informatica, 1, 35-63.

Woodworth, R. S., & Schlosberg, H. (1954). Experimental psychology (Revised Edition). New York: Holt, Rinehart, and Winston.

Xenakis, I. (1971). Formalized music. Bloomington, IN: Indiana University Press.

Zajonc, R. B. (1980). Feeling and thinking: Preferences need no inferences. American Psychologist, 35, 151-175.
