The Eightfold Way of Deliberation Dialogue

Peter McBurney,¹* David Hitchcock,² Simon Parsons³

¹ Department of Computer Science, University of Liverpool, Liverpool L69 7ZF, United Kingdom (e-mail: p.j.mcburney@csc.liv.ac.uk)
² Department of Philosophy, McMaster University, Hamilton, Ontario L8S 4K1, Canada (e-mail: hitchckd@mcmaster.ca)
³ Department of Computer and Information Science, Brooklyn College, City University of New York, Brooklyn, New York 11210, USA (e-mail: parsons@sci.brooklyn.cuny.edu)

*Author to whom all correspondence should be addressed.

Deliberation dialogues occur when two or more participants seek to jointly agree on an action or a course of action in some situation. We present the first formal framework for such dialogues, grounding it in a theory of deliberative reasoning from the philosophy of argumentation. We further fully articulate the locutions and rules of a formal dialogue game for this model, so as to specify a protocol for deliberation dialogues. The resulting protocol is suitable for dialogues between computational entities, such as autonomous software agents. To assess our protocol, we consider it against various records of human deliberations, against normative principles for the conduct of human dialogues, and with respect to the outcomes produced by dialogues undertaken according to the protocol. © 2007 Wiley Periodicals, Inc.

International Journal of Intelligent Systems, Vol. 22, 95-132 (2007). DOI 10.1002/int.20191

1. INTRODUCTION

In an influential typology, argumentation theorists Doug Walton and Erik Krabbe¹ classified human dialogues according to the objectives of the dialogue, the objectives of the participants (which may differ from one another), and the information that each participant had available at commencement of the dialogue. This classification resulted in six primary dialogue types, as follows: Information-Seeking Dialogues are dialogues in which one participant seeks the answer to some question(s) from another participant, who is believed by the first to know the answer(s). In Inquiry Dialogues the participants collaborate to answer some question or questions whose answers are not known to any one participant. Persuasion Dialogues involve one participant seeking to persuade another to accept a statement he or she does not currently endorse. In Negotiation Dialogues, the participants bargain over the division of some scarce resource. Here, each participant may be seeking to maximize his or her share of the resource, in which case the individual goals of the participants are in conflict. Participants of Deliberation Dialogues collaborate to decide what action or course of action should be adopted in some situation. Here, participants share a responsibility to decide the course of action, or, at least, they share a willingness to discuss whether they have such a shared responsibility. In Eristic Dialogues, participants seek to vent perceived grievances, and the dialogue may act as a substitute for physical fighting.ᵃ Formal models of several of these dialogue types have been developed in recent years.
For example, models have been proposed for information-seeking dialogues, inquiry dialogues, persuasion dialogues, and negotiation dialogues.ᵇ Moreover, because most real-world dialogues are, in fact, combinations of primary types, models have been proposed for complex combinations of primary dialogues, for example, iterated, sequential, parallel, and embedded dialogues. However, to our knowledge, no general, formal model has yet been proposed for deliberation dialogues, and it is the purpose of this article to present such a model, which we call the Deliberation Dialogue Framework (DDF). For this framework, we draw on a model of deliberative decision making from the philosophy of argumentation, and we use a dialogue-game formalism to define an interaction protocol. Our protocol effectively creates a public space in which multiple participants may interact to jointly decide on a course of action, with the structure and rules of the protocol defining the nature of these interactions.

The article is structured as follows: Section 2 explores the features of deliberation dialogues that are specific to this type of dialogue. Section 3 presents our formal model of deliberation dialogues, drawing on work in the philosophy of argumentation. This is followed, in Section 4, with a dialogue-game formalism for deliberation dialogues that accords with the general model presented in Section 3. The full syntax of the dialogue-game locutions and the rules governing their use, however, are presented in an Appendix. This is followed, in Section 5, with an example of the use of our formalism. We then consider, in Section 6, how we may assess our protocol. Here we consider it against various records of human deliberations, against normative principles for the conduct of human dialogues, and with respect to the outcomes produced by dialogues undertaken according to the protocol. The article ends with a summary of our contribution, along with related and future research, in Section 7.

Before presenting our model, however, there is one aspect of our work that it is important to emphasize. Although our approach is motivated by human deliberation dialogues, we seek in this article to define a model for deliberation interactions only between computational entities, such as autonomous software agents. We use the term dialogue to refer to such interactions because they are analogous to human dialogues and because they may serve similar, or even identical, purposes to human dialogues. However, we are not seeking to model human deliberations or to provide models for natural language explanation, generation, or processing. Thus, this article is not, and does not claim to be, a contribution to computational linguistics. This restricted focus has several implications for our work.

ᵃBecause Eristic dialogues are not generally rule governed, formal models of them may be difficult to develop. However, recent work by Gabbay and Woods has looked at dialogues involving noncooperation and hostility by the participants. We will not consider them further in this article.

ᵇNote that in Ref. 11, two of us proposed a dialogue-game model for agent dialogues over the use of shared resources, dialogues which may incorporate elements of information-seeking, inquiry, persuasion, negotiation, and deliberation.
First, all utterances in a dialogue between agents conducted according to some protocol may be assumed to accord with the rules of that protocol; if one participant utters expressions invalid according to the rules of the protocol, these will not be transmitted to the other participants. This is unlike the situation in human-human or human-machine interactions, where utterances that do not conform to the protocol syntax and combination rules may indeed be transmitted, resulting in considerable efforts being expended by listeners attempting to parse or to understand them. Second, we can assume that agents participating in a dialogue do so of their own free volition and may leave at any time. This contrasts with at least one model of human-human dialogues, that of Paul Grice (Ref. 12, p. 48), in which a conversation between two parties can only end when both parties agree to its termination. We believe our model is more appropriate for an open computational society of autonomous software agents. Third, our assumption of agent autonomy leads us to assume that agents will not enter any dialogue unless and until they perceive it to be in their self-interest (however conceived by the agent concerned) to do so. In particular, agents will require, before entry, a statement of the intended topic of discussion in the dialogue. Unlike in many human dialogues, agents should not need to infer this from the utterances of others in the course of the dialogue.

ᶜThe agents, for example, may be acting on behalf of human principals.

2. DELIBERATION DIALOGUES

What distinguishes deliberation dialogues from other types of dialogue in the Walton and Krabbe typology? A first characteristic arises from the focus of a deliberation, which concerns what is to be done in some situation by someone, either an individual or a group of individuals. This focus on action distinguishes deliberation dialogues from inquiry and information-seeking dialogues, although not from persuasion and negotiation dialogues; these latter two may also be about action. Moreover, information-seeking and inquiry dialogues involve a search for the true answer to some factual question, either by one participant or by all. In such a search for truth, appeals to value assumptions (goals, preferences, etc.) would be inappropriate. However, this is not the case for deliberations, in which a course of action may be selected on the basis of such considerations.

A second characteristic of deliberation dialogues is the absence of a fixed initial commitment by any participant on the basic question of the dialogue. Although the participants may express individual positions about what is to be done, the discussion is a mutual one directed at reaching a joint decision over a course of action; the actions under consideration, however, need not be joint, and may indeed be enacted by others not participating in the dialogue. A deliberation dialogue is not, at least not at its outset, an attempt by one participant to persuade any of the others to agree to an initially defined proposal. In this respect, deliberation dialogues differ from persuasion dialogues. Indeed, the governing question of a deliberation dialogue may change in the course of the dialogue, as participants examine the issues associated with it.

A third characteristic of deliberations relates to their mutual focus.
Although the participants may evaluate proposed courses of action according to different standards or criteria, these differences are not with respect to personal interests that they seek to accommodate in the resulting decision. In this respect, a deliberation dialogue differs from a negotiation dialogue, which concerns the division of some scarce resource between competing allocations and so must deal with reconciling potentially competing interests. In a negotiation, for example, it may be deleterious for a participant to share its information and preferences with others. But a sharing strategy should behoove participants in a deliberation; to the extent that agents are unwilling to share information or preferences, we would define their discussion to be a negotiation and not a deliberation.

These last two characteristics lead to an important observation about deliberation dialogues. An action-option that is optimal for the group when considered as a whole may be seen as suboptimal from the perspective of each of the participants to the deliberation. This could be because a demonstration of optimality requires more information than is held by any one participant at the start of the dialogue or because individual participants do not consider all the relevant criteria for assessment.ᵈ Similarly, an option for which the group has a compelling argument may be such that no one participant, on his or her own, has such an argument; only by pooling information or resources is the group able to construct a winning argument for the option. This characteristic means that an assumption of an individual sincerity condition on agent utterances (e.g., in the FIPA Agent Communications Language ACL) may not be appropriate: with this condition, the optimal option would never be proposed if no one participant has, on its own, a compelling argument for it. Moreover, real-life deliberations often benefit from whimsical or apparently random proposals, which lead participants to discuss creative ("off-the-wall") alternatives.

How do dialogues commence and proceed? Information-seeking dialogues, persuasions, and inquiries each commence with a question or a statement by a participant and proceed by means of responses from other participants. Likewise, negotiation dialogues arise when a resource needs to be divided, and they can commence with a proposal by a participant to divide the resource in some manner, perhaps optimally for that participant. The negotiation will then proceed via responses to this proposal, including counterproposals, and these responses, in the best case, converge on a mutually acceptable settlement. This is how auction and economic negotiation mechanisms, such as the monotonic concession protocol, are conducted; one may view these as protocols for negotiation dialogues with limitations on the nature and content of the permitted utterances.

A deliberation dialogue arises with a need for action in some circumstance.

ᵈFor example, as Rehg has noted, one benefit of public discussion of proposed governmental actions is that participants in the discussion learn about the consequences of action-options for others of which they were not, or even could not have been, previously aware. For this reason, decision processes that incorporate public discussion may produce better quality outcomes than those which do not, as argued in Ref. 14.
In general human discourse, this need may be initially expressed in governing questions that are quite open-ended, as in, "Where shall we go for dinner this evening?" or "How should we respond to the prospect of global warming?" Proposals for actions to address the expressed need may only arise late in a dialogue, after discussion on the governing question and discussion on what considerations are relevant to its resolution. When possible courses of action are proposed, they may be evaluated on a large number of attributes, including their direct or indirect costs and benefits, their opportunity costs, their consequences, their practical feasibility, their ethical, moral, or legal implications, their resourcing implications, their likelihood of realization or of success, their conformance with other goals or strategies, their timing, duration, or location, and so forth. Negotiations over multiattribute outcomes share the characteristic of multidimensionality with deliberations.

To achieve resolution of a deliberation dialogue, one or more participants must make a proposal for an appropriate course of action. But where do such proposals for action arise? And how do the participants know when they have identified all the possible alternatives, or at least all those alternatives worth considering? These are not easy questions, for human or for machine deliberators.

3. A FORMAL MODEL OF DELIBERATIONS

Guided by the considerations of the previous section, we now present a formal, high-level model for deliberation dialogues. Our work adopts a structure similar to the idealized, five-stage model for negotiation dialogues proposed by Hulstijn.ᵉ We also draw on a domain-specific decision theory, the retroflexive argumentation model for nondeductive argument of Wohlrapp. This model talks of a matter-in-question, equivalent to a governing question or a proposal for action, being considered from a number of different frames or perspectives; we use the latter term, to avoid confusion with Reed's Dialogue Frames. As mentioned above, perspectives may be factors such as moral implications, opportunity costs, and so forth. An argument for or against a particular option is a partial understanding of that option from one or more, but rarely all, perspectives. Having heard an argument for or against an option, Wohlrapp argues, one proceeds by reexamining the underlying assumptions or modifying the action proposal in the light of that argument. Thus, an argument against a law permitting euthanasia may be that such practices are open to abuse of ill patients by malicious relatives. A retroflexive response to this argument is to modify the proposed law by adding restrictions that inhibit or preclude such abuses, such as a requirement that the patient be of sound mind and give prior consent to the act of euthanasia.

ᵉHulstijn calls these negotiation dialogues Transactions.

With Wohlrapp's model in mind, we assume that the subject matter of dialogues can be represented in a symbolic language, with sentences and sentential functions denoted by lowercase Roman letters, for example, p, q, .... We define the following types of sentences:

Actions: An action is a sentence representing a deed or an act (possibly a speech act) that may be undertaken or recommended as a result of the deliberation dialogue.
The purpose of the deliberation dialogue is to decide on an answer to the governing question, which will be some (course of) action. Possible actions are also called action-options.

Goals: A goal is a sentence representing a future world state (external to the dialogue), possibly arising following execution of one or more actions and desired by one or more participants. Goals express the purpose(s) for which actions are being considered in the dialogue.

Constraints: A constraint is a sentence expressing some limitation on the space of possible actions.

Perspectives: A perspective is a sentence representing a criterion by which a potential action may be evaluated by a participant.

Facts: A fact is a sentence expressing some possible state of affairs in the world external to the dialogue.

Evaluations: An evaluation is a sentence expressing an assessment of a possible action with respect to a goal, constraint, or perspective.

These types are mutually exclusive. With these elements defined, we now present a formal model of the dialogue itself, a model which consists of eight stages:

Open: Opening of the deliberation dialogue and the raising of a governing question about what is to be done.

Inform: Discussion of (a) desirable goals, (b) any constraints on the possible actions which may be considered, (c) perspectives by which proposals may be evaluated, and (d) any premises (facts) relevant to this evaluation.

Propose: Suggesting of possible action-options appropriate to the governing question.

Consider: Commenting on proposals from various perspectives.

Revise: Revising of (a) goals, (b) constraints, (c) perspectives, and/or (d) action-options in the light of the comments presented and the undertaking of any information-gathering or fact-checking required for resolution. (Note that other types of dialogues, such as information seeking or persuasion, may be embedded in the deliberation dialogue at this stage.)

Recommend: Recommending an option for action and acceptance or nonacceptance of this recommendation by each participant.

Confirm: Confirming acceptance of a recommended option by each participant. We have assumed that all participants must confirm their acceptance of a recommended option for normal termination.

Close: Closing of the deliberation dialogue.

This is a model of an ideal dialogue. The stages may occur in any order, and may be entered by participants as frequently as desired, subject only to the following constraints:

The first stage in every dialogue is the Open stage. Once a second participant enters the dialogue, the dialogue is said to be open. The Open stage in any deliberation dialogue may occur only once in that dialogue. All other stages may occur more than once. One deliberation dialogue may be embedded in another, so that successive Open stages, each belonging to a different deliberation dialogue, may occur.

The only stages that must occur in every dialogue that terminates normally are Open and Close.

At least one instance of the Inform stage must precede the first instance of every other stage, excepting Open and Close.

At least one instance of the Propose stage must precede the first instance of the Consider, Revise, Recommend, and Confirm stages.

At least one instance of the Consider stage must precede the first instance of the Revise stage.

The Confirm stage can only be entered following an instance of a Recommend stage.
Upon successful completion of an instance of the Confirm stage, the dialogue must enter the Close stage.

The last stage in every dialogue that terminates normally is the Close stage.

Subject only to the constraints expressed in these rules and constraints expressed in the locution-combination rules (articulated below), participants may enter any stage from within any other stage at any time.

Some comments are appropriate on the rules constraining the order of stages. First, the participants may enter a Close stage more than once in a particular dialogue. As the locution rules below will demonstrate, participants are required to indicate publicly that they wish to leave the dialogue. Whenever a participant does this, the dialogue enters a Close stage. However, the Close stage remains unconcluded, and the dialogue remains open, as long as there are at least two participants who wish to continue speaking. It is therefore possible for the Close stage, as with all the other stages except the Open stage, to be entered multiple times in any one dialogue.

Second, we have assumed for simplicity in this initial model that unanimity of the participants is required for a decision on a course of action to be made. It would be quite possible for the participants to adopt a different procedure for confirmation, such as majority voting or consensus procedures, as modeled formally in Ref. 19. If alternative voting procedures were to be adopted, it would be useful to announce the results of any votes formally to the participants, with a statement of the group's decision, just as the minutes of human meetings usually record these. For this reason, we have demarcated a separate stage, Confirm, to record final commitments to action. In addition, the requirement that participants once again assert their endorsement of a particular course of action reinforces their commitment to this course as the group's decision. Once all participants have confirmed their acceptance of a recommended action, the dialogue must end, and any further discussion relevant to the same governing question can only occur by commencement of a new deliberation dialogue.

Apart from the constraints listed here, the order of stages is not fixed, and participants may return to different stages multiple times in any one dialogue. Thus, a dialogue undertaken according to this model may cycle repeatedly through these stages, just as human dialogues do. In this way, the protocol here gives practical effect to Wohlrapp's model of retroflexive argumentation. The model is also quite general; we have not specified the nature of the governing questions, goals, constraints, facts, action-options, perspectives, or evaluations. Nor have we specified here any particular mechanisms for producing, revising, or accepting action-options.
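The stage structure and its ordering constraints lend themselves to a direct encoding. The following Python sketch is illustrative only (the names Stage, MUST_PRECEDE, and respects_stage_rules are our own, not part of the DDF specification); it simply checks a sequence of stage entries against the constraints just listed.

```python
from enum import Enum, auto

class Stage(Enum):
    OPEN = auto()
    INFORM = auto()
    PROPOSE = auto()
    CONSIDER = auto()
    REVISE = auto()
    RECOMMEND = auto()
    CONFIRM = auto()
    CLOSE = auto()

# For each key stage, the stages whose first occurrence it must precede.
MUST_PRECEDE = {
    Stage.INFORM:    {Stage.PROPOSE, Stage.CONSIDER, Stage.REVISE,
                      Stage.RECOMMEND, Stage.CONFIRM},
    Stage.PROPOSE:   {Stage.CONSIDER, Stage.REVISE, Stage.RECOMMEND, Stage.CONFIRM},
    Stage.CONSIDER:  {Stage.REVISE},
    Stage.RECOMMEND: {Stage.CONFIRM},
}

def respects_stage_rules(stages):
    """Check a sequence of stage entries against the ordering constraints above."""
    if not stages or stages[0] is not Stage.OPEN:
        return False                      # every dialogue begins with the Open stage
    if stages.count(Stage.OPEN) != 1:
        return False                      # Open occurs only once in a dialogue
    seen = set()
    for stage in stages:
        for earlier, later in MUST_PRECEDE.items():
            if stage in later and earlier not in seen:
                return False              # e.g. a Propose stage before any Inform stage
        seen.add(stage)
    # A normally terminating dialogue ends with the Close stage.
    return stages[-1] is Stage.CLOSE

# The stage sequence visible in the abbreviated example of Section 5 satisfies the rules:
assert respects_stage_rules([Stage.OPEN, Stage.INFORM, Stage.PROPOSE, Stage.CONSIDER,
                             Stage.REVISE, Stage.INFORM, Stage.CONSIDER,
                             Stage.CLOSE, Stage.RECOMMEND, Stage.CLOSE])
```

Note that the sketch deliberately leaves unconstrained everything the model leaves unconstrained: stages may repeat, and their order is otherwise free.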
4. LOCUTIONS FOR A DELIBERATION DIALOGUE PROTOCOL

4.1. Introduction

We now articulate the locutions of a formal dialogue game that enables a deliberation dialogue to be conducted according to the eight-stage model just presented. Dialogue games are interactions between two or more participants who "move" by uttering locutions, according to certain rules. They were first studied by Aristotle and have been used in modern philosophy to understand fallacious arguments and to provide a game-theoretic semantics for formal logical systems. Over the last decade they have been applied in various areas of computer science and artificial intelligence, for the specification of software systems with multiple stakeholders, for the design of man-machine interfaces, for the analysis of complex human reasoning, and for the design of interaction protocols for autonomous software agents.ᵍ

A dialogue game may be specified by listing the legal locutions, together with the rules that govern their use, and the commencement and termination of dialogues. In this section, we present only the locutions, and not also the necessary preconditions for, and the consequences of, their utterance; these conditions are presented in detail in the Appendix. We continue to assume that the subject matter of dialogues can be represented in a sentential language by lowercase Roman letters, and we denote participating agents by P1, P2, and so forth.

Since the work of Hamblin it has been standard to define a public store, called a commitment store, for each participant in a dialogue game. We denote the store of agent Pi by CS(Pi). This store contains the sentences to which the participant is publicly committed, and the rules of the dialogue game may also define the circumstances under which sentences may be inserted or deleted from the commitment stores. The store for an agent contains the various sentences that that agent has publicly asserted or preferences he or she has declared; entries in the store are thus of two forms: (a) 2-tuples of the form (type, t), where t is a valid sentence instance of type type, with type ∈ {goal, constraint, perspective, fact, action, evaluation}; and (b) 3-tuples of the form (prefer, a, b), where a and b are action sentences. Each store can be viewed by all participants, but only a participant's own utterances lead to insertions into its associated store.ʰ

ᶠWohlrapp's model of retroflexive argumentation and the formalization of it presented here have some similarities with Imre Lakatos' theory of mathematical discovery. According to Lakatos, mathematicians work by proposing statements they believe may be theorems and then seeking proofs for these. In doing so, a counterexample to the proposed theorem may be found, which leads the mathematician to modify the proposal. A new attempt at seeking a proof is then undertaken, with the process repeated until such time as a theorem is identified for which a proof can be found. The theories of Lakatos and Wohlrapp may be seen as describing (in part) arguments that proceed by precization, in the terminology of Naess.

ᵍDialogue games have also been used in computational linguistics to model natural language conversations (e.g., Ref. 30), although this work appears unaware of their far longer use in philosophy.

ʰIn other words, the Commitment Stores are private-write and public-read data stores.

4.2. Locutions

With this introduction, we are able to articulate the permissible locutions in the dialogue game:

open_dialogue(Pi, q?): Participant Pi proposes the opening of a deliberation dialogue to consider the governing question q?, where q is a sentence of type action, or a sentential function whose values are of type action (possibly conjoined with a sentence that exactly one sequence of objects satisfies the function). A dialogue may only commence with this move.
enter_dialogue(Pj, q?): Participant Pj indicates a willingness to join a deliberation dialogue to consider the governing question q?. All intending participants other than the mover of open_dialogue(.) must announce their participation with this move. Note that neither the open_dialogue(.) nor the enter_dialogue(.) move implies that the speaker accepts that q? is the most appropriate governing question, only that he or she is willing to enter into a discussion about it at this time.

propose(Pi, type, t): Participant Pi proposes sentence t as a valid instance of type type, where type ∈ {goal, constraint, perspective, fact, action, evaluation}.

assert(Pi, type, t): Participant Pi asserts sentence t as a valid instance of type type, where type ∈ {goal, constraint, perspective, fact, action, evaluation}. This is a stronger locution than propose(.), and results in the tuple (type, t) being inserted into CS(Pi), the Commitment Store of Pi. In the case where the utterance here is assert(Pi, action, t) and follows an utterance of move(Pj, action, t), for some other agent Pj, then this utterance also removes any earlier entry in the Commitment Store CS(Pi) of the form (action, s).

prefer(Pi, a, b): Participant Pi indicates a preference for action-option a over action-option b. This locution can only be uttered following utterance (possibly by other participants) of assert(Pj, evaluation, e) locutions of at least two evaluations e, one of which has a as its first argument and one b. This combination rule ensures that preferences expressed in the dialogue are grounded in an evaluation of each action-option according to some proposed goal, constraint, or perspective, and thus contestable. This locution inserts (prefer, a, b) into CS(Pi), the Commitment Store of Pi.

ask_justify(Pj, Pi, type, t): Participant Pj asks participant Pi to provide a justification of sentence t of type type, where t ∈ CS(Pi).

move(Pi, action, a): Participant Pi proposes that each participant pronounce on whether they assert sentence a as the action to be decided upon by the group. This locution inserts (action, a) into CS(Pi) and removes any earlier entry in the Commitment Store of the form (action, b).ⁱ

reject(Pi, action, a): Participant Pi rejects the assertion of sentence a as the action to be decided upon by the group. If the Commitment Store CS(Pi) of participant Pi contains (action, a) prior to this utterance, then it will be removed upon utterance.

retract(Pi, locution): Participant Pi expresses a retraction of a previous locution, locution, where locution is one of three possible utterances: an assert(Pi, type, t), move(Pi, action, a), or prefer(Pi, a, b) locution. The retraction locution deletes the entry from CS(Pi) that had been inserted by locution.

withdraw_dialogue(Pi, q?): Participant Pi announces her withdrawal from the deliberation dialogue to consider the governing question q?.

ⁱThe name of this locution derives from the standard terminology of human meeting procedures, for example, Robert's Rules of Order (Ref. 31, Section 4(1), p. 31).

The locution ask_justify(Pj, Pi, type, t) is a request by participant Pj of participant Pi, seeking justification from Pi for the assertion that sentence t is a valid instance of type type. Following this, Pi must either retract the sentence t or shift into an embedded persuasion dialogue in which Pi seeks to persuade Pj that sentence t is such a valid instance. One could model such a persuasion dialogue with a formal dialogue-game framework consistent with the deliberation framework presented here, drawing, for example, on the dialogue game models of persuasion proposed by Walton and Krabbe¹ or by Prakken.
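As a rough indication of how these locutions might be represented inside an agent, the sketch below models each utterance as a small immutable record. The class and field names (Locution, performative, args, addressee) and the string encoding of sentences are our own illustrative choices; the preconditions and commitment-store effects of each locution, given in the Appendix, are not modeled here.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

SENTENCE_TYPES = {"goal", "constraint", "perspective", "fact", "action", "evaluation"}

PERFORMATIVES = {"open_dialogue", "enter_dialogue", "propose", "assert", "prefer",
                 "ask_justify", "move", "reject", "retract", "withdraw_dialogue"}

@dataclass(frozen=True)
class Locution:
    performative: str                  # one of PERFORMATIVES
    speaker: str                       # e.g. "P1"
    args: Tuple[str, ...]              # remaining arguments, as in the definitions above
    addressee: Optional[str] = None    # used only by ask_justify(Pj, Pi, type, t)

    def __post_init__(self):
        # Basic well-formedness checks; a full implementation would also check
        # the preconditions and combination rules of the protocol.
        if self.performative not in PERFORMATIVES:
            raise ValueError(f"unknown locution: {self.performative}")
        if self.performative in {"propose", "assert", "ask_justify"} \
                and self.args[0] not in SENTENCE_TYPES:
            raise ValueError(f"unknown sentence type: {self.args[0]}")

# Illustrative utterances (the sentence contents are invented for this sketch):
u1 = Locution("open_dialogue", "P1", ("what shall we do about q?",))
u2 = Locution("assert", "P2", ("fact", "handset radiation is below guideline levels"))
u3 = Locution("ask_justify", "P3",
              ("fact", "handset radiation is below guideline levels"), addressee="P2")
```

Here u3 is legal only because the fact it questions is already in CS(P2) after u2, matching the precondition on ask_justify noted above.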
The move(.) locution requests that participants who agree with a particular action being decided upon by the group should utter an assert(.) locution with respect to this action. To communicate rejection of the proposed action made in a move(.) locution, a participant must utter a reject(.) locution with respect to the proposed action. Because in this model we have assumed unanimity of decision making, the Recommend stage is only concluded successfully, and hence the dialogue only proceeds to the Confirm stage, in the case when all participants respond to the move(.) locution with the appropriate assert(.) locution.

4.3. Deliberation Dialogues

We intend that the dialogue game protocol defined in Subsection 4.2 should implement the eight-stage model for deliberation dialogues proposed in Section 3. To achieve this, we need to demonstrate that each of the eight stages of the formal model of deliberation dialogues can be executed by judicious choice of these locutions. We show this by considering each stage in turn:

The Open stage of a dialogue begins with the locution open_dialogue(Pi, q?) and at least one utterance of enter_dialogue(Pj, q?), for Pi and Pj distinct participants.

The Inform stage consists of utterances of propose(.), assert(.), retract(.), and ask_justify(.) for some or all of the types goal, constraint, perspective, and fact.

The Propose stage consists of one or more utterances of propose(Pi, action, t).

The Consider stage consists of utterances of the locutions assert(Pi, evaluation, e), prefer(Pi, a, b), and ask_justify(.).

In the Revise stage, a revision a2 to an action a1 proposed earlier may be proposed by means of the locution propose(Pi, action, a2). Similarly, the locution propose(Pi, type, t2) may be used to propose a revision t2 to a prior proposal t1, for any of the types goal, constraint, perspective, evaluation, and fact.

The Recommend stage consists of an execution of move(Pi, action, a), followed by utterances of assert(Pj, action, a) or reject(Pj, action, a), for Pi and Pj distinct participants.

The Confirm stage only occurs following a Recommend stage in which all participants have indicated acceptance of the recommended action-option. It then consists of the utterance of assert(Pj, action, a) by every participant Pj, including the speaker of move(Pi, action, a).

The Close stage occurs whenever a participant utters withdraw_dialogue(Pi, q?). A dialogue closes only when there remain two participants who have not uttered this locution and one of them does so.

Thus, the dialogue game protocol defined in the previous subsection enables participants in an interaction to undertake a deliberation dialogue that conforms to the model proposed in Section 3. Essentially what we have done here is show that the definitions of the dialogue game locutions are consistent with the definitions of the eight stages given earlier.
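The correspondence just described between stages and locutions can be tabulated. The following sketch is a simplification under our own naming (STAGE_LOCUTIONS, stages_admitting): it records only which locution names characteristically realize each stage, ignoring the side conditions on sentence types and on prior utterances stated above.

```python
# Locutions characteristically uttered in each stage (cf. Section 4.3), ignoring
# side conditions on sentence types and on what has been uttered previously.
STAGE_LOCUTIONS = {
    "Open":      {"open_dialogue", "enter_dialogue"},
    "Inform":    {"propose", "assert", "retract", "ask_justify"},
    "Propose":   {"propose"},                  # with sentences of type action
    "Consider":  {"assert", "prefer", "ask_justify"},
    "Revise":    {"propose"},                  # revised goals, constraints, actions, ...
    "Recommend": {"move", "assert", "reject"},
    "Confirm":   {"assert"},                   # one assert(., action, a) per participant
    "Close":     {"withdraw_dialogue"},
}

def stages_admitting(performative):
    """Stages in which a locution with this name may occur, under the mapping above."""
    return sorted(s for s, locs in STAGE_LOCUTIONS.items() if performative in locs)

print(stages_admitting("assert"))   # ['Confirm', 'Consider', 'Inform', 'Recommend']
```

The table makes visible a point used below: the same locution name, notably assert(.), serves different stages depending on the sentence type it carries and on the utterances that precede it.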
We note that nothing in our protocol requires all dialogues to terminate or that all dialogues have substantive meaning. Thus, for example, one participant could initiate a dialogue with an open_dialogue(.) utterance followed by an enter_dialogue(.) utterance by another participant, only for the dialogue to then go silent. How long the two participants wait before speaking again or departing (if ever) is a matter for them, not the protocol.

4.4. Commitments

Some comments on our notion of commitments are in order here, as this concept has different connotations for different authors. For Hamblin (Ref. 23, p. 257), commitment is purely dialectical, expressing only a willingness by the participant who has made the commitment to defend the commitment if it is attacked; in particular, commitments need not correspond to the participant's real beliefs. For Walton and Krabbe (Ref. 1, Chapter 1), however, commitments are obligations to (execute, incur, or maintain) a course of action. These actions may be utterances in a dialogue, as when a speaker is forced to defend a statement he has asserted against attack from others; for these authors, propositional commitment is a special case of action commitments (Ref. 1, p. 23). For Munindar Singh and Marco Colombetti and their colleagues, social commitments are an expression of wider interpersonal, social, business, or legal relationships between the participants, and utterances in a dialogue are a means by which these relationships may be manipulated or modified.ʲ We adopt Hamblin's understanding of commitments as representing dialectical obligations; we do not require that commitments correspond to the participants' real beliefs, preferences, or intentions at the time of the dialogue, nor that they indicate an intention to undertake some actions outside the world of and subsequent to the dialogue. Rather they represent statements that a speaker is committed to defend, if and when they are attacked inside the dialogue by other participants. The main purpose of Commitment Stores in our framework, then, is to track these dialectical obligations of the participants.

An important motivation for our work is the development of protocols which enable rational interaction between participants, where rational is used in the minimal sense of giving and receiving of reasons for statements.ᵏ Thus, our constraint that preferences between actions only be expressed for actions that have already been evaluated is intended to ensure that participant preferences are grounded in some reason, rather than simply being assumed to exist ab initio.

By supporting rational interaction, an interaction mechanism provides for the participants to change their beliefs, preferences, or intentions in the light of information or arguments received from other participants. Political theorists use the term self-transformation to refer to such changes that participants may experience in the course of a discussion, and, as will be shown in Section 6.2 below, our protocol enables this. Because of this, we permit participants to make utterances that contradict their own prior utterances, or the utterances of others, and to retract prior utterances. For example, a participant may express a preference for action-option a over option b, but then vote for b—via an assert(Pi, action, b) utterance—when another participant Pj utters move(Pj, action, b).
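A commitment store under these rules can be sketched as follows. This is a minimal illustration under our own naming (CommitmentStore, prefer_is_legal); it shows the private-write, public-read discipline and the combination rule that prefer(Pi, a, b) is legal only once both action-options have appeared as the first argument of asserted evaluations, possibly made by different participants. It is not the axiomatic semantics given in the Appendix.

```python
class CommitmentStore:
    """Public-read store of one participant's dialectical commitments.
    Only that participant's own utterances may insert or delete entries."""

    def __init__(self, owner):
        self.owner = owner
        self.entries = set()   # ("type", content) pairs and ("prefer", a, b) triples

    def insert(self, entry):
        if entry[0] == "action":
            # Simplification of the move/assert rules: a new action commitment
            # displaces any earlier commitment to an action.
            self.entries = {e for e in self.entries if e[0] != "action"}
        self.entries.add(entry)

    def retract(self, entry):
        self.entries.discard(entry)


def prefer_is_legal(a, b, stores):
    """prefer(Pi, a, b) may be uttered only after evaluations of both a and b have
    been asserted by some participants. Here evaluations are encoded as entries of
    the form ("evaluation", (action_option, perspective, verdict))."""
    evaluated = set()
    for store in stores:
        for entry in store.entries:
            if entry[0] == "evaluation":
                evaluated.add(entry[1][0])      # the action-option being evaluated
    return a in evaluated and b in evaluated


# In the example of Section 5 below, the preference expressed at U13 is legal
# because both action-options have been evaluated by that point:
cs = {p: CommitmentStore(p) for p in ("P1", "P2", "P3")}
cs["P1"].insert(("evaluation", ("prohibit sale", "degree of risk", "lowest risk")))
cs["P1"].insert(("evaluation", ("limit usage", "feasibility", "impractical")))
assert prefer_is_legal("prohibit sale", "limit usage", cs.values())
```

The deliberately liberal behavior discussed next (few consistency checks across entries and across participants) corresponds to the small number of rules encoded here.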
As can be seen from inspection of the axiomatic semantics given in the Appendix, the protocol rules governing the contents of participant Commitment Stores are few. Only three locutions—assert(.), prefer(.), and move(.)—result in new entries to the speaker's Commitment Store, whereas four—assert(., action, .), move(.), reject(.), and retract(.)—may cause deletions. Interactions between multiple commitments in one speaker's Commitment Store are ignored, except when the speaker utters move(.) or assert(., action, .) following a move(.) utterance. In other words, only when a deliberation dialogue is in a Recommend stage do we consider consistency of a speaker's commitments important, and then only for assertion of actions. Moreover, the protocol is not concerned with the consistency of the contents of the Commitment Stores of two or more participants. Thus, one participant may assert two action options and another participant express a preference for one option over the other; in this case, the preference commitment created by the second speaker remains in its Commitment Store, even if the first speaker subsequently retracts one or both of its earlier assertions. We believe this liberal approach is necessary for a protocol for open agent systems, where participants may have very different goals, desires, and intentions, and may have been created by different agent design teams.

ʲIn the multiagent systems literature, the word commitments can also refer to an agent's persistent intentions. Singh argues that this notion is distinct from the social commitments described here, and that neither can be derived from the other.

ᵏOur approach is consistent with recent approaches to practical reasoning by philosophers, such as Searle, and economists, such as Sen.

5. EXAMPLE

We now consider a simplified example of a dialogue undertaken according to our deliberation dialogue protocol. In this example, the deliberation concerns what action to take regarding potential health hazards from the use of cellular phones. The dialogue utterances are numbered sequentially from U1, and each is annotated.

U1: open_dialogue(P1, Do what about mobile phone health risk?)
This move is the first move in the Open stage of the dialogue.

U2: enter_dialogue(P2, Do what about mobile phone health risk?)
With the entry of a second participant, the dialogue may be said to commence.

U3: enter_dialogue(P3, Do what about mobile phone health risk?)
A third participant also enters the dialogue.

U4: propose(P2, perspective, degree of risk)
Participant P2 proposes that degree of risk should be a perspective from which to consider the question. With this move, the dialogue enters an Inform stage.

U5: propose(P3, perspective, economic cost)
Participant P3 proposes that economic cost should be a perspective from which to consider the question.

U6: propose(P1, action, prohibit sale of phones)
Participant P1 proposes prohibition of the sale of phones as an action-option. With this move, the dialogue enters a Propose stage.

U7: propose(P3, action, do nothing)
Participant P3 proposes doing nothing as an action-option.
U8: assert(P1, evaluation, prohibit sale from a degree of risk perspective is lowest risk)
Participant P1 asserts that from the perspective of the degree of risk, prohibiting the sale of phones is the lowest-risk action-option possible. With this move, the dialogue enters a Consider stage.

U9: assert(P3, evaluation, prohibit sale from an economic cost perspective is high-cost)
Participant P3 asserts that from the perspective of economic cost, prohibiting sale is a high-cost option.

U10: propose(P1, action, limit usage)
Participant P1 proposes limiting usage as an action-option, thus responding retroflexively to the previous two assert(Pi, evaluation, e) locutions. With this move, the dialogue enters a Revise stage.

U11: propose(P3, perspective, feasibility)
Participant P3 proposes feasibility as a perspective from which to consider the question. With this move, the dialogue enters another Inform stage.

U12: assert(P1, evaluation, limit usage from a feasibility perspective is impractical)
Participant P1 asserts that from the perspective of feasibility, limiting usage is not practical. With this move, the dialogue enters another Consider stage.

U13: prefer(P1, prohibit sale, limit usage)
Participant P1 expresses a preference for the option of prohibiting the sale of phones over limiting their usage. The utterance is valid at this point, because each action-option has appeared as the first argument in a sentence e of type evaluation in an assert(Pi, evaluation, e) locution.

U25: withdraw_dialogue(P2, Do what about mobile phone health risk?)
One participant, the second to enter the dialogue, announces its departure from the dialogue. The dialogue may continue until one of the other two participants withdraws.

U26: move(P1, action, limit usage)
One participant seeks to have the remaining participants vote on the action-option of limiting phone usage.

U27: reject(P3, action, limit usage)
The other remaining participant votes against this. Whether or not this would defeat the motion moved by Participant P1 would depend on the decision-making rules of the forum (for example, the rules may provide for differential weighting of votes).

U35: withdraw_dialogue(P3, Do what about mobile phone health risk?)
A second participant announces its departure from the dialogue, leaving just one participant remaining. This utterance therefore ends the dialogue.

This example, although very simple, illustrates the usage of selected locutions, and demonstrates the way in which a dialogue may move between stages as it proceeds. Such cycling between stages is commonplace in human deliberations, where comments, arguments, or preferences uttered by one participant are likely to provoke others to think of new goals, constraints, facts, perspectives, or action-options.
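For concreteness, the abbreviated transcript above can also be written down as data, which an implementation of the protocol could then replay and check. The encoding below is our own, with the utterances between U13 and U25 and between U27 and U35 not shown, as above; each entry records the label, speaker, locution name, and arguments.

```python
Q = "Do what about mobile phone health risk?"

TRANSCRIPT = [
    ("U1",  "P1", "open_dialogue",     (Q,)),
    ("U2",  "P2", "enter_dialogue",    (Q,)),
    ("U3",  "P3", "enter_dialogue",    (Q,)),
    ("U4",  "P2", "propose",           ("perspective", "degree of risk")),
    ("U5",  "P3", "propose",           ("perspective", "economic cost")),
    ("U6",  "P1", "propose",           ("action", "prohibit sale of phones")),
    ("U7",  "P3", "propose",           ("action", "do nothing")),
    ("U8",  "P1", "assert",            ("evaluation", "prohibit sale is lowest risk (degree of risk)")),
    ("U9",  "P3", "assert",            ("evaluation", "prohibit sale is high cost (economic cost)")),
    ("U10", "P1", "propose",           ("action", "limit usage")),
    ("U11", "P3", "propose",           ("perspective", "feasibility")),
    ("U12", "P1", "assert",            ("evaluation", "limit usage is impractical (feasibility)")),
    ("U13", "P1", "prefer",            ("prohibit sale", "limit usage")),
    # utterances U14 to U24 not shown
    ("U25", "P2", "withdraw_dialogue", (Q,)),
    ("U26", "P1", "move",              ("action", "limit usage")),
    ("U27", "P3", "reject",            ("action", "limit usage")),
    # utterances U28 to U34 not shown
    ("U35", "P3", "withdraw_dialogue", (Q,)),
]

# A simple sanity check: count how often each locution name is used.
from collections import Counter
print(Counter(locution for _, _, locution, _ in TRANSCRIPT))
```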
6. ASSESSMENT OF THE DDF PROTOCOL

How may we assess the Deliberation Dialogue Framework model of a deliberation dialogue and the associated dialogue game protocol? In other words, is this a good protocol or not? There are several ways to approach this issue, and in the next three subsections we consider three of these. First, we compare our protocol with actual human deliberation dialogues; second, we consider the DDF protocol from the perspective of the deliberation processes it implements; third, we consider the outcomes, if any, that deliberation dialogues conducted under the DDF protocol achieve.

6.1. Human Dialogues

Although we intend our framework to support only interactions between computational entities, its motivation and structure derive from consideration of human deliberation dialogues. Therefore, one approach to the assessment of the framework would be to ask whether it provides a good model of actual human deliberation dialogues. However, in doing so, it is important to realize that our framework is an idealization of human dialogues in at least two respects. First, the framework presupposes cognitive abilities on the part of the participants that probably exceed those of most human deliberators, for instance, maintaining conformity with the preconditions of locution utterance, adhering to the rules regarding the order of dialogue stages, and keeping track of the contents of commitment stores of all participants as the discussion proceeds. Second, actual human dialogues undoubtedly contain more irrelevancies, rigidities, interruptions, and transitions to other types of dialogues that are not functionally embedded than does our framework.

Given this reality, there are two features of actual human dialogues that could lead us to revise our framework: the absence in human dialogues of constructive use of a type of move presently included in our framework, or the presence of constructive moves in human dialogues that our framework does not accommodate.

On the first of these, most readers will have experienced human deliberation dialogues in which instances of the various locutions we have proposed have been used. For instance, if a group of friends decide to have dinner together and jointly seek to agree on a restaurant, often one or more participants will make proposals on which restaurant to select. Some may even propose that criteria for selection be established first, for example, that the restaurant be within walking distance or provide food of a certain cuisine or be within a certain price range. Similarly, once suggested, such proposals may be subject to requests for justification, statements of preference, or suggestions that a particular option be selected. In the case in which there are many dinner participants having conflicting preferences, there may even be a vote taken to make the final restaurant selection. Although everyday human deliberation dialogues are typically not as formal or as structured as is our framework, we believe they typically incorporate some or all of the ideal stages and constructive locutions we have identified.

What of more important human deliberation dialogues, such as those to decide great matters of state or of public policy? Although perhaps a majority of such decisions involve deliberation dialogues, we have found few examples giving full accounts or transcripts of the dialogues themselves. Typical studies of governmental decision making, such as Ref. 39, an account of the decision-making processes in seven public policy domains in post-independence Zimbabwe, reconstruct the major options considered and the arguments for and against them, but not in a sufficiently detailed manner to reveal the structure of the deliberation dialogues used to reach decisions.
However, we have found two examples of human deliberation dialogues in public policy domains from which we may infer the structure of these dialogues, with the aim of determining whether our framework requires revision.

The first example concerns the discussions within the leadership of the Chinese Communist Party (CCP) at the time of the pro-democracy student demonstrations in the Northern spring and summer of 1989. Here the deliberation dialogues concerned what to do, if anything, about the demonstrations. In the end, the CCP leadership decided to impose martial law and order soldiers from the Chinese People's Liberation Army to remove the demonstrators forcibly, an action which led to killings of demonstrators in Tiananmen Square in Beijing and elsewhere in China in June 1989. Recently, documents purporting to be the minutes of some of the relevant CCP meetings have been smuggled out of China and published. Although their authenticity has not been verified, three eminent Western scholars of Chinese politics found in them nothing to indicate that they were not genuine.

However, as instances of deliberation dialogues, these records are not very informative. The relative political power of the participants appears to have greatly influenced what they say to one another, and there is little substantive discussion of the consequences of alternative courses of action or their relative advantages and disadvantages. For such a major decision, there is (at least in these documents) remarkably little debate or substantive analysis. For example, once Deng Xiaoping, the most powerful participant in the discussions, had decided on martial law, all but two of the other participants, the brave Zhao Ziyang and Hu Qili, also supported it. The nature of this support appears to have mostly been political point scoring and scapegoating, primarily directed against Zhao; in reading these transcripts, one has the impression that the speakers expressing such views were articulating positions they already knew Deng to support. Moreover, these dialogues do not provide an example of retroflexive argumentation, because the one proposal considered, imposing martial law, is not modified in the light of the few objections raised to it. Because our framework is intended to be a general one, we have not explicitly modeled any power relationships between the participants. This would be possible, and has, indeed, been done in other work on modeling coordination and negotiation in AI, for example, in Ref. 41.

The participants in a second example of human deliberation dialogues in public policy domains were more equal than the CCP leadership appears to have been during the Tiananmen crisis. This example involved the discussions in the British War Cabinet in May 1940, when, following the appointment of Winston Churchill as Prime Minister, the members of the War Cabinet discussed various proposals regarding the conduct of the war with Germany. One of the proposals considered was to seek to negotiate a peace agreement with Germany and thus end the conflict quickly. Some of the participants, notably Churchill, had previously been strongly opposed to this option, but (according to the reconstruction by Lukacs) Churchill felt his political support at this time within the Parliamentary Conservative Party and within the Cabinet was not strong.
He therefore (according to Lukacs) pretended to entertain the proposal seriously, so as to strengthen his support with key ministers and backbenchers and so as not to provide his enemies with political ammunition against him at this time.

Feints and tactical moves such as these, although common in political deliberations, cannot easily be modeled computationally. Our framework, for instance, does not differentiate between sincere and insincere expressions of beliefs or preferences in a dialogue. Perhaps no computational framework can ever deal with this issue adequately, because any semantic requirement could always be simulated insincerely by a sufficiently clever agent. In other words, it is hard to see how a framework could represent dialogues in which statements are made to create the impression that the speaker supports a position he really does not, or to provoke other participants to reveal their true positions prematurely, so that these may be countered or rebutted, or to jockey for influence with third parties, both present and absent. All of these features are to be found in human deliberation dialogues, particularly when important public policy decisions are to be made. Even Singh's notion of a social semantics—a commitment store involving a public expression of beliefs and intentions by each participant at the outset of a dialogue—will only enable statements in the subsequent dialogue to be verified for consistency with the expressed beliefs and intentions, not the degree of sincerity with which these beliefs and intentions are held.

In summary, this brief exploration of human deliberation dialogues has not led us to revise our framework. As mentioned above, each of the various sentence types, locutions, and components found in our framework can be found in at least some human deliberation dialogues, and so our framework does not contain extraneous elements. On the other hand, although we have identified a class of dialogue moves that are not accommodated in our framework, that of feints and other insincere statements uttered for tactical reasons, we do not believe that these can be readily accommodated in any computational model.

6.2. Deliberation Process

A second approach to assessment of our framework is to measure it against normative principles for deliberation. We know of only three such sets of principles.ᵐ The first set is criteria for public decision processes in environmental matters, identified by Webler et al. These principles were derived from a statistical multivariate factor analysis of the interview responses of participants in recent environmental public consultation exercises in the United States. The five resulting principles are pitched at a very abstract level; for example, the second principle is that the process should promote a search for common values. Although certainly useful for designers of public policy decision processes, the abstraction of these principles makes them unsuitable for assessment of our framework.

ᵐWe note in passing that evaluation of a process for dialectical argumentation against formal criteria may fail to capture informal and pragmatic features associated with its usage. Because our protocol is intended for use by formally specified computational entities, this is not of concern here.
Alexy’s Rules for Discourse Ethics The second set of normative principles are Robert Alexy’s rules for discourse ethics.4’ These were designed as principles for rational discussion over ethical norms between free and consenting participants, building on Jiirgen Habermas’ philosophy of discourse ethics.4* Habermas sought to understand how rational, free people could engage in reasoned discussion and reach agreement over moral and ethical questions, and Alexy articulated a set of rules for such discussions.” We list the rules here, using Alexy’s categorization, naming, and numbering (apart from an initial A for each rule); for simplicity we use only the masculine gender. Al. Basic Rules A1.1 No speaker may contradict himself. A1.2 Each speaker may only assert what he himself believes. A1.3 Each speaker who applies a predicate F to an object a must also be prepared to apply F to any other object that is similar to a in all respects. A1.4 Different speakers may not use the same expression with different meanings. ™We note in passing that evaluation of a process for dialectical argumentation against formal criteria may fail to capture informal and pragmatic features associated with its usage.*° Because our protocol is intended for use by formally specified computational entities, this is not of concern here. International Journal of Intelligent Systems DOI 10.1002/int EIGHTFOLD WAY OF DELIBERATION DIALOGUE 113 A2. Rules of Reason A2 (General Rule of Justification): Every speaker must justify what he asserts upon request, unless he can provide grounds that justify avoid- ing giving a justification. A2.1 Anyone who can speak may take part in discourse. A2.2 (a) Anyone may render any assertion problematic. A2.3 (b) Anyone may introduce any assertion into the discourse. A2.4 (c) Anyone may express his opinions, wishes, and needs. A2.5 No speaker may be prevented by constraint within or outside the dis- course from making use of his rights established in 2.1 and 2.2. A3. Rules of the Burden of Argumentation A3.1 Whoever wishes to treat a person A differently from a person B is obliged to justify this. A3.2 Whoever attacks a statement or norm that is not the object of discussion must provide a reason for doing so. A3.3 Whoever has put forward an argument is only committed to further argu- ments in the case of a counterargument. A3.4 Whoever introduces an assertion or a statement concerning his opin- ions, wishes, or needs into the discourse, which as argument is not related to a previous statement, has to justify upon request why he has intro- duced this assertion or this statement. A4. Forms of Argument Under this heading, Alexy proposes six normative models for the structural form of arguments concerning ethical values and norms, forms that depend on the reasons advanced for such values and the perceived consequences of adopting them. We do not present or discuss these here, as they are specific to arguments over ethical values. AS. Rules of Justification A5.1.1 Everyone must be able to accept the consequences of the rule— presupposed in his normative statements—regarding the satisfaction of the interests of each individual person even for the hypothetical case in which he finds himself in the situation of this person. A5.1.2 The consequences of every rule for the satisfaction of the interests of each and every individual must be capable of being accepted by all. A5.1.3 Every rule must be openly and universally teachable. 
A5.2.1 The moral rules that form the basis of the moral conceptions of the speakers must be able to withstand scrutiny in a critical, historical genesis. A moral rule does not withstand such a scrutiny (a) if it was indeed originally justifiable rationally but in the meantime has lost its justification, or (b) if it was already originally not justifiable rationally and if no sufficient new reasons for it can be found.
A5.2.2 The moral rules that form the basis of the moral conceptions of the speakers must be able to withstand the scrutiny of their individual history of emergence. A moral rule does not withstand such a scrutiny if it is only accepted on the basis of conditions of socialization that are not justifiable.
A5.3 The factually given limits of realizability are to be observed.

A6. Rules of Transition
A6.1 It is possible at all times for any speaker to switch to a theoretical (empirical) discourse.
A6.2 It is possible at all times for any speaker to move to a linguistic-analytical discourse.
A6.3 It is possible at all times for any speaker to move to a discourse on discourse theory.

Habermas' theory of discourse ethics has subsequently been applied to legal and political philosophy49,50 and to a philosophical assessment of electronic democracy.51 Despite these examples of wider application, however, some of Alexy's rules appear very specific to ethical discussions and not applicable to generic deliberation dialogues. For instance, Rule A4 (Forms of Argument) consists of six normative models for the structural form of arguments concerning ethical values and norms. Similarly specific to discourse ethics are Rules A2.1, A3.1, A5, and A6.3. The other rules have applicability to wider deliberation dialogues, and, accordingly, we can assess our framework against them.

We consider each rule in turn. Rule A1.1 is not satisfied: Participants using DDF may contradict themselves, as seen by examining the preconditions for the locutions given in the Appendix. Rule A1.2 is not satisfied: Our framework is defined purely in terms of observable linguistic behavior, and has no requirements that participants are sincere in their utterances. Moreover, because our framework does not require consistency of utterances, either from the one speaker or between multiple speakers, Rules A1.3 and A1.4 are not satisfied (respectively). It would be possible to satisfy Rule A1.4 through appropriate regimentation of the formal language used to represent the subject matter of deliberation dialogues. Rule A2 (General Rule of Justification) is satisfied, via the ask_justify(.) locution. The three parts of Rule A2.2 are satisfied, by means of the ask_justify(.), assert(.), and prefer(.) locutions, respectively. Rule A2.3 is satisfied within the dialogue by means of the preconditions of the locutions given in the Appendix. The DDF framework makes no assumptions concerning any relationship between the parties external to the dialogue, and so the framework cannot be assessed with regard to constraints on speakers imposed outside the dialogue.

Rule A3.1 is specific to ethical discussions. Rule A3.2 is not satisfied, or rather, is satisfied trivially, because participants may only attack a statement via the ask_justify(.) locution, which has as a precondition the requirement that a prior assert(.)
locution has been uttered concerning the same statement. Rules A3.3 and A3.4 are both satisfied by the definition of the ask_justify(.) locution. Rules A4 and A5 are specific to ethical discussions. Rules A6.1, A6.2, and A6.3 are satisfied: Although the types of sentences, the locutions, and the combination rules in our framework are specific to deliberation dialogues, the framework permits shifts to functionally embedded dialogues of different types, such as inquiry dialogues or persuasion dialogues. These may concern theoretical, empirical, linguistic-analytical, or discourse-theoretic matters.

Summarizing this assessment, we see that the Deliberation Dialogue Framework presented in Sections 3 and 4 satisfies Alexy's rules for discourse ethics to the following extent: Rules A2, A2.2, A3.3, A3.4, A6.1, A6.2, and A6.3 are fully satisfied; Rule A2.3 is partly satisfied; Rules A1.1, A1.2, A1.3, A1.4, and A3.2 are not satisfied. In addition, Rules A2.1, A3.1, A4, and A5 are specific to ethical discussions and so are not applicable here. In assessing our framework against Alexy's normative rules, we note that three of the rules that are not satisfied, A1.2, A1.3, and A1.4, concern the relationship between what is uttered in the dialogue and what the speaker truly believes. As noted in the previous subsection, our framework does not distinguish between sincere and insincere utterances and makes no requirements that speakers express only their true beliefs or preferences.

6.2.2. Hitchcock's Principles for Rational Mutual Inquiry

A third set of normative principles are the Principles of Rational Mutual Inquiry developed by one of us more than a decade ago.52 These were intended for human dialogues whose primary purpose was defined as being "to secure rational agreement by the participants on the answer to a specified question. A subsidiary purpose, if they do not come to agree on an answer, is to secure agreement on why they have not succeeded in answering their question." (Ref. 52, p. 237). These human dialogues are called mutual inquiries; in terms of the typology of Walton and Krabbe,1 this definition was formulated with inquiry dialogues primarily in mind, but it also covers deliberation dialogues. It is therefore appropriate to consider these as principles against which our deliberation dialogue protocol may be measured. We begin by summarizing the Principles, numbered H1 through H18; the linguistic labels are those of the original.

H1 Externalization: The rules should be formulated in terms of verifiable linguistic behavior.
H2 Dialectification: The content and methods of dialogue should be subject to the agreement of participants, without any prior imposition.
H3 Mutuality: No statement becomes a commitment of a participant unless he or she specifically accepts it.
H4 Turn Taking: At most one person speaks at a time.
H5 Orderliness: One issue is raised at a time and is dealt with before proceeding to others.
H6 Staging: An inquiry dialogue should proceed by a series of stages, from initial clarification of the question at issue and of the methods of resolving it, through data gathering and interpretation, to formation of arguments.
H7 Logical Pluralism: Arguments should permit both deductive and nondeductive forms of inference.
H8 Rule Consistency: There should be no situation where the rules prohibit all acts, including the null act.
H9 Semantic Openness: The rules should not force any participant to accept any statement, even when these follow by deduction from previous statements.
H10 Realism: The rules must make agreement between participants a realistic possibility.
H11 Retractability: Participants must be free at all times to supplement, change, or withdraw previous tentative commitments. (This Principle may be understood as a requirement that the protocol enables self-transformation, in the sense of Section 4.4.)
H12 Role Reversal: The rules should permit the responsibility for initiating suggestions to shift between participants.
H13 Experiential Appeal: The rules should permit direct mutual appeal to experience.
H14 Openness: There should be no restrictions on the content of contributions.
H15 Tentativeness: Participants should be free to make tentative suggestions as well as assertions.
H16 Tracking: The rules should make it possible to determine at any time the cumulative commitments, rights, and obligations of each participant.
H17 Termination: There should be rules for the orderly termination of the dialogue. Hitchcock proposes that an inquiry terminate as soon as (a) a participant declares an intention to abandon it, (b) in two successive turns neither participant has a suggestion for consideration, or (c) there is agreement on the conclusion of the discussion.
H18 Allocation of Burden of Proof: The burden of proof remains with the participant who makes a suggestion, even after contestation by another participant.

As with Alexy's rules, we can assess the DDF protocol against Hitchcock's Principles, by considering each principle in turn. Principle H1 (Externalization) is satisfied by our protocol, as can be seen by an examination of the pre- and postconditions of the locutions listed in the Appendix and the constraints on the order of dialogue stages given in Section 3. (In contrast, the definition of the syntax of the Agent Communications Language (ACL) of the Foundation for Intelligent Physical Agents (FIPA), an emerging standard for agent communications, requires agents to sincerely believe statements they make in dialogues,15 thus violating this principle.) Principle H2 (Dialectification) is only partly satisfied, because we do not permit participants to change the protocol framework itself. Principle H3 (Mutuality) is satisfied, as shown by the commitment store conditions for the locution assert(.). Principle H4 (Turn Taking) will be satisfied in any computational application on a sequential processor. Principle H5 (Orderliness) is satisfied to the extent that each dialogue under the protocol concerns one governing question. However, there is nothing to stop issues related to this question being considered simultaneously in a manner contrary to this principle. The next principle, H6 (Staging), is satisfied by the phased framework presented in Section 3. Principle H7 (Logical Pluralism) is satisfied, because there are no restrictions placed on the content of the justifications participants may advance for their statements. However, embedded dialogues may restrict inferences to specific forms, such as embedded persuasion dialogue protocols that use deductive inference. Principle H8 (Rule Consistency) is satisfied, as is shown by an examination of the postconditions of each locution given in the Appendix. Principle H9 (Semantic Openness) is satisfied, because no rules force a participant to accept any statement. Principle H10 (Realism) is satisfied, because the protocol readily permits participants to express their agreement to statements uttered in dialogues under it.
Principle H11 (Retractability) is satisfied up to the execution of the Confirm stage, by means of the retract(.) locution. Utterances of acceptances in this stage cannot be subsequently retracted. Principle H12 (Role Reversal) is satisfied, because any participant may initiate suggestions in the dialogue. Principle H13 (Experiential Appeal) is satisfied, because participants may support their utterances in any way they wish. Principle H14 (Openness) is partly satisfied, because the contents of utterances are typed according to the types of sentences given in Section 3. However, apart from this typing, there are no restrictions on the content of contributions. Principle H15 (Tentativeness) is satisfied because the propose(.) locution permits participants to make tentative suggestions. Principle H16 (Tracking) is satisfied by means of the commitment stores established for each participant. Principle H17 (Termination) is satisfied by the rules governing the Confirm stage and the rules governing withdrawal from the dialogue. The protocol rules allow participants to withdraw at any time and without giving reasons. Principle H18 (Allocation of Burden of Proof) is satisfied by the definition of the ask_justify(.) locution, which permits a participant to contest an earlier assertion by another participant, and requires that other participant to provide a justification for the earlier assertion.

In summary, the Deliberation Dialogue Framework presented in this article satisfies all but 4 of Hitchcock's 18 Principles of Rational Mutual Inquiry; Principles H2 (Dialectification), H5 (Orderliness), H11 (Retractability), and H14 (Openness) are only partly satisfied. It is worth noting that there is some inconsistency within Hitchcock's collection of Principles. Principles H5 (Orderliness), H6 (Staging), and H17 (Termination) may conflict with Principle H2 (Dialectification), because the latter gives the participants complete freedom, including the freedom to change the rules of the protocol. Essentially, this inconsistency arises because of the need to meet two desirable, but conflicting, objectives in the design of a protocol: freedom for the participants and orderliness of the resulting dialogues. By the very act of defining a protocol for dialogues, we are constraining the freedom of the participants in some way and are imposing some structure on the interactions between them. Because we seek to define a framework within which deliberation dialogues between computational entities can occur, our task, as designers, is to strike an appropriate balance between these conflicting objectives. (Krabbe notes a similar conflict of design objectives in a discussion of retraction rules in dialogue games.53 As an example, the dialogue game protocols of Amgoud and Parsons, particularly those in Ref. 55, are at the orderliness end of the freedom-orderliness spectrum.) Our framework, although not maximally dialectical, is dialectical to a considerable extent, for instance, in leaving the participants free to agree on what factors to accept as relevant to the governing question or to initiate embedded dialogues on different questions.
The framework could be made more dialectical by providing for the opportunity to convene a "loya jirga" or "constituent assembly" to change the framework rules; such an assembly could, for example, change the requirement of unanimity of decision making (in the definition of the Confirm stage given in Section 3) to a requirement that, say, only a two-thirds majority of acceptances is necessary for a decision to be adopted by the group.

6.3. Deliberation Outcomes

The previous subsection considered our Deliberation Dialogue Framework protocol from the perspective of the processes it implemented. We could also assess a protocol in terms of the outcomes achieved, if any, by dialogues conducted under the protocol. For example, a protocol to support an inquiry dialogue could be assessed on whether or not dialogues conducted according to the protocol succeed in finding the answer to the question motivating the dialogue. In other words, is the outcome of an inquiry dialogue the true answer to the governing question? Because some questions may be undecidable or may require considerable time or significant resources for answers to be found, a more refined measure of the protocol may be whether it leads, on average, to the truth, or whether it would do so, given infinite time and unlimited processing resources. Two of us adopted this approach to study the formal properties of a dialogue game protocol we proposed for scientific inquiry dialogues, showing that, under some conditions, the probability that a dialogue under the protocol did not converge on the truth could be bounded away from 1 (Ref. 4).

In contrast to inquiry dialogues, deliberation dialogues have as their stated objective agreement on some course of action, rather than a search for truth. In this, they are similar to negotiation dialogues, where the stated objective is agreement on an action of a particular type, namely a division of a scarce resource. We mention negotiation dialogues here because this objective is shared by the auction and negotiation protocols studied in the branch of economics known as mechanism theory, and considerable attention has been devoted to assessment of the outcomes of these mechanisms (e.g., Refs. 16, 17, and 56). Among the usual criteria proposed are:

Maximum social welfare: Intuitively, a protocol maximizes social welfare if it ensures that any outcome maximizes the sum of the utilities of the negotiation participants. If the utility of an outcome for an agent were simply defined in terms of the amount of money that the agent received in the outcome, then a protocol that maximized social welfare would maximize the total amount of money "paid out."

Pareto efficiency: An outcome is Pareto-optimal if no other outcome makes some participant better off without leaving at least one participant worse off, as measured by the utilities of the outcomes. A mechanism that achieves Pareto-optimal outcomes is said to be Pareto-efficient.

Many auction and economic negotiation mechanisms have been studied and shown to have these properties.
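To make these two criteria concrete, the following minimal sketch (ours, purely illustrative and not part of the DDF protocol; the participant names, candidate outcomes, and utility figures are hypothetical) checks, for a finite set of candidate outcomes with known participant utilities, whether a given outcome maximizes the sum of utilities and whether it is Pareto-optimal.

from typing import Dict, List

# participant -> (outcome -> utility); all names and figures are hypothetical
Utilities = Dict[str, Dict[str, float]]

def social_welfare(utils: Utilities, outcome: str) -> float:
    # Sum of all participants' utilities for the outcome.
    return sum(u[outcome] for u in utils.values())

def maximizes_social_welfare(utils: Utilities, outcome: str, outcomes: List[str]) -> bool:
    # True if no alternative outcome yields a strictly larger utility sum.
    return all(social_welfare(utils, outcome) >= social_welfare(utils, o) for o in outcomes)

def is_pareto_optimal(utils: Utilities, outcome: str, outcomes: List[str]) -> bool:
    # True if no alternative makes some participant better off
    # while leaving no participant worse off.
    for o in outcomes:
        if o == outcome:
            continue
        nobody_worse = all(u[o] >= u[outcome] for u in utils.values())
        somebody_better = any(u[o] > u[outcome] for u in utils.values())
        if nobody_worse and somebody_better:
            return False
    return True

if __name__ == "__main__":
    utils = {"P1": {"a": 2.0, "b": 3.0, "c": 1.0},
             "P2": {"a": 2.0, "b": 1.0, "c": 1.0}}
    options = ["a", "b", "c"]
    for o in options:
        print(o, maximizes_social_welfare(utils, o, options), is_pareto_optimal(utils, o, options))

Under this simple valuation, outcomes a and b both maximize the utility sum and are Pareto-optimal, while c is neither; an outcome-based assessment of a protocol would ask whether the outcomes it licenses fall in the former class.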
We know of only one study of negotiation dialogues that considers properties such as these, recent work of two of us with Michael Woold- ridge (McBurney et al.°’). This work demonstrated, under assumptions concern- ing the absence of time constraints and of coercion on participants, that the outcomes of negotiation dialogues between self-interested and nonmalicious par- ticipants conducted according to protocols with certain properties are Pareto- optimal (Ref. 57, Proposition 1). Adopting a similar approach to assess protocols for deliberation dialogues would mean considering whether dialogues conducted according to the protocol succeed in agreeing on a course of action and considering the quality of this agreed course. But how to judge the quality of a course of action? We are not given ante- cedently a set of evaluative criteria (goals, constraints, considerations, etc.) in terms of which one could theoretically determine, given all the relevant factual circum- stances, what is the “best” answer to the governing question. Indeed, the protocol does not require all participants to agree at any point in the discussion on the evaluative criteria to be used, and so conflicting evaluative criteria may be supported throughout a dialogue. Moreover, participants may even undertake dia- logues on different governing questions, because the rules of our DDF protocol permit the initiation of embedded deliberation dialogues on new questions within a given deliberation dialogue.4 For these reasons, it seems that the best one might do is to establish condi- tional results about outcomes of dialogues using the protocol. For example, such a result might be that, given agreement by the participants to a set of evaluative 4So, although our protocol does not permit revision of the governing question within a dialogue, a similar outcome may be achieved by opening, within this first dialogue, an embed- ded dialogue on a new question and reaching agreement in the second dialogue prior to return- ing and ending immediately the first dialogue. International Journal of Intelligent Systems DOI 10.1002/int 120 MCBURNEY, HITCHCOCK, AND PARSONS criteria and a set of factual sentences, then, if the participants use the protocol, they will reach agreement on an answer to the governing question that is optimal, provided those agreed evaluative criteria and factual sentences are valid and exhaus- tive of matters relevant to the governing question, and provided the participants undertake the dialogue free of time and processing constraints, and free of coer- cion or duress. However, to prove this formally we believe would require a “seman- tic” theory of actions akin to the standard account of sentential truth initiated by Wittgenstein*® and Tarski? Utility theory in economics could be viewed as a semantic theory of actions, but this has restrictive assumptions that limit its appli- cability.©°°! Developing a general theory would be a much larger undertaking than could be accommodated in this article. We therefore leave the assessment of our protocol on the basis of the outcomes of dialogues conducted under it to another time and place. 7. DISCUSSION 7.1. Contribution This article has presented a dialogue-game protocol, called the Deliberation Dialogue Framework (DDF) protocol, for deliberation dialogues between compu- tational entities, with the syntax being fully specified. 
The protocol is intended for use in both closed and open multiagent systems, where open systems are those permitting participation by agents not built by the design team that created the system itself. Accordingly, we have only defined the interaction protocol and not the architecture of the agents that may use it; any agent may participate (subject to the rules of the system owner) in a dialogue under the DDF protocol, provided only that they know and follow the protocol. In addition, in the terminology of computer programming theory (e.g., Refs. 62 and 63), the protocol has been given an axiomatic semantics. The DDF protocol was based on a model for deliberative reasoning taken from argumentation theory, namely Harald Wohlrapp’s theory of Retroflexive Argumentation.!® Moreover, we showed that the protocol conforms to the majority of a set of normative principles proposed for rational mutual inqui- ries between humans. Further work is needed to assess the quality of outcomes achieved, if any, by dialogues conducted according to the DDF protocol. In enabling participants to contribute to a joint discussion that may proceed iteratively and to view each other’s commitment stores, our model has some similarities with “black- board” architectures for intelligent systems in computer science. The designer of any interaction protocol needs to define locutions and com- bination rules so as to strike a balance between generality and specificity of application. If the locutions and rules are too tightly defined, the protocol will not be widely applicable. Thus, for example, HTTP, the Hyper-Text Transfer Protocol used for internet exchanges, is suitable for requesting and sending infor- mation, but not for much else; its impoverished expressiveness makes it unsuit- able for argument about any information requested or transmitted, and its statelessness makes it inappropriate as it stands for requests or promises of action International Journal of Intelligent Systems DOI 10.1002/int EIGHTFOLD WAY OF DELIBERATION DIALOGUE 121 commitments." On the other hand, if the locutions and rules of the protocol are too loosely drawn, then the protocol will lose features specific to a particular domain of application. Arguably, the Agent Communications Language ACL of FIPA suffers from this defect.>’ Because there are no constraints on what may be said by a participant at any time using FIPA ACL, agent protocol designers have had to resort to additional methods to constrain utterances and to prevent cacophonous interactions. For example, designers have defined layers on top of the basic proto- col for specific applications, as in the FIPA Dutch and English auction proto- cols,°>-® or have defined predetermined dialogue segments, called conversation policies, which can be invoked modularly, as in Refs. 67 and 68. In proposing a protocol for deliberation dialogues, we face this same chal- lenge. If we place too many constraints on the utterances possible using our framework, we will lose generality of application: There will be some (possibly many) deliberation dialogues that cannot be undertaken using our framework. On the other hand, if we have too few constraints, then our framework would apply to many interactions that we would not recognize as deliberations. Our response to this challenge has been to define specific types of sentences (actions, goals, constraints, etc.), specific stages of dialogue (Open, Inform, Propose, etc.), and specific locutions (propose, assert, prefer, etc.) 
that we believe appropriate to deliberation dialogues. But we have not defined many rules constraining the use of these sentences, locutions, and stages. For example, it would be possible to constrain assertions of action sentences by a participant to be consistent with prior assertions of constraints and/or preferences made by that participant, or even to be consistent with prior assertions of constraints or preferences made by other participants. The existence of such rules would limit the domain of appli- cability of the framework, because there will always be dialogues that would be recognizable as deliberations and that reach agreement, and yet do not comply with rules such as these. Moreover, making such rules part of the protocol defi- nition also reduces the freedom of the participants to decide themselves how to conduct a particular deliberation dialogue, and thereby reduces the extent of compliance of the framework to Hitchcock’s Principle H2 (Dialectification). For these two reasons, we have not included such rules as part of the definition of the framework. There is nothing, however, to prevent the DDF framework being instantiated with such rules if designers or participants so desire it. Similarly, although we have allowed for embedded persuasion dialogues within deliberations, we have not articulated a model of persuasion dialogue to accom- pany the deliberation framework. Participants in a specific deliberation dialogue on a specific occasion may favor a particular model for the conduct of a persua- sion dialogue; several such models have been proposed, for example, in Refs. 1, 5, 28, 69, and 70. On a different topic or with different participants or at a different time in the same deliberation dialogue, a different model of persuasion may be "HTTP does not track the history of requests and responses for information made using the protocol and so cannot monitor the state of a specific request (e.g., not-yet-requested, requested- but-not-yet-fulfilled, requested-and-fulfilled-previously, requested-again, etc.). Cookies were developed to overcome HTTP’s lack of state. International Journal of Intelligent Systems DOI 10.1002/int 122 MCBURNEY, HITCHCOCK, AND PARSONS favored. Our framework is sufficiently flexible to permit this diversity. Similarly, for the same reason, we have not specified the relationships between commit- ments incurred in embedded dialogues and those in the main dialogue, nor the relationships between earlier and later commitments made in the one dialogue. In previous work,’ two of us presented a formalism that enables such different rela- tionships between commitments in dialogue to be expressed and which permits participants to an interaction to agree on such relationships prior to commence- ment of a dialogue. Adding such expressiveness and functionality to the delibera- tion dialogue framework presented here would be straightforward, if required. Including it as part of the DDF definition, however, would limit the applicability of the framework. Does our framework, then, strike an appropriate balance between generality and specificity? Our grounding of the framework in an argumentation-theoretic account of deliberative decision making means that the framework’s sentence types, dialogue stages, and locutions are specific to deliberation dialogues. We have there- fore constrained the framework sufficiently to preclude it being applied to just any type of dialogue. 
Conversely, its flexibility ensures that many different types of deliberation dialogue may be undertaken within it. The framework broadly satis- fies, for instance, the principles proposed by Hitchcock for rational mutual inquiry and many of the principles proposed by Alexy for discourses over ethical ques- tions, as we have shown above. The comparison with political deliberations, pre- sented in Section 6.1, however, reveals the existence of many dialogues, ostensibly deliberations, in which participants secretly pursue other objectives. Although pos- sibly expressible in our framework, such dialogues cannot necessarily be distin- guished from sincere deliberations; as we have argued, however, this feature may be true of all computational frameworks for interaction. 7.2. Related Work Considerable research effort in AI over the last 30 years has concerned the task of designing robots so that, when given a specific goal, such as moving into the next room, they may determine a plan for achievement of this goal. Because this research, known as Al Planning, concerns the determination of an action or course of actions, it would seem amenable to the application of deliberation dialogues. However, the only research program known to us which combines AI Planning with models of dialogues is the TRAINS project,’! which constructed an intelligent computer assistant for a human rail-freight scheduler. For this project, actual human—human conversations in the specific domain were first recorded and analyzed as a basis for the design of machine—human interactions. Although the two participants in the TRAINS system, machine and human, discuss a course of action and thus ostensibly engage in a deliberation dialogue, the design of the system assumes that the machine and the human user each begin the dialogue with a privately developed proposal for action, which they then present to one another. Each tries to persuade the other to adopt its proposal. Thus, in the terminology of Walton and Krabbe,! their conversation is a persuasion dialogue, albeit two-way, rather than a true deliberation. In addition, the TRAINS system design assumes International Journal of Intelligent Systems DOI 10.1002/int EIGHTFOLD WAY OF DELIBERATION DIALOGUE 123 that the human user’s goal is paramount, and that the machine participates in the dialogue to assist the human to find an effective plan for achievement of this goal. Thus, the model of dialogue assumes a specific relationship of inequality between the two participants. In contrast, the model of deliberation dialogue we have pre- sented here is not limited in this way. Other work in AI has also come close to developing a formal model of deliberation dialogues without yet doing so. The dialogue-game protocols pro- posed for developing collective intention by Dignum and his colleagues??? assume, like the research in AI Planning, that the overall goal of the participating agents is predetermined. Moreover, these authors assume that one agent, an Jni- tiator, undertakes a persuasion dialogue to convince the others to adopt some joint intention it has adopted. Although the task is a deliberative one, the dia- logue model proposed for it is not that of a deliberation. The same comment is true of other recent research in multiagent systems. The agent interactions in Parsons et al.,’? for example, are deliberations mixed with persuasions, negotia- tions, and information-seeking dialogues, as noted in Ref. 5. 
However, they are modeled as persuasions, with one agent uttering an argument that the recipients try to counter. Similarly, the SharedPlans framework of Grosz and Kraus,74 for collaborative planning between agents, assumes that agents begin their interaction with a partial plan; this framework does not fully specify the mechanisms by which this partial plan is transformed into a full plan. Hunsberger and Zancanaro,19 seeking to remedy this, have articulated mechanisms to enable SharedPlan participants to vote over contested elements of a possible plan. However, these mechanisms do not permit the expression of arguments for and against proposals, and so we would not call them models of deliberation dialogues. In the language of argumentation theory (e.g., Ref. 35), the inability to express reasons for statements renders these mechanisms nonrational. Another approach to representing deliberation interactions between autonomous agents is the work of Panzarasa et al.,41 who propose a modal logic formalism to represent the mental states of, and interactions between, the participating agents. This framework, however, assumes (Ref. 41, Section 8) that at least one participant begins the discussion with a suggested proposal for action; the resulting agent interaction to decide a course of action, although termed a negotiation by the authors, is therefore modeled, like the TRAINS system, as a persuasion and not a deliberation. Moreover, this model of persuasion, which the authors call social mental shaping, is one based on the exercise of social relationships between agents, such as that pertaining between a manager and her subordinates in a company. (Such relationships are readily captured in preference-based argumentation systems, such as Ref. 54.) Although such a model has wide applicability, it is not as general as the one we have presented above, which assumes nothing about the relationships between the participants; nor could social mental shaping be called an entirely rational model for deliberation, because essentially the only reason an agent can provide to another to adopt a proposed course of action is, "Because I said so!" Thus, social mental shaping may be seen to conflict with Alexy's Rule A2.3, which prohibits constraints on the rights of participants.

Within the area of computational dialectics specifically, several computer systems have been designed to support human deliberation dialogues. The Zeno system, for example, of Gordon and Karacapilidis75 and Karacapilidis et al.76 was designed to support community participation in urban planning decisions. The model of argumentation used in this work was the IBIS system of Rittel and Webber,77 which provides a framework for connecting topics, issues, and attributes in a multiattribute decision domain. Later systems inspired by Zeno, such as the Hermes system for computer-supported collaborative human decision making78 and the Demos system to support human debate over issues of public policy,79 also use the IBIS framework. This framework connects utterances in a dialogue on the basis of their meanings (with respect to some decision problem), but does not specify or constrain the dialectical obligations of the participants.
If a statement uttered by a participant in the dialogue challenges a previous statement by another participant, the IBIS framework provides a mechanism to represent the relationship between the two, but the framework has no rules or mechanisms for requiring such a challenge to be made, or for defending the earlier statement against such a challenge, or for resolving multiple conflicting statements. (In the three systems mentioned, this task is left to a human mediator, possibly assisted by computer summarization.) The IBIS framework has no rules of the form described in our dialogue-game protocol that require or preclude particular types of responses when statements are uttered. (This is not to say that our protocol constrains every locution, only that it constrains some.) The same comment is also true of various systems using spatial representations of statements and their relationships in computer-supported human dialogues, as in Refs. 80 and 81. The resulting systems are thus capable of supporting human dialogues that are more freewheeling than the agent dialogues our model supports, but, because of the absence of rules specifying dialectical obligations, we do not believe that any of these systems incorporates a formal model of deliberation dialogues. This is true even though Hermes, for example, allows participants to discover, by clicking on a discourse item, what actions they are permitted.

Finally, Karacapilidis and Moraitis82 have recently proposed a framework for automated software agent dialogues in e-commerce domains. This framework is more general than ours, in that it enables other types of dialogue, for example, negotiations and persuasions, to be conducted by the participating agents, and allows these to be embedded within one another. For deliberation dialogues, however, the framework we present here is more expressive than their framework, as the authors indicate in Ref. 83.

7.3. Future Research

We are exploring a number of extensions of this work. First, we seek to model and automate more general classes of deliberation dialogue. For example, many human deliberations exhibit strong disagreement between the participants over the relevance and importance of different perspectives. Our dialogue-game model may be extended to allow for similar arguments between agents over these. Second, we plan to enable discussion over confirmation procedures, so that, for example, majority or plurality voting may be used instead of the unanimity now required in the Confirm stage. If a group of agents were to engage regularly in deliberation dialogues using the same decision procedures, these procedural discussions would not need to be undertaken in each dialogue but could be assumed constant. Systems for agent interactions with such predetermined rules of encounter have been called Institutions in the AI literature (e.g., Ref. 84). Third, our explicit typing of sentences (into facts, goals, constraints, etc.) may facilitate the mathematical representation of dialogues under this model by means of the λ-calculus,85 and thus the possible development of a denotational semantics for the protocol using enriched category theory, as has been achieved for monolectical argumentation in Ref. 86.

Acknowledgments

This work was partly funded by the EU IST Programme, through the Sustainable Lifecycles in Information Ecosystems (SLIE) Project (IST-1999-10948), and a Ph.D.
studentship from the British Engineering and Physical Sciences Research Council (EPSRC). An earlier version of this article was presented at the May 2001 Meeting of the Ontario Society for the Study of Argumentation (OSSA), in Wind- sor, Canada, and we thank the audience on that occasion for their comments. We also thank Raphael Bordini, Wiebe van der Hoek, Joris Hulstijn, Henry Prakken, and the anonymous referees for their comments on earlier versions of this article. References 1. Walton DN, Krabbe ECW. Commitment in dialogue: Basic concepts of interpersonal rea- soning. Albany, NY: State University of New York Press; 1995. Gabbay DM, Woods J. Non-cooperation in dialogue logic. Synthese 2001;127:161-186. Hulstijn J. Dialogue models for inquiry and transaction. Ph.D. thesis. Enschede, The Neth- erlands: Universiteit Twente; 2000. 4. McBurney P, Parsons 8. Representing epistemic uncertainty by means of dialectical argu- mentation. Ann Math Artif Intell 2001;32:125-169. 5. Amgoud L, Maudet N, Parsons S. Modelling dialogues using argumentation. In: Durfee E, editor. Proc Fourth Int Conf on Multi-Agent Systems (ICMAS-2000), Boston, MA. Pis- cataway, NJ: IEEE Press; 2000. pp 31-38. 6. Amgoud L, Parsons S, Maudet N. Arguments, dialogue, and negotiation. In: Horn W, edi- tor. Proc 14th European Conf on Artificial Intelligence (ECAI-2000), Berlin, Germany. Amstedam: IOS Press; 2000. pp 338-342. 7. McBurmey P, van Eijk RM, Parsons $, Amgoud L. A dialogue-game protocol for agent purchase negotiations. J Auton Agents Multi Agent Syst 2003;7:235-273. 8. Sadri F, Toni F, Torroni P. Logic agents, dialogues and negotiation: An abductive approach. In: Schroeder M, Stathis K, editors. Proc Symp on Information Agents for E- Commerce, Artificial Intelligence and the Simulation of Behaviour Conference (AISB 2001), York, UK; 2001. 9. McBurney P, Parsons 8. Games that agents play: A formal framework for dialogues between autonomous agents. J Logic Lang Inform 2002;11:315-334. wr International Journal of Intelligent Systems DOI 10.1002/int 20. 21. 22. 23. 24. 25. 26. 27. 28. 29. 30. 31. 32. 33. MCBURNEY, HITCHCOCK, AND PARSONS Reed C. Dialogue frames in agent communications. In: Demazeau Y, editor. Proc Third Int Conf on Multi-Agent Systems (ICMAS-98). Piscataway, NJ: IEEE Press; 1998. pp 246-253. Parsons 8, McBurney P. Argumentation-based dialogues for agent coordination. Group Decis Negot 2003;12:415-439. Grice HP. Logic and conversation. In: Cole P, Morgan JL, editors. Syntax and semantics III: Speech acts. New York: Academic Press; 1975. pp 41-58. Rehg W. The argumentation theorist in deliberative democracy. Controversia 2002;1:18—42. Fiorino DJ. Environmental risk and democratic process: A critical review. Columbia J Envi- ron Law 1989;14:501-547. Foundation for Intelligent Physical Agents (FIPA). Communicative Act Library Specifi- cation. Standard SC00037J; December 3, 2002. Rosenschein JS, Zlotkin G. Rules of encounter: Designing conventions for automated nego- tiation among computers. Cambridge, MA: MIT Press; 1994. Sandholm TW. Distributed rational decision making. In: Weiss G, editor. Multiagent sys- tems: A modern introduction to distributed artificial intelligence. Cambridge, MA: MIT Press; 1999. pp 201-258. Wohlrapp H. A new light on non-deductive argumentation schemes. Argumentation 1998; 12:341-350. Hunsberger L, Zancanaro M. A mechanism for group decision making in collaborative activity. In: Proc 17th Nat Conf on Artificial Intelligence (AAAI 2000). Menlo Park, CA: AAAT Press; 2000. pp 30-35. 
Lakatos I. Proofs and refutations: The logic of mathematical discovery. Cambridge, UK: Cambridge University Press; 1976. Naess A. Communication and argument: Elements of applied semantics. London: Allen and Unwin; 1966. Translation of En del Elementaere Logiske Emner. Universitetsforlaget, Oslo, Norway; 1947. Aristotle. Topics. In: Ross WD, editor. The works of Aristotle. Oxford, UK: Clarendon Press; 1928. Hamblin CL. Fallacies. London: Methuen and Co. Ltd; 1970. MacKenzie JD. Question-begging in non-cumulative systems. J Philos Logic 1979;8: 117-133. Lorenzen P, Lorenz K. Dialogische Logik. Darmstadt, Germany: Wissenschaftliche Buch- gesellschaft; 1978. Finkelstein A, Fuks H. Multi-party specification. In: Proc Fifth Int Workshop on Software Specification and Design, Pittsburgh, PA, 1989. ACM Sigsoft Engineering Notes. Bench-Capon TJM, Dunne PE, Leng PH. Interacting with knowledge-based systems through dialogue games. In: Proc 11th Int Conf on Expert Systems and Applications, Avignon, France; 1991. pp 123-140. Prakken H. On dialogue systems with speech acts, arguments, and counterarguments. In: Ojeda-Aciego M, de Guzman MIP, Brewka G, Pereira LM, editors. Proc Seventh Euro- pean Workshop on Logic in Artificial Intelligence (IELIA-2000). Lecture Notes in Artifi- cial Intelligence 1919. Berlin: Springer; 2000. pp 224-238. Dignum F, Dunin-Keplicz B, Verbrugge R. Creating collective intention through dialogue. Logic J IGPL 2001;9:305-319. Levin JA, Moore JA. Dialogue-games: Metacommunications structures for natural lan- guage interaction. Cogn Sci 1978;1:395—420. Robert HM, Robert SC, Robert HM IIJ, Evans WJ, Honemann DH, Balch TJ. Robert’s rules of order, 10th ed. Cambridge, MA: Perseus; 2000. Colombetti M, Verdicchio M. An analysis of agent speech acts as institutional actions. In: Castelfranchi C, Johnson WL, editors. Proc First Int Joint Conf on Autonomous Agents and Multi-Agent Systems (AAMAS 2002), Bologna, Italy. New York: ACM Press; 2002. pp 1157-1164. Singh MP. An ontology for commitments in multiagent systems: Toward a unification of normative concepts. Artif Intell Law 1999;7:97-113. International Journal of Intelligent Systems DOI 10.1002/int 39. 40. Al. 42. 43. 44, 45. 46. 47. 48. 49. 50. 51. 52. 53. 55. 56. 57. EIGHTFOLD WAY OF DELIBERATION DIALOGUE 127 Singh MP. A conceptual analysis of commitments in multiagent systems. Technical Report 96-09. Raleigh, NC: Department of Computer Science, North Carolina State University; 1996. Johnson R. Manifest rationality: A pragmatic theory of argument. Mahwah, NJ: Lawrence Erlbaum Associates; 2000. Searle J. Rationality in action. Cambridge, MA: MIT Press; 2001. Sen A. Rationality and freedom. Cambridge, MA: Harvard University Press; 2002. Forester J. The deliberative practitioner: Encouraging participatory planning processes. Cambridge, MA: MIT Press; 1999. Herbst J. State politics in Zimbabwe. Perspectives on Southern Africa, vol 45. Berkeley, CA: University of California Press; 1990. Nathan AJ, Link P, editors. The Tiananmen Papers. Compiled by Zhang Liang, with an afterword by O. Schell. London, UK: Little, Brown and Company; 2001. Panzarasa P, Jennings NR, Norman TJ. Formalizing collaborative decision-making and practical reasoning in multi-agent systems. J Logic Comput 2002;12:55-117. Lukacs J. Five days in London: May 1940. New Haven, CT: Yale University Press; 1999. Wooldridge MJ. Semantic issues in the verification of agent communication languages. J Auton Agents Multi Agent Syst 2000;3:9-31. Singh MP. 
A social semantics for agent communications languages. In: Dignum F, Chaib- draa B, Weigand H, editors. Proc Workshop on Agent Communication Languages, Int Joint Conf on Artificial Intelligence (IJCAI-99). Berlin: Springer (in press). Rehg W, McBurney P, Parsons 8. Computer decision-support systems for public argumen- tation: Assessing deliberative legitimacy. AI Soc 2004;19:203-228. Webler T, Tuler 8, Krueger R. What is a good public participation process’? Five perspec- tives from the public. Environ Manag 2001;27:435-450. Alexy R. A theory of practical discourse. In: Benhabib S, Dallmayr F, editors; Frisby D, trans. The communicative ethics controversy: Studies in contemporary German social thought. Cambridge, MA: MIT Press; 1990. pp 151-190. Habermas J. Moral consciousness and communicative action. Lenhardt C, Nicholsen SW, trans. Cambridge, MA: MIT Press; 1991. Habermas J. Between facts and norms: Contributions to a discourse theory of law and democracy. Rehg W, trans. Cambridge, MA: MIT Press; 1996. Habermas J. The inclusion of the other: Studies in political theory. Cronin C, De Greiff P, editors. Cambridge, MA: MIT Press; 1998. Ess C. The political computer: Democracy, CMC, and Habermas. In: Ess C, editor. Philo- sophical perspectives on computer-mediated communication. Albany, NY: State Univer- sity of New York Press; 1996. pp 197-230. Hitchcock D. Some principles of rational mutual inquiry. In: van Eemeren F, Grootendorst R, Blair JA, Willard CA, editors. Proc Second Int Conf on Argumentation. Amsterdam, The Netherlands: International Society for the Study of Argumentation (SICSAT); 1991. pp 236-243. Krabbe ECW. The problem of retraction in critical discussion. Synthese 2001;127:141-159. Amgoud L, Parsons S. Agent dialogues with conflicting preferences. In: Meyer JJ, Tambe M, editors. Pre-Proceedings of the Eighth Int Workshop on Agent Theories, Architectures, and Languages (ATAL 2001), Seattle, WA; 2001. pp 1-14. Parsons S$, Wooldridge M, Amgoud L. An analysis of formal interagent dialogues. In: Castel- franchi C, Johnson WL, editors. Proc First Int Joint Conf on Autonomous Agents and Multi-Agent Systems (AAMAS 2002). New York: ACM Press; 2002. pp 394-401. Pekeé A, Rothkopf MH. Combinatorial auction design. Working Paper. Presentation to Workshop on Electronic Market Design, Infonomics Group, University of Maastricht, Maas- tricht, The Netherlands; July 2001. McBurney P, Parsons 8, Wooldridge M. Desiderata for agent argumentation protocols. In: Castelfranchi C, Johnson WL, editors. Proc First Int Joint Conf on Autonomous Agents and Multi-Agent Systems (AAMAS 2002), Bologna, Italy. New York: ACM Press; 2002. pp 402-409. International Journal of Intelligent Systems DOI 10.1002/int 128 58. 59. 60. 61. 62. 63. 64. 65. 66. 67. 68. 69. 70. 71. 72. 73. TA. 75. 76. 77. 7B. 79. MCBURNEY, HITCHCOCK, AND PARSONS Wittgenstein L. Tractatus logico-philosophicus. London, UK: Routledge and Kegan Paul; 1922. Tarski A. The concept of truth in formalized languages. In: Woodger JH, trans. Logic, semantics, metamathematics. Oxford, UK: Clarendon Press; 1956. pp 152-278. Gollop SJ. Paradoxes of the black box: The Allais paradox, instransitive preferences and orthodox decision theory. M.A. thesis. Auckland, New Zealand: Department of Philoso- phy, University of Auckland; 2000. Mandler M. A difficult choice in preference theory: Rationality implies completeness or transitivity but not both. In: Millgram E, editor. Varieties of practical reasoning. Cam- bridge, MA: MIT Press; 2001. pp 373-402. 
van Eijk RM. Programming languages for agent communications. Ph.D. thesis. Utrecht, The Netherlands: Department of Computer Science, Utrecht University; 2000. Tennent RD. Semantics of programming languages. Hemel Hempstead, UK: Prentice- Hall; 1991. Nii HP. Blackboard systems (part one): The blackboard model of problem solving and the evolution of blackboard architectures. AI Mag 1986;Summer:38-53. Foundation for Intelligent Physical Agents (FIPA). Dutch Auction Interaction Protocol Specification. Technical Report XCO0032F; August 10, 2001. Foundation for Intelligent Physical Agents (FIPA). English Auction Interaction Protocol Specification. Technical Report XCO0031F; August 10, 2001. Flores RA, Kremer RC. Bringing coherence to agent conversations. In: Wooldridge MJ, Weil G, Ciancarine P, editors. Agent-Oriented Software Engineering I: Second Int Work- shop (AOSE-2001), Montreal, Canada, May 29, 2001. Lecture Notes in Computer Science 2222. Berlin: Springer; 2002. pp 50-67. Greaves M, Holmback H, Bradshaw J. What is a conversation policy? In: Dignum F, Greaves M, editors. Issues in agent communication. Lecture Notes in Artificial Intelligence 1916. Berlin: Springer; 2000. pp 118-131. Maudet N, Evrard F. A generic framework for dialogue game implementation. In: Proc Second Workshop on Formal Semantics and Pragmatics of Dialog, 13th Int Twente Work- shop on Language Technology (TWLT 13). Universite Twente, The Netherlands, Centre for Telematics and Information Technology; 1998. Sycara K. Persuasive argumentation in negotiation. Theor Decis 1990;28:203-242. Allen JF, Schubert LK, Ferguson G, Heeman P, Hwang CH, Kato T, Light M, Martin NG, Miller BW, Poesio M, Traum DR. The TRAINS project: A case study in building a con- versational planning agent. J Exper Theor Artif Intell 1995;7:7-48. Dignum F, Dunin-Keplicz B, Verbrugge R. Agent theory for team formation by dialogue. In: Castelfranchi C, Lespérance Y, editors. Proc Seventh Int Workshop on Agent Theories, Architectures, and Languages (ATAL-2000), Boston; 2000. pp 141-156. Parsons S, Sierra C, Jennings NR. Agents that reason and negotiate by arguing. Logic Comput 1998;8:261-292. Grosz BJ, Kraus S$. The evolution of SharedPlans. In: Wooldridge MJ, Rao A, editors. Foundations of rational agency. Amsterdam: Kluwer; 1999. Gordon TF, Karacapilidis N. The Zeno argumentation framework. In: Proc Sixth Int Conf on AI and Law. New York: ACM Press; 1997. pp 10-18. Karacapilidis N, Papadias D, Gordon T, Voss H. Collaborative environmental planning with GeoMed. Eur J Oper Res 1997;102:335-346. Rittel HWJ, Webber MM. Dilemmas in a general theory of planning. Pol Sci 1973;4: 155-169. Karacapilidis N, Papadias D. Computer supported argumentation and collaborative deci- sion making: The HERMES system. Inform Syst 2001;26:259-277. Luehrs R, Malsch T, Voss K. Internet, discourses, and democracy. In: Terano T, Nishida T, Namatame A, Tsumoto S, Ohsawa Y, Washio T, editors. New Frontiers in Artificial Intel- ligence: Joint JSAI 2001 Workshop Post Proceedings. Lecture Notes in Artificial Intelli- gence 2253. Berlin: Springer; 2001. pp 67-74. International Journal of Intelligent Systems DOI 10.1002/int 80. 81. 82. 83. 84. 85. 86. 87. EIGHTFOLD WAY OF DELIBERATION DIALOGUE 129 Conklin J, Begeman ML. gIBIS: A hypertext tool for exploratory policy discussion. In: Proc Second Conf on Computer-Supported Co-operative Work. New York: ACM Press; 1988. pp 140-152. Nakata K. Enabling public discourse. 
In: Terano T, Nishida T, Namatame A, Tsumoto S, Ohsawa Y, Washio T, editors. New Frontiers in Artificial Intelligence: Joint JSAI 2001 Workshop Post Proceedings. Lecture Notes in Artificial Intelligence 2253. Berlin: Springer; 2001. pp 59-66. Karacapilidis N, Moraitis P. Inter-agent dialogues in electronic marketplaces. Comput Intell 2004;20:1-17. Karacapilidis N, Moraitis P. Engineering issues in inter-agent dialogues. In: van Harmelen F, editor. Proc 15th European Conf on Artificial Intelligence (ECAI 2002), Lyon, France; 2002. pp 58-62. Sierra C, Jennings NR, Noriega P, Parsons S. A framework for argumentation-based nego- tiation. In: Singh MP, Rao A, Wooldridge MJ, editors. Intelligent Agents IV: Agent Theo- ries, Architectures, and Languages, Proc Fourth Int ATAL Workshop. Lecture Notes in Artificial Intelligence 1365. Berlin: Springer; 1998. pp 177-192. Church A. A formulation of the simple theory of types. J Symbol Logic 1940;5:56-68. Ambler SJ. A categorical approach to the semantics of argumentation. Math Struct Com- put Sci 1996;6:167-188. Fikes RE, Nilsson NJ. STRIPS: A new approach to the application of theorem proving to problem solving. Artif Intell 1971;2:189-208. APPENDIX: AXIOMATIC SEMANTICS In this appendix, we define the preconditions for the legal utterance of locu- tions, and the postconditions that occur upon their utterances, for each of the locutions of the Deliberation Dialogue Framework protocol presented in Sec- tion 4. Such a presentation in terms of pre- and postconditions is commonly known in AI as a STRIPS-like notation, following Ref. 87. Within the theory of computer programming languages it is also called an axiomatic semantics for the language: 62,63 L1. The open_dialogue (.) locution: Locution: open_dialogue(P;,¢?), where q is a sentence of type action or a sentential function whose values are of type action (possibly conjoined with the sentence that exactly one sequence of objects satisfies the function). Preconditions: There must have been no prior utterance of the locution open_dialogue(P,,q?) by any participant P; within the dialogue. Meaning: Participant P; proposes the opening of a deliberation dialogue to consider the governing question qg?, where q is a sentence of type action or a sentential function whose values are of type action (possibly conjoined with the sentence that exactly one sequence of objects satisfies the func- tion). A dialogue may only commence with this move. Response: No response required. Other intending participants may respond with the enter_dialogue(.) locution. Commitment Store Update: No effects. International Journal of Intelligent Systems DOI 10.1002/int 130 MCBURNEY, HITCHCOCK, AND PARSONS L2. The enter_dialogue(.) locution: Locution: enter_dialogue(P;,¢?), where g is a sentence of type action or a sentential function whose values are of type action (possibly conjoined with the sentence that exactly one sequence of objects satisfies the function). Preconditions: A participant P;, where P; and P; are distinct, must previously have uttered the locution open_dialogue(P,,q?). Meaning: Intending participant P; indicates a willingness to join a delibera- tion dialogue to consider the governing question g?, where gq is a sentence of type action or a sentential function whose values are of type action (pos- sibly conjoined with the sentence that exactly one sequence of objects sat- isfies the function). All intending participants other than the speaker of open_dialogue(.) must announce their participation with this move. 
Response: No response required. This locution is a precondition for all locu- tions other than open_dialogue(.), that is, an intending speaker P, of any other locution must have previously uttered enter_dialogue(P,,q?). As soon as one participant has uttered the enter_dialogue(P,,q?) locution, the dia- logue is said to be Open. Commitment Store Update: No effects. Because all the locutions listed below have a common precondition, namely that the speaker P; has previously uttered either the locution open_dialogue(P,,q?) or the locution enter_dialogue (P;,9), we do not list this precondition under each locution; only those preconditions specific to the locution concerned are listed. Likewise, all locutions other than open_dialogue(P,,¢?) and enter_dialogue(P,,q¢?) require that the speaker not have previously withdrawn from the dialogue, and this precondition is also not listed explicitly. L3. The propose (.) locution: Locution: propose(P,, type, t), where f is a sentence and type is an element of the set {action, goal, constraint, perspective, fact, evaluation}. Preconditions: No agent P; has previously uttered propose(P,, type, t). In addition, before an agent P; may utter propose(P;, action, a), some agent P; (possibly P;) must have uttered either propose(P;, type, t) or assert (P;, type, t) for some type € { goal, constraint, perspective, fact}. Meaning: Participant P; proposes sentence f as a valid instance of type type. Response: No response required. Commitment Store Update: No effects. L4. The assert (.) locution: Locution: assert(P;, type, t), where f is a sentence and type is an element of the set {action, goal, constraint, perspective, fact, evaluation}. Preconditions: Agent P; has not previously uttered assert (P;, type, t). In addi- tion, before an agent P; may utter assert(P;, evaluation, e), some agent P; (possibly P;) must have uttered either propose (P;, action, a) or assert(P;, action, a) for some action a that is referenced in sentence e. Meaning: Participant P; asserts sentence f as a valid instance of type type. International Journal of Intelligent Systems DOI 10.1002/int EIGHTFOLD WAY OF DELIBERATION DIALOGUE 131 Response: No response required. Commitment Store Update: The 2-tuple (type, tf) is inserted into CS(P;), the Commitment Store of participant P;. In the case in which agent P; utters the locution assert( P;, action, t) and this follows an utterance of move(P,, action, t) by some other agent P;, then any earlier entry in the Commitment Store of participant P; of the form (action, s), for some s, is simultaneously removed from the Commitment Store CS(P;). LS. The prefer(.) locution: Locution: prefer (P,, a,b), where a and b are sentences of type actions. Preconditions: Some participants P; and P,, possibly including P;, must pre- viously have uttered the locution assert(P,, evaluation, e) and the locution assert (P,, evaluation, f ), where e and f are sentences of type evaluation that refer, respectively, to action-options a and b. Meaning: Participant P; indicates a preference for action-option a over action- option b. Response: No response required. Commitment Store Update: The 3-tuple (prefer, a, b) is inserted into CS(P;), the Commitment Store of P;. L6. The ask_ justify (.) locution: Locution: ask_justify(P,,P;, type, t), where type is an element of the set {action, goal, constraint, perspective, fact, evaluation}. Preconditions: Participant P; has previously uttered the locution assert (P;, type,t) and this utterance has not subsequently been retracted by P;. 
Meaning: Participant P; asks participant P; to provide a justification of sen- tence f of type type, where (type, t) € CS(P;). Response: P; must respond in one of the following three ways: e Retract the sentence f. e Seek to persuade P; in an embedded persuasion dialogue that sentence f is a valid instance of type type. e Seek to persuade FP; in an embedded persuasion dialogue that no justifi- cation is required for the assertion that f is a valid instance of type type. Commitment Store Update: No effects. L7. The move(.) locution: Locution: move(P;, action, a), where a is a sentence of type action. Preconditions: Some participant P;, possibly P;, must previously have uttered either propose (P;, action, a) or assert(P;, action, a), and such an utterance has not subsequently been retracted by the participant who uttered it. Meaning: Participant P; proposes that each participant pronounce on whether they assert sentence a as the action to be decided upon by the group. Response: Other participants P, must each respond with either an utterance of assert(P,,action,a) or an utterance of reject(P;,action,a). No other response is permitted. International Journal of Intelligent Systems DOI 10.1002/int 132 MCBURNEY, HITCHCOCK, AND PARSONS Commitment Store Update: The 2-tuple (action, a) is inserted into CS(P;). In addition, any earlier entry in the Commitment Store of participant P; of the form (action, s), for some s, is simultaneously removed from the Com- mitment Store CS(P;). L8. The reject(.) locution: Locution: reject(P,, action, a), where a is a sentence of type action. Preconditions: Some participant P;, not P;, has previously uttered move (P;, action, a). Meaning: Participant P; wishes to reject the assertion of action a as the action to be decided on by the group. Response: No response is required. Commitment Store Update: If the 2-tuple (action, a) is contained in CS(P;) prior to this utterance, then it is deleted. L9. The retract(.) locution: Locution: retract(P,, locution), where locution is one of the locutions, assert(.), move(.), or prefer (.). Preconditions: Participant P; must have previously uttered and not sub- sequently retracted the locution /ocution. Meaning: Participant P; expresses a retraction of a previous utterance locu- tion, where locution is one of the following three locutions: assert(P;, type, t), move(P;, action, a), or prefer (P;, a, b). Response: No response required. Commitment Store Update: Exactly one of (a) the 2-tuple (type, t), (b) the 2-tuple (action, a), or (c) the 3-tuple (prefer,a,b) is deleted from CS(P;), according to whichever of the three possible prior locutions is being retracted. L10. The withdraw_dialogue(.) locution: Locution: withdraw_dialogue(P;,¢?), where g is a sentence of type action or a sentential function whose values are of type action (possibly con- joined with the sentence that exactly one sequence of objects satisfies the function). Preconditions: Participant P; must not previously have uttered a withdraw_ dialogue (P;,¢?) locution. Meaning: Participant P; announces her withdrawal from the deliberation dia- logue considering the governing question g?. Response: No response required. If only two participants remain in a dia- logue and one of these utters this locution, the dialogue terminates. Commitment Store Update: No effects. International Journal of Intelligent Systems DOI 10.1002/int
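To illustrate how the commitment store updates specified above might be realized in software, the following minimal Python sketch (ours, illustrative only; the class and method names, and the example sentences, are not part of the protocol definition) records the effects of the assert(.), prefer(.), move(.), reject(.), and retract(.) locutions on a single participant's store.

from dataclasses import dataclass, field
from typing import Set, Tuple

SENTENCE_TYPES = {"action", "goal", "constraint", "perspective", "fact", "evaluation"}

@dataclass
class CommitmentStore:
    # Publicly readable commitment store CS(Pi) of one participant.
    owner: str
    entries: Set[Tuple] = field(default_factory=set)

    def _drop_action_entries(self) -> None:
        # Earlier (action, s) entries are removed when a new action commitment
        # replaces them (see locutions L4 and L7 above).
        self.entries = {e for e in self.entries if e[0] != "action"}

    def on_assert(self, sentence_type: str, t: str, follows_move: bool = False) -> None:
        # assert(.): insert (type, t); an assert of an action that answers a prior
        # move(.) by another agent replaces any earlier action entry.
        assert sentence_type in SENTENCE_TYPES
        if sentence_type == "action" and follows_move:
            self._drop_action_entries()
        self.entries.add((sentence_type, t))

    def on_prefer(self, a: str, b: str) -> None:
        # prefer(.): insert the 3-tuple (prefer, a, b).
        self.entries.add(("prefer", a, b))

    def on_move(self, a: str) -> None:
        # move(.): insert (action, a), replacing any earlier action entry.
        self._drop_action_entries()
        self.entries.add(("action", a))

    def on_reject(self, a: str) -> None:
        # reject(.): delete (action, a) if it is present.
        self.entries.discard(("action", a))

    def on_retract(self, entry: Tuple) -> None:
        # retract(.): delete exactly one previously inserted tuple.
        self.entries.discard(entry)

if __name__ == "__main__":
    cs = CommitmentStore("P1")
    cs.on_assert("goal", "reduce congestion")                 # hypothetical sentences
    cs.on_assert("action", "introduce a congestion charge")
    cs.on_move("ban private cars from the city centre")       # replaces the earlier action entry
    cs.on_prefer("ban private cars from the city centre", "introduce a congestion charge")
    cs.on_retract(("goal", "reduce congestion"))
    print(sorted(cs.entries))

Rules not fixed by the protocol definition, such as requiring a participant's asserted actions to be consistent with constraints or preferences asserted earlier, could be layered on top of bookkeeping of this kind by individual system designers, as discussed in Section 7.1.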