PPLAN -- Planning with Preferences

*** January 2010: This page is currently being updated. ***

This page contains code, domain challenge problems, and experimental results for ongoing work on PPLAN, a best-first search planner for planning with rich user preferences, developed at the University of Toronto.

Contributors


Overview

PPLAN is a provably optimal best-first search planner for planning with non-Markovian qualitative preferences.

Work on PPLAN was first reported in the paper Specifying and Generating Preferred Plans by Bienvenu and McIlraith. More recent citations are included below. Contributions include:

- The preference language is rich, is amenable to integration with many existing planners, and, beyond planning, can be used to support arbitrary reasoning tasks over dynamical systems.
- The semantics of the first-order preference language is defined in the situation calculus: the truth or falsity of a preference formula is evaluated as a situation calculus formula.
- The relative preference of alternative plans is represented as a weight, computed as a function of the weights of the preference formula's component properties.
- A preference formula induces a partial order over what are effectively temporally extended goals.

PPLAN is an optimal, best-first, forward-chaining planner that generates a plan that not only achieves a user-defined goal but also conforms, where possible, to the user's preferences. PPLAN uses progression to evaluate preference formulae more efficiently.
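
To illustrate the idea of progression on a simplified, propositional fragment of the language: progression rewrites a temporal formula against the current state so that only the remainder needs to be checked against the rest of the plan. The sketch below is not the PPLAN code; the predicate names (progress/3, holds/2) and the restriction to propositional formulas are assumptions made purely for exposition.

    %% One-step progression of a propositional temporal formula (illustrative only).
    %% holds(F, S) is assumed to be supplied by the domain and decides whether
    %% atomic fluent F holds in state S.
    progress(true,  _, true).
    progress(false, _, false).
    progress(f(F), S, R) :-                        % atomic fluent formula
        ( holds(F, S) -> R = true ; R = false ).
    progress(and(F1, F2), S, and(R1, R2)) :-
        progress(F1, S, R1),
        progress(F2, S, R2).
    progress(or(F1, F2), S, or(R1, R2)) :-
        progress(F1, S, R1),
        progress(F2, S, R2).
    progress(next(F), _, F).                       % next(F) becomes F at the next state
    progress(always(F), S, and(R, always(F))) :-   % F now, and always(F) from now on
        progress(F, S, R).
    progress(eventually(F), S, or(R, eventually(F))) :-
        progress(F, S, R).

    %% Example query, assuming the fact holds(ate(pizza), s1) is in the database:
    %%   ?- progress(eventually(f(ate(pizza))), s1, R).
    %%   R = or(true, eventually(f(ate(pizza)))).

PPLAN's actual progression covers the full first-order language, including formulas about action occurrences; the point of the sketch is only the mechanism.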

PPLAN takes as input an action theory, a specification of the initial state of the system, a preference formula, and a goal.
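
As a purely illustrative example (the predicate names and the toy domain below are invented for exposition and are not the file format used by the released code), the four inputs might be written in Prolog along the following lines:

    %% Illustrative only -- not the distributed PPLAN files.
    %% States are represented as lists of fluents.
    in_state(F, [F | _]).
    in_state(F, [_ | T]) :- in_state(F, T).

    %% Action theory: actions, preconditions, and effects.
    action(cook(pizza)).
    action(order_in(thai)).
    poss(cook(Dish), S)  :- in_state(have_ingredients(Dish), S).
    poss(order_in(_), S) :- in_state(phone_works, S).
    do(cook(Dish), S, [ate(Dish) | S]).
    do(order_in(Dish), S, [ate(Dish) | S]).

    %% Initial state of the system.
    init([have_ingredients(pizza), phone_works]).

    %% Goal: something has been eaten.
    goal(S) :- in_state(ate(_), S).

    %% Preference, in informal notation: prefer a plan in which pizza is cooked,
    %% and failing that, one in which Thai food is ordered in.
    %%   eventually(occ(cook(pizza))) >> eventually(occ(order_in(thai)))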

Language Syntax

To specify preferences for planning, we appeal to a first-order preference language defined in Planning with Qualitative Temporal Preferences. Details of the syntax of the language can be found in this paper.
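
To give a rough flavour of such preferences (the authoritative syntax is the one in the paper; the concrete notation and the example weights below are only an illustration): basic desire formulas describe properties of a plan using temporal connectives, final-state conditions, and action occurrences, and atomic preference formulas order such properties from most to least preferred, e.g.

    eventually(occ(eat(pizza)))          "at some point the action eat(pizza) occurs"
    final(kitchen_clean)                 "kitchen_clean holds in the final state"
    eventually(occ(eat(pizza)))[0] >> eventually(occ(order_in(thai)))[0.4]
                                         "eating pizza is most preferred; ordering in Thai
                                          is an acceptable but less preferred alternative"

The language also provides connectives for combining such preferences into more general preference formulas; see the paper for the full syntax and semantics.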


Implementation

PPLAN is implemented in Prolog. Users must provide a plan horizon (a bound on plan length). PPLAN evaluates partial plans optimistically; that is, it uses an admissible evaluation function to guide its best-first search.
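
The following is a minimal sketch of the kind of best-first loop involved, not the released code: the frontier representation, the predicate opt_weight/3 (standing in for the optimistic, admissible evaluation of a partial plan), and the domain predicates action/1, poss/2, do/3, and goal/1 from the illustration above are all assumptions, and the real planner progresses the preference formula along each partial plan rather than evaluating it from scratch.

    %% Illustrative best-first search over partial plans (not PPLAN itself).
    %% A node is node(Weight, Length, ReversedPlan, State); the frontier is kept
    %% sorted by Weight, with lower weight meaning "more preferred", so its head
    %% is always the most promising partial plan. opt_weight(Plan, State, W) is
    %% assumed to be optimistic (admissible): it never exceeds the weight of the
    %% best possible completion of Plan, and equals the true weight for complete plans.

    pplan(Init, Horizon, Plan) :-
        opt_weight([], Init, W0),
        best_first([node(W0, 0, [], Init)], Horizon, Rev),
        rev(Rev, [], Plan).

    best_first([node(_, _, Plan, S) | _], _, Plan) :-
        goal(S), !.                                % best node satisfies the goal
    best_first([node(_, L, Plan, S) | Rest], Horizon, Answer) :-
        (   L < Horizon                            % expand only below the horizon
        ->  L1 is L + 1,
            findall(node(W, L1, [A | Plan], S2),
                    ( action(A), poss(A, S), do(A, S, S2),
                      opt_weight([A | Plan], S2, W) ),
                    Children),
            add_nodes(Children, Rest, Open)
        ;   Open = Rest                            % at the horizon: drop the node
        ),
        best_first(Open, Horizon, Answer).

    %% Insert new nodes so that the frontier stays ordered by weight.
    add_nodes([], Open, Open).
    add_nodes([N | Ns], Open0, Open) :-
        insert_node(N, Open0, Open1),
        add_nodes(Ns, Open1, Open).

    insert_node(N, [], [N]).
    insert_node(node(W, L, P, S), [node(W2, L2, P2, S2) | Rest], Open) :-
        (   W =< W2
        ->  Open = [node(W, L, P, S), node(W2, L2, P2, S2) | Rest]
        ;   Open = [node(W2, L2, P2, S2) | Rest1],
            insert_node(node(W, L, P, S), Rest, Rest1)
        ).

    %% Reverse the accumulated action list so the plan reads first-action-first.
    rev([], Acc, Acc).
    rev([X | Xs], Acc, R) :- rev(Xs, [X | Acc], R).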

You can find code, test cases, and experimental results by following the relevant links below.


Test Cases

As an example, we provide the dinner domain, specified in the following Prolog (e.g., XSB) files:

PPLAN has been tested on 60 instances of the dinner domain:


Experimental Results

We have proved that PPLAN finds a plan that is optimal with respect to the user's preferences. To evaluate how effectively our heuristic prunes the search, we compared the number of nodes expanded or considered by PPLAN with the number expanded by an equivalent breadth-first planner. The results are given here:

In these experiments, we used the following implementation of breadth-first search.


References

The principal references for PPLAN are the following:

The closest related work is that of Son and Pontelli:


Code and examples for Son and Pontelli's planner can be found here.


Related work on integrating qualitative user preferences with decision-theoretic agent programming can be found here: