
Chapter 9

Heuristics in Planning

Lecture slides for Automated Planning: Theory and Practice

Digression: the A* algorithm (on trees)

  • Suppose we’re searching a tree in which each edge (s, s') has a cost c(s, s')
    • If p is a path, let c(p) = sum of the edge costs
    • For classical planning, this is the length of p
  • For every state s, let
    • g(s) = cost of the path from s0 to s
    • h*(s) = least cost of all paths from s to goal nodes
    • f*(s) = g(s) + h*(s) = least cost of all paths from s0 to goal nodes that go through s
  • Suppose h(s) is an estimate of h*(s)
    • Let f(s) = g(s) + h(s)
      • f(s) is an estimate of f*(s)
    • h is admissible if for every state s, 0 ≤ h(s) ≤ h*(s)
    • If h is admissible, then f is a lower bound on f*

[Figure: search tree showing the path from s0 to s, with cost g(s), and the remaining path from s to a goal node, with cost h*(s)]
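For concreteness, here is a minimal Python sketch of A* restricted to trees, following the definitions above. The `successors`, `is_goal`, and `h` callables are assumed interfaces for illustration, not something defined in these slides.

```python
import heapq

def astar_tree(s0, successors, is_goal, h):
    """A* on a tree: every node is reached by a unique path, so no closed
    list or duplicate detection is needed.  successors(s) yields (s2, cost)
    pairs, h(s) estimates h*(s), and is_goal(s) tests for goal nodes."""
    counter = 0                                  # tie-breaker so the heap never compares states
    frontier = [(h(s0), counter, 0, s0, [s0])]   # entries are (f, tie, g, state, path)
    while frontier:
        f, _, g, s, path = heapq.heappop(frontier)   # node with smallest f = g + h
        if is_goal(s):
            return path, g                       # cost-optimal if h is admissible
        for s2, cost in successors(s):
            counter += 1
            g2 = g + cost
            heapq.heappush(frontier, (g2 + h(s2), counter, g2, s2, path + [s2]))
    return None, float("inf")                    # no goal node is reachable
```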

Hill Climbing

  • Use h as a node-selection heuristic
    • Select the node v in C for which h(v) is smallest
  • Why not use f?
  • Do we care whether h is admissible?

[Figure: current node u and its set C of successor nodes]
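A matching sketch of this selection rule, using h alone and ignoring g: from the candidate set C, move to the successor with the smallest h value. The interfaces are again illustrative assumptions.

```python
def hill_climb(s0, successors, is_goal, h, max_steps=10_000):
    """Greedy search that always moves to the successor with the smallest h.
    g is ignored entirely (hence "why not use f?"), and no optimality is
    claimed, so admissibility of h buys nothing here.
    successors(s) yields successor states."""
    u, path = s0, [s0]
    for _ in range(max_steps):
        if is_goal(u):
            return path
        C = list(successors(u))      # the candidate set C of u's successors
        if not C:
            return None              # dead end
        u = min(C, key=h)            # select the node v in C with smallest h(v)
        path.append(u)
    return None                      # gave up: plateau, local minimum, or too deep
```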

FastForward (FF)

  • Depth-first search
  • Selection heuristic: relaxed Graphplan
    • Let v be a node in C
    • Let Pv be the planning problem of getting from v to a goal
    • Use Graphplan to find a solution for a relaxation of Pv
    • The length of this solution is a lower bound on the length of a solution to Pv

[Figure: current node u and its set C of successor nodes]
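The relaxed-Graphplan idea can be sketched as a delete-relaxation heuristic: ignore delete effects, grow fact layers until the goals appear, then read off a relaxed plan. The (name, preconditions, add_effects) encoding and the function name are assumptions for illustration, not the book's representation; the greedy extraction returns an estimate of the relaxed solution length in the spirit of FF's heuristic, not the optimal (lower-bounding) one.

```python
def relaxed_plan_length(state, goals, actions):
    """Delete-relaxation sketch: build the relaxed planning graph (delete
    effects ignored) forward from `state`, then extract a relaxed plan
    backward and return its length.
    `actions` is a list of (name, preconditions, add_effects) triples over
    frozensets of ground atoms; `state` and `goals` are sets of atoms."""
    layers = [set(state)]
    level = {p: 0 for p in state}                 # level where each fact first appears
    while not goals <= layers[-1]:
        nxt = set(layers[-1])
        for name, pre, add in actions:
            if pre <= layers[-1]:
                for p in add:
                    if p not in nxt:
                        nxt.add(p)
                        level[p] = len(layers)
        if nxt == layers[-1]:
            return float("inf")                   # goals unreachable even in the relaxation
        layers.append(nxt)

    plan = []                                     # relaxed plan, extracted backward
    subgoals = {g: level[g] for g in goals}
    for k in range(len(layers) - 1, 0, -1):
        for g in [g for g, lv in subgoals.items() if lv == k]:
            for name, pre, add in actions:        # pick any achiever applicable one layer earlier
                if g in add and pre <= layers[k - 1]:
                    plan.append(name)
                    for p in pre:                 # its preconditions become new subgoals
                        subgoals.setdefault(p, level[p])
                    break
            del subgoals[g]
    return len(plan)
```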

FastForward

  • FF evaluates all the nodes in the set C of u's successors
  • If none of them has a better heuristic value than u, FF does a breadth-first search for a state with a strictly better evaluation
  • The path to the new state is added to the current plan, and the search continues from this state
  • Works well because plateaus and local minima tend to be small in many benchmark planning problems
  • Can't guarantee how fast FF will find a solution, or how good a solution it will find; however, it works pretty well on many problems
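A compact sketch of the enforced-hill-climbing loop described above; `successors`, `is_goal`, and `h` (e.g. the relaxed-plan heuristic) are assumed interfaces, and states are assumed hashable.

```python
from collections import deque

def enforced_hill_climbing(s0, successors, is_goal, h):
    """FF-style enforced hill-climbing sketch: from the current state, run a
    breadth-first search for any state whose heuristic value is strictly
    better, splice the path to it onto the plan, and repeat from there.
    successors(s) yields (action, next_state) pairs."""
    plan, u, best = [], s0, h(s0)
    while not is_goal(u):
        frontier = deque([(u, [])])      # BFS from u for a strictly better state
        seen = {u}
        found = None
        while frontier and found is None:
            s, segment = frontier.popleft()
            for a, s2 in successors(s):
                if s2 in seen:
                    continue
                seen.add(s2)
                if h(s2) < best:
                    found = (s2, segment + [a])
                    break
                frontier.append((s2, segment + [a]))
        if found is None:
            return None                  # EHC fails; the real FF then falls back to best-first search
        u, segment = found
        best = h(u)
        plan.extend(segment)             # append the escape path to the current plan
    return plan
```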

AIPS-2000 Planning Competition

  • FastForward did quite well
  • In this competition, all of the planning problems were classical problems
  • Two tracks: “fully automated” and “hand-tailored” planners
    • FastForward participated in the fully automated track
    • It got one of the two “outstanding performance” awards
    • Large variance in how close its plans were to optimal
    • However, it found them very fast compared with the other fully-automated planners

2004 International Planning Competition

  • FastForward’s author was one of the competition chairs
  • Thus FastForward did not participate

Plan-Space Planning

  • Refine = select next flaw to work on
  • Branch = generate resolvers
  • Prune = remove some of the resolvers
  • nondeterministic choice = resolver selection
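This mapping can be written down as a schematic refinement loop. Every callable here (`select_flaw`, `get_resolvers`, `prune`, `apply_resolver`) is an assumed interface standing in for the corresponding PSP component, with the nondeterministic choice realized as backtracking.

```python
def plan_space_search(plan, select_flaw, get_resolvers, prune, apply_resolver):
    """Schematic plan-space planning loop following the slide's mapping:
    refine = select the next flaw, branch = generate its resolvers,
    prune = discard some of them, and the nondeterministic choice of a
    resolver becomes a backtracking loop."""
    flaw = select_flaw(plan)                            # refine
    if flaw is None:
        return plan                                     # no flaws left: the partial plan is a solution
    resolvers = prune(get_resolvers(plan, flaw))        # branch, then prune
    for r in resolvers:                                 # resolver selection (backtrack point)
        refined = apply_resolver(plan, flaw, r)
        solution = plan_space_search(refined, select_flaw, get_resolvers,
                                     prune, apply_resolver)
        if solution is not None:
            return solution
    return None                                         # all resolvers failed: backtrack
```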

Serializing an AND/OR Tree

  • The search space is an AND/OR tree
  • Deciding what flaw to work on next = serializing this tree (turning it into a state-space tree)
    • At each AND branch, choose a child to expand next, and delay expanding the other children

[Figure: AND/OR tree for a partial plan p, with AND branches over its flaws (Goal g1, Goal g2, Constrain variable v, Order tasks) and OR branches over the resolvers of each flaw (Operator o1 … Operator on)]

One Serialization

[Figure: one serialized state-space tree, in which partial plan p first branches on the resolvers of Goal g1 (Operator o1 … Operator on) into partial plans p1 … pn, each of which still carries the remaining flaws (Goal g2, Constrain variable v, Order tasks)]

Why Does This Matter?

  • Different refinement strategies produce different serializations
    • the search spaces have different numbers of nodes
  • In the worst case, the planner will search the entire serialized search space
  • The smaller the serialization, the more likely that the planner will be efficient
  • One pretty good heuristic: fewest alternatives first

A Pretty Good Heuristic

  • Fewest Alternatives First (FAF)
    • Choose the flaw that has the smallest number of alternatives
    • In this case, unestablished precondition g1
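A one-line version of FAF, under the same assumed interfaces as the plan-space sketch earlier; it plays the role of that sketch's flaw-selection strategy.

```python
def fewest_alternatives_first(plan, open_flaws, get_resolvers):
    """Fewest Alternatives First: among the open flaws of the partial plan,
    return the one with the fewest resolvers (e.g. an unestablished
    precondition achievable by only one operator)."""
    if not open_flaws:
        return None                      # no flaws: the plan is a solution
    return min(open_flaws, key=lambda f: len(get_resolvers(plan, f)))
```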

Case Study, Continued

  • The best serialization contains Θ( b^2 k ) nodes
  • The worst serialization contains Θ(2 kb^2 k ) nodes
  • The size differs by an exponential factor
  • But both serializations are doubly exponentially large
  • This limits how good any flaw-selection heuristic can do
    • To do better, need good ways to do node selection, branching, pruning
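Taking the two sizes above at face value (reconstructed from the garbled exponents), the gap between them is exactly the 2^k factor:

```latex
\frac{2^{k}\, b^{2^{k}}}{b^{2^{k}}} = 2^{k}
\qquad\text{(an exponential gap), while } b^{2^{k}} \text{ itself is doubly exponential in } k.
```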

Resolver Selection

  • This is an “or” branch