Planning Based on Model Checking - Automated Planning - Lecture Slides

Lecture slides from Automated Planning covering planning based on model checking: actions with multiple possible outcomes, nondeterministic systems, Markov Decision Processes, execution structures as graphs of execution paths, and types of solutions.

Typology: Slides · 2012/2013 · Uploaded 03/21/2013 by dharmpaal
Chapter 17
Planning Based on Model Checking
Lecture slides for
Automated Planning: Theory and Practice

Motivation

  • Actions with multiple possible outcomes
    • Action failures
      • e.g., gripper drops its load
    • Exogenous events
      • e.g., road closed
  • Nondeterministic systems are like Markov Decision Processes (MDPs), but without probabilities attached to the outcomes
  • Useful if accurate probabilities aren’t available, or if probability calculations would introduce inaccuracies

[Figure: blocks a, b, c — grasp(c) has an intended outcome (c in the gripper) and an unintended outcome (gripper drops c)]
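The "MDP without probabilities" view amounts to a transition function γ that maps each state-action pair to a *set* of possible successor states. A minimal Python sketch; the state and action names here are illustrative, loosely following the gripper example above:

```python
# A nondeterministic planning domain: gamma maps (state, action) to the
# SET of possible successor states -- like an MDP, but with no
# probabilities attached to the outcomes.
# State names are illustrative (an assumption, not from the slides).
gamma = {
    ("c-on-table", "grasp(c)"): {"c-in-gripper",   # intended outcome
                                 "c-on-floor"},    # gripper drops its load
    ("c-on-floor", "grasp(c)"): {"c-in-gripper",
                                 "c-on-floor"},
}

def successors(state, action):
    """All states the action may nondeterministically lead to."""
    return gamma.get((state, action), set())
```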

Example

  • Robot r1 starts at location l1
  • Objective is to get r1 to location l4
  • π1 = {(s1, move(r1,l1,l2)), (s2, move(r1,l2,l3)), (s3, move(r1,l3,l4))}
  • π2 = {(s1, move(r1,l1,l2)), (s2, move(r1,l2,l3)), (s3, move(r1,l3,l4)), (s5, move(r1,l5,l4))}
  • π3 = {(s1, move(r1,l1,l4))}

[Figure: map of locations with Start at l1 and Goal at l4; states s1–s5]

Execution Structures

  • Execution structure for a policy π: the graph of all of π’s execution paths
  • Notation: Σπ = (Q, T)
    • Q ⊆ S
    • T ⊆ S × S
  • π1 = {(s1, move(r1,l1,l2)), (s2, move(r1,l2,l3)), (s3, move(r1,l3,l4))}
  • π2 = {(s1, move(r1,l1,l2)), (s2, move(r1,l2,l3)), (s3, move(r1,l3,l4)), (s5, move(r1,l5,l4))}
  • π3 = {(s1, move(r1,l1,l4))}

[Figure: execution structures of π1, π2, and π3 over states s1–s5]

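The execution structure Σπ = (Q, T) can be computed by forward-chaining π from the initial state, collecting every state and edge π's execution paths can reach. A sketch in Python; the transition relation `gamma` is an assumption reconstructed from the slides' example (e.g., move(r1,l2,l3) may end at l3 as intended or slip to l5):

```python
# Transition relation reconstructed from the example map (an assumption).
gamma = {
    ("s1", "move(r1,l1,l2)"): {"s2"},
    ("s2", "move(r1,l2,l3)"): {"s3", "s5"},   # nondeterministic outcome
    ("s3", "move(r1,l3,l4)"): {"s4"},
    ("s5", "move(r1,l5,l4)"): {"s4"},
    ("s1", "move(r1,l1,l4)"): {"s1", "s4"},   # may fail and stay at l1
}

def execution_structure(pi, s0):
    """Build Sigma_pi = (Q, T): all states and edges reachable under pi."""
    Q, T = {s0}, set()
    frontier = [s0]
    while frontier:
        s = frontier.pop()
        if s in pi:                          # pi assigns an action to s
            for s2 in gamma[(s, pi[s])]:
                T.add((s, s2))
                if s2 not in Q:
                    Q.add(s2)
                    frontier.append(s2)
    return Q, T

pi1 = {"s1": "move(r1,l1,l2)", "s2": "move(r1,l2,l3)", "s3": "move(r1,l3,l4)"}
Q, T = execution_structure(pi1, "s1")
```

For π1 this yields all five states and the four edges of its execution paths; note that s5 has no outgoing edge, since π1 assigns it no action.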

Types of Solutions

  • Weak solution: at least one execution path reaches a goal
  • Strong solution: every execution path reaches a goal

[Figure: three execution structures with actions a0 and a1, illustrating weak and strong solutions]
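Given the execution structure, both properties reduce to graph search: weak is plain reachability of a goal state, while strong requires that no execution path can loop forever or halt outside the goal. A sketch under the same assumed reconstruction of the example's transition relation:

```python
# gamma is reconstructed from the slides' example (an assumption).
gamma = {
    ("s1", "move(r1,l1,l2)"): {"s2"},
    ("s2", "move(r1,l2,l3)"): {"s3", "s5"},   # may slip to l5
    ("s3", "move(r1,l3,l4)"): {"s4"},
    ("s5", "move(r1,l5,l4)"): {"s4"},
    ("s1", "move(r1,l1,l4)"): {"s1", "s4"},   # may fail and stay at l1
}

def is_weak(pi, s0, goals):
    """At least one execution path reaches a goal."""
    seen, frontier = {s0}, [s0]
    while frontier:
        s = frontier.pop()
        if s in goals:
            return True
        for s2 in gamma.get((s, pi.get(s)), set()):
            if s2 not in seen:
                seen.add(s2)
                frontier.append(s2)
    return False

def is_strong(pi, s, goals, path=frozenset()):
    """EVERY execution path reaches a goal (no cycles, no dead ends)."""
    if s in goals:
        return True
    if s in path or s not in pi:
        return False   # a loop is possible, or execution halts short of a goal
    return all(is_strong(pi, s2, goals, path | {s})
               for s2 in gamma[(s, pi[s])])

pi1 = {"s1": "move(r1,l1,l2)", "s2": "move(r1,l2,l3)", "s3": "move(r1,l3,l4)"}
pi2 = dict(pi1, s5="move(r1,l5,l4)")
```

Under this gamma, π1 is weak but not strong (the path through s5 dead-ends), while π2 is strong.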

Finding Strong Solutions

  • Backward breadth-first search
  • StrongPreImg(S) = {(s,a) : γ(s,a) ≠ ∅, γ(s,a) ⊆ S}
    • all state-action pairs for which all of the successors are in S
  • PruneStates(π, S) = {(s,a) ∈ π : s ∉ S}
    • S is the set of states we’ve already solved
    • keep only the state-action pairs for other states
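Putting StrongPreImg and PruneStates together gives the backward breadth-first search sketched below. The control loop follows the slides; the transition relation `gamma` is a reconstruction of the example domain, so its successor sets are assumptions:

```python
# gamma reconstructed from the slides' example map (an assumption).
gamma = {
    ("s1", "move(r1,l1,l2)"): {"s2"},
    ("s1", "move(r1,l1,l4)"): {"s1", "s4"},
    ("s2", "move(r1,l2,l3)"): {"s3", "s5"},
    ("s3", "move(r1,l3,l4)"): {"s4"},
    ("s4", "move(r1,l4,l3)"): {"s3"},
    ("s4", "move(r1,l4,l5)"): {"s5"},
    ("s5", "move(r1,l5,l4)"): {"s4"},
}

def strong_pre_img(S):
    # {(s,a) : gamma(s,a) != empty and gamma(s,a) a subset of S}
    return {(s, a) for (s, a), succs in gamma.items() if succs and succs <= S}

def prune_states(pairs, S):
    # keep only pairs for states we have not already solved
    return {(s, a) for (s, a) in pairs if s not in S}

def strong_plan(S0, Sg):
    """Backward breadth-first search for a strong solution."""
    pi = set()
    while True:
        solved = Sg | {s for (s, _) in pi}
        if S0 <= solved:
            return pi            # every initial state is covered
        new = prune_states(strong_pre_img(solved), solved)
        if not new:
            return None          # failure: no strong solution exists
        pi |= new
```

On this domain, `strong_plan({"s1"}, {"s4"})` returns exactly the four pairs of π2, matching the worked example that follows.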

Example

π = failure
Sπ' = ∅
Sg ∪ Sπ' = {s4}
π'' ← PreImage = {(s3, move(r1,l3,l4)), (s5, move(r1,l5,l4))}

[Figure: map with Start and Goal; only s4 is solved so far]

Example

π' = {(s3, move(r1,l3,l4)), (s5, move(r1,l5,l4))}
Sπ' = {s3, s5}
Sg ∪ Sπ' = {s3, s4, s5}
PreImage ← {(s2, move(r1,l2,l3)), (s3, move(r1,l3,l4)), (s5, move(r1,l5,l4)), (s4, move(r1,l4,l3)), (s4, move(r1,l4,l5))}
π'' ← {(s2, move(r1,l2,l3))}

[Figure: map with Start and Goal; s3, s4, s5 solved]


Example

π = {(s3, move(r1,l3,l4)), (s5, move(r1,l5,l4))}
π' = {(s2, move(r1,l2,l3)), (s3, move(r1,l3,l4)), (s5, move(r1,l5,l4))}
Sπ' = {s2, s3, s5}
Sg ∪ Sπ' = {s2, s3, s4, s5}

[Figure: map with Start and Goal; s2, s3, s4, s5 solved]