State space Search
3.1.3 - State space representation of a problem
Example: Representing Xs and Os as a state-space problem.
Image courtesy of Ralph Morelli. See Luger Fig II.5.
State space representation of a problem:
- All the states the system can be in are represented as nodes of a graph.
- An action that can change the system from one state to another (e.g. a move in a game) is represented by a link from one node to another.
- Links may be unidirectional (e.g. Xs and Os - can't go back) or bi-directional (e.g. a geographic move).
- Search for a solution.
- A solution might be:
  - Any path from start state to goal state.
  - The best (e.g. lowest-cost) path from start state to goal state (e.g. Travelling salesman problem).
- It may be possible to reach the same state through many different paths (obviously true in Xs and Os).
- There may be loops in the graph (we can go round in a circle). There are no loops in Xs and Os.
- Xs and Os
  - Directed acyclic graph.
  - Unidirectional. No loops. Multiple paths to the same state.
  - Multiple goal states.
- Game-tree complexity
  - 9 first moves, 8 replies, 7 replies, ...
  - 9! (9 factorial) = 362,880 different paths.
  - In fact, 9! is an upper bound - many of these games will terminate with a win before reaching the end of a path, so the number of paths is less than 9!.
  - Unlike chess, we could do an exhaustive search.
- State-space complexity
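The figures above can be checked by brute force. Here is a minimal sketch (not from the notes; the names `WINS`, `won` and `count_games` are invented here) that enumerates every complete game, counting a path as finished as soon as one side wins or the board fills up:

```python
from math import factorial

WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
        (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
        (0, 4, 8), (2, 4, 6)]              # diagonals

def won(board, player):
    return any(all(board[i] == player for i in line) for line in WINS)

def count_games(board, player):
    """Count complete games: paths that end in a win or a full board."""
    total = 0
    for i in range(9):
        if board[i] is None:
            board[i] = player
            if won(board, player) or all(c is not None for c in board):
                total += 1                         # game over: one complete path
            else:
                total += count_games(board, 'O' if player == 'X' else 'X')
            board[i] = None                        # undo the move (backtrack)
    return total

paths = count_games([None] * 9, 'X')
print(paths, factorial(9))
```

Because games that end in a win are counted once rather than extended, the total comes out well below the 9! = 362,880 upper bound.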
Reduce statespace
We can use symmetry reduction to reduce the search problem. Many states are rotations or mirror-images of another state, so we don't have to search them separately. e.g. There are basically only 3 unique first moves.
See Luger Fig 4.1.
- Symmetry reduction on the 1st level reduces the number of paths to 3 . 8! = 120,960.
- Symmetry reduction on the 2nd level reduces the number of paths to 12 . 7! = 60,480.
Symmetry reduction may require a lot of work by the human. But it is worth it if it reduces the search problem (sometimes massively). We write code to do the transform, search, and transform back.
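The "3 unique first moves" and "12 second-level states" counts can be checked mechanically. A sketch (the helper names `rotate`, `reflect` and `canonical` are invented here) that maps each board to a canonical form under the 8 rotations/reflections of the board:

```python
# Board: tuple of 9 cells in row-major order, '.', 'X' or 'O'.
def rotate(b):   # rotate the board 90 degrees clockwise
    return tuple(b[3 * (2 - c) + r] for r in range(3) for c in range(3))

def reflect(b):  # mirror the board left-right
    return tuple(b[3 * r + (2 - c)] for r in range(3) for c in range(3))

def canonical(b):
    """Smallest board among the 8 rotations/reflections of b."""
    forms = []
    for _ in range(4):
        forms += [b, reflect(b)]
        b = rotate(b)
    return min(forms)

EMPTY = ('.',) * 9

def after(b, i, mark):
    return b[:i] + (mark,) + b[i + 1:]

# The 9 possible first moves collapse to 3 unique states (corner, edge, centre).
first = {canonical(after(EMPTY, i, 'X')) for i in range(9)}
print(len(first))

# The 72 possible (X move, O reply) states collapse to 12.
second = {canonical(after(after(EMPTY, i, 'X'), j, 'O'))
          for i in range(9) for j in range(9) if i != j}
print(len(second))
```

This is the "transform" half of the idea: search only canonical states, and transform back afterwards.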
Travelling salesman problem (TSP)
Travelling salesman problem. Start at A, visit all cities, and return to A. Links show the cost of each trip (distance, money). Find the trip with minimum cost. The solution is a path, e.g. [A,D,C,B,E,A].
Image courtesy of Ralph Morelli. See Luger Fig 3.9.
Q. What is the shortest path?
Representing the Travelling salesman problem as a state-space problem.
Image courtesy of Ralph Morelli. See Luger Fig 3.10.
Start at A: 4 choices for the first step, 3 choices for the next, then 2 choices, then 1 choice.
4! (4 factorial) = 24 paths.
Q. What assumption did we make?
Q. When do we stop the search?
How big is the TSP?
In general, how many paths are there in the TSP?
- Given n cities, there are (n-1) choices for the 1st stop, (n-2) choices for the next stop, etc.
  Q. What assumption did we make?
- (n-1)! paths.
With many cities or nodes this soon becomes intractable.
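The exhaustive approach can be sketched directly. The cost matrix below is hypothetical (illustrative numbers, not the costs from Luger's figure); fixing the start city is the assumption the questions above point at:

```python
from itertools import permutations

# Hypothetical symmetric costs for 5 cities A..E (illustrative numbers,
# not the costs from Luger's figure).
CITIES = "ABCDE"
COST = {('A','B'): 7, ('A','C'): 6, ('A','D'): 10, ('A','E'): 13,
        ('B','C'): 7, ('B','D'): 10, ('B','E'): 10,
        ('C','D'): 5, ('C','E'): 9, ('D','E'): 6}

def cost(x, y):
    return COST.get((x, y)) or COST[(y, x)]

def tour_cost(tour):
    return sum(cost(a, b) for a, b in zip(tour, tour[1:]))

# Fix the start city A and try every ordering of the remaining cities:
# (n-1)! = 4! = 24 paths.
best, n_paths = None, 0
for perm in permutations(CITIES[1:]):
    tour = ('A',) + perm + ('A',)
    n_paths += 1
    if best is None or tour_cost(tour) < tour_cost(best):
        best = tour
print(n_paths)               # 24 paths examined
print(best, tour_cost(best))
```

At 50 cities this becomes 49! paths, which is why exhaustive search is hopeless beyond toy sizes.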
Restrict search / Problem-specific heuristics
With problems that scale badly, we need to restrict the search in some way. Many of these restriction ideas will be problem-specific - the idea only works on this problem and not on all problems.
Reduce statespace
Branch and bound
- Keep track of the best complete path so far. This is a bound on future candidates.
- As soon as the best possible extension to a partially-constructed path (the branch) exceeds the bound, that partial path and all its extensions are removed.
- Reduces the space searched, but there are still exponentially many paths: (const)^n.
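A sketch of branch and bound on the same hypothetical 5-city costs used above (illustrative numbers; the function name `search` is invented here). The cost of the best complete tour found so far is the bound, and any partial path that already reaches it is cut off:

```python
# Hypothetical symmetric costs for 5 cities A..E (illustrative numbers,
# not the costs from Luger's figure).
CITIES = "ABCDE"
COST = {('A','B'): 7, ('A','C'): 6, ('A','D'): 10, ('A','E'): 13,
        ('B','C'): 7, ('B','D'): 10, ('B','E'): 10,
        ('C','D'): 5, ('C','E'): 9, ('D','E'): 6}

def cost(x, y):
    return COST.get((x, y)) or COST[(y, x)]

best_cost = float('inf')
best_tour = None

def search(tour, partial):
    """Extend a partial tour, pruning any branch whose cost already meets the bound."""
    global best_cost, best_tour
    if partial >= best_cost:              # bound check: prune this branch
        return
    if len(tour) == len(CITIES):          # all cities visited: close the loop
        total = partial + cost(tour[-1], 'A')
        if total < best_cost:
            best_cost, best_tour = total, tour + ['A']
        return
    for city in CITIES:
        if city not in tour:
            search(tour + [city], partial + cost(tour[-1], city))

search(['A'], 0)
print(best_tour, best_cost)
```

The pruning skips many partial paths, but in the worst case the number examined still grows exponentially with the number of cities.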
With large state spaces (e.g. 50 cities) we need to use a heuristic.
e.g. Nearest neighbour: "Go to the nearest unvisited city."
This finds a solution quickly! (It only tries one path!)
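A sketch of the nearest-neighbour heuristic on a hypothetical 4-city cost matrix, with the numbers deliberately chosen so that the greedy tour is not optimal (these are illustrative costs, not Luger's):

```python
from itertools import permutations

# Hypothetical 4-city costs, chosen so the greedy heuristic fails.
CITIES = "ABCD"
COST = {('A','B'): 1, ('B','C'): 1, ('C','D'): 1,
        ('D','A'): 10, ('A','C'): 2, ('B','D'): 2}

def cost(x, y):
    return COST.get((x, y)) or COST[(y, x)]

def tour_cost(tour):
    return sum(cost(a, b) for a, b in zip(tour, tour[1:]))

def nearest_neighbour(start):
    """Greedy: repeatedly go to the nearest unvisited city, then return home."""
    tour = [start]
    while len(tour) < len(CITIES):
        here = tour[-1]
        tour.append(min((c for c in CITIES if c not in tour),
                        key=lambda c: cost(here, c)))
    return tour + [start]

nn_tour = nearest_neighbour('A')
best = min((('A',) + p + ('A',) for p in permutations('BCD')), key=tour_cost)
print(nn_tour, tour_cost(nn_tour))   # the greedy tour
print(best, tour_cost(best))         # the exhaustive optimum is cheaper
```

The greedy tour keeps taking cheap hops and is then forced onto the expensive D-A link home, which is exactly the failure mode Luger's altered numbers illustrate.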
Luger changes his numbers to show an example of the "Nearest neighbour" heuristic failing on the Travelling salesman problem:
Image courtesy of Ralph Morelli. See Luger Fig 3.11.
3.2 - Searching a statespace graph
Backtracking - exhaustive search: try each path in order until we find the goal.
Depth-first search
The "Depth-first search" version of exhaustive search goes down to remote descendants looking for a solution before it backtracks up to a sibling.
Depth-first search: Start at A. Search the state space systematically until we find the goal. When there are multiple children, go down the 1st child. If that fails, go back to the parent, down the 2nd child, and so on.
Image courtesy of Ralph Morelli. See Luger Fig 3.14.
Note there are multiple paths to F. If F has already been found to be a dead end when we went there from B, the algorithm should not go there a second time (from C). However, this is not a great example, since C can see the goal and so won't go to F anyway.
Depth-first search algorithm
Algorithm for depth-first exhaustive search.
Definitions:
- SL - state list - the states in the current path.
  The path keeps changing - states are added/removed as we go - until we get a path that ends in the goal.
- NSL - new state list - nodes whose existence we have become aware of, but have not visited yet.
- DE - dead ends - nodes proven not to lead to the goal.
- CS - current state.
Algorithm:
function newchildren ( state )
{
returns not all the children of state, but only the new ones to look at,
i.e. those not on DE, SL or NSL
}
SL := [Start] // CS is part of the chain
NSL := [Start]
DE := []
CS := Start
while NSL not empty do
{
if CS = goal
return SL
else if newchildren(CS) not empty
{
place newchildren(CS) on NSL
CS := first element of NSL // move down to new state
add CS to SL // CS is part of the chain
}
else // newchildren(CS) empty, so backtrack
{
while (SL not empty) and (CS == first element of SL)
{
add CS to DE
remove first element of SL
remove first element of NSL
CS := first element of NSL
}
add CS to SL
}
}
return FAIL // NSL empty - goal not found
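The pseudocode translates fairly directly into Python. A sketch, assuming the tree and goal implied by the trace below (the children of each node are read off the trace, and the goal is taken to be G):

```python
# Tree assumed from the trace (Luger Fig 3.14-style); goal assumed to be G.
CHILDREN = {'A': ['B', 'C', 'D'], 'B': ['E', 'F'], 'C': ['F', 'G'],
            'E': ['H', 'I'], 'F': ['J']}

def backtrack(start, goal):
    SL  = [start]        # state list: current path (front = deepest state)
    NSL = [start]        # new state list: known but unexamined (LIFO stack)
    DE  = []             # dead ends
    CS  = start
    while NSL:
        if CS == goal:
            return SL
        new = [c for c in CHILDREN.get(CS, [])
               if c not in DE and c not in SL and c not in NSL]
        if new:
            NSL = new + NSL          # push the new children
            CS = NSL[0]              # move down to a new state
            SL = [CS] + SL           # CS is part of the chain
        else:                        # no new children: backtrack
            while SL and CS == SL[0]:
                DE = [CS] + DE
                SL = SL[1:]
                NSL = NSL[1:]
                if not NSL:
                    return None      # nothing left to try
                CS = NSL[0]
            SL = [CS] + SL
    return None

print(backtrack('A', 'G'))
```

Running this reproduces the trace below step for step, and returns SL with the goal at the front.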
Trace:
First we work down the tree:
AFTER ITERATION  CS  SL         NSL                DE
0                A   [A]        [A]                []
1                B   [B A]      [B C D A]          []
2                E   [E B A]    [E F B C D A]      []
3                H   [H E B A]  [H I E F B C D A]  []
Now the first backtrack: H has no new children. H will have been the last link on SL, and the last state added to NSL, if we use a LIFO (stack) structure. So while [H E B A] is not empty and H is its first element, back up to the parent: SL = [E B A], NSL = [I E F B C D A]. Then go down: SL = [I E B A].
4 I [I E B A] [I E F B C D A] [H]
Now I is dead, and so is its parent E. I goes to DE: SL = [E B A], NSL = [E F B C D A], CS = E. CS is still the first element of SL, so we loop again. E goes to DE: SL = [B A], NSL = [F B C D A], CS = F, and the loop exits.
5 F [F B A] [F B C D A] [E I H]
6 J [J F B A] [J F B C D A] [E I H]
7 C [C A] [C D A] [B F J E I H]
Here we only add G to NSL, since F is already in DE:
8 G [G C A] [G C D A] [B F J E I H]
Features:
- DE stops it examining states that are already known completely.
- It only goes to new states (not already on the SL path), so it can't loop.
The "Breadth-first search" version of exhaustive search: search all children (siblings) before searching any grandchildren.
Image courtesy of Ralph Morelli.
Depth-first search examines the nodes in the order:
A, B, E, K, S, L, T, F, M, C, G, N, H, O, P, U, D, I, Q, J, R
Need to:
- Keep track of the current path from the start. This keeps changing as we search down.
- Keep track of dead ends - states where everything below has been searched.
- Keep track of known states not yet searched - this will be quite small.
Breadth-first search does each level first before moving lower, and examines the nodes in the order:
A, B, C, D, E, F, G, H, I, J, K, L, M, N, O, P, Q, R, S, T, U
Need to:
- Keep track of known states not yet searched - this will grow massively, since it is all the cousins at the current level.
Algorithm for breadth-first exhaustive search.
Definitions:
- OPEN - states known to exist but not yet examined
- CLOSED - states examined
Algorithm:
function newchildren ( state )
{
return children of state that are not already on OPEN or CLOSED
}
OPEN := [Start]
CLOSED := []
while OPEN not empty
{
remove LHS state from OPEN, call it X
if X is goal
return success
else
{
put X on CLOSED
put newchildren(X) on RHS of OPEN
}
}
return FAIL // OPEN empty - goal not found
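A sketch of the algorithm in Python, assuming the example tree implied by the traversal orders above (node labels and children read off those orders):

```python
from collections import deque

# Tree assumed from the traversal orders above (Luger-style example tree).
CHILDREN = {'A': ['B', 'C', 'D'], 'B': ['E', 'F'], 'C': ['G', 'H'],
            'D': ['I', 'J'], 'E': ['K', 'L'], 'F': ['M'], 'G': ['N'],
            'H': ['O', 'P'], 'I': ['Q'], 'J': ['R'], 'K': ['S'],
            'L': ['T'], 'P': ['U']}

def breadth_first(start, goal):
    OPEN = deque([start])   # FIFO queue: known but not yet examined
    CLOSED = []             # examined states
    while OPEN:
        X = OPEN.popleft()  # remove the LHS state
        if X == goal:
            return CLOSED + [X]          # the order states were examined in
        CLOSED.append(X)
        for c in CHILDREN.get(X, []):
            if c not in OPEN and c not in CLOSED:
                OPEN.append(c)           # put new children on the RHS
    return None

print(breadth_first('A', 'U'))
```

Examining the states level by level reproduces the breadth-first order listed earlier.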
Trace:
AFTER ITERATION  OPEN             CLOSED
0 [A] []
1 [B C D] [A]
2 [C D E F] [B A]
3 [D E F G H] [C B A]
4 [E F G H I J] [D C B A]
5 [F G H I J K L] [E D C B A]
6 [G H I J K L M] [F E D C B A]
7 [H I J K L M N] [G F E D C B A]
...
- OPEN is a FIFO (queue).
- When it finds the goal, there is no record of how it got there. To save that, we might store (state, parent) pairs on OPEN and CLOSED.
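A sketch of that idea on the same assumed example tree: a dictionary of parent links plays the role of the (state, parent) pairs, and also records which states are already on OPEN or CLOSED:

```python
from collections import deque

# Same tree as assumed from the traversal orders above.
CHILDREN = {'A': ['B', 'C', 'D'], 'B': ['E', 'F'], 'C': ['G', 'H'],
            'D': ['I', 'J'], 'E': ['K', 'L'], 'F': ['M'], 'G': ['N'],
            'H': ['O', 'P'], 'I': ['Q'], 'J': ['R'], 'K': ['S'],
            'L': ['T'], 'P': ['U']}

def breadth_first_path(start, goal):
    """BFS storing each state's parent so the path can be rebuilt at the end."""
    parent = {start: None}
    OPEN = deque([start])
    while OPEN:
        X = OPEN.popleft()
        if X == goal:
            path = []
            while X is not None:         # walk the parent links back to the start
                path.append(X)
                X = parent[X]
            return path[::-1]
        for c in CHILDREN.get(X, []):
            if c not in parent:          # parent dict doubles as the OPEN+CLOSED record
                parent[c] = X
                OPEN.append(c)
    return None

print(breadth_first_path('A', 'U'))
```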