"Uninformed search"
uses no problem-specific information to guide its search.
An example would be a
"brute force"
or exhaustive search
through all states.
We saw this with depth-first search and breadth-first search.
Uninformed search doesn't scale well to large problems.
Informed search / Heuristic search
"Informed search"
means we use some problem-specific information to guide the search.
Instead of systematically searching states in the order in which they present themselves, we restrict the search by first choosing a branch that is "more likely" to lead to the goal.
But how do we know which branch is "more likely"? We don't know where the goal is!
Answer:
We don't know which branch.
But maybe we can make a decent guess, a lot of the time.
We use a "heuristic" to guess what to search next.
Heuristic search gives up the idea of ever doing an exhaustive search.
It gives up the idea of guaranteeing to find the best solution.
Instead we try to do a smart search of a huge space in the limited time available.
Any heuristic can fail.
Heuristic search for Xs and Os
Let us consider heuristic search with Xs and Os (Tic-Tac-Toe).
What heuristic can we think of to guide or restrict our search of future moves and counter-moves?
Consider a heuristic of:
"Number of winning opportunities (that contain at least one of my pieces)"
in each situation.
Always take the highest (or, if tied, take any joint winner).
First move:
The heuristic says to always go for the centre move (Luger Fig 4.2).
This dumps 2/3 of the entire state space after symmetry reduction (or 8/9 if we consider our original naive search).
2nd move:
There are actually only 2 unique replies the opponent can make (Luger Fig 4.3).
Depending on which reply they make, our heuristic tells us what to do next in each case.
Oddly, in both cases, it mightn't be the move you think.
The numbers circled show the "Number of winning opportunities (that contain at least one of my pieces)" for each state.
Two ways of using the heuristic:
Use it to make the actual move.
Don't search down multiple levels.
Respond to the current state by comparing "Number of winning opportunities" among the possible immediate moves, and taking the best one.
Not AI search at all.
Just a solution.
or:
Use it to decide which child to search first.
Use it to order the children and do depth-first etc. searches as before (a sketch follows below).
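A minimal sketch of this second way, assuming hypothetical problem-specific functions children(state) and h(state) (higher h = more promising):

    # Depth-first search, but order the children so the most promising
    # (highest heuristic value) is searched first. children() and h() are
    # hypothetical stand-ins for the problem's move generator and heuristic.
    def ordered_depth_first(state, children, h, is_goal, visited=None):
        if visited is None:
            visited = set()
        if is_goal(state):
            return [state]
        visited.add(state)
        # Order the children: best-looking first, then search as usual.
        for child in sorted(children(state), key=h, reverse=True):
            if child not in visited:
                path = ordered_depth_first(child, children, h, is_goal, visited)
                if path is not None:
                    return [state] + path
        return None   # no path to the goal below this state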
"Take the state with the highest number of winning opportunities" heuristic
is a "Hill-climbing"
strategy.
Take the most promising immediate next step, without looking n steps ahead.
Imagine a fitness landscape: a curve of y against x, with several peaks of different heights. The machine cannot see this landscape.
The machine sees no equation, and no picture.
The machine produces an x value and is told the y value.
It changes x value to try to improve the y value.
Hill-climbing is where we change x to always go uphill, never downhill, until we reach a peak.
Imagine a single point on this landscape.
Pick a random x value, and see the y value.
Then move it uphill from where it is.
Imagine we change x by increments of plus or minus 0.1.
Unless the start point is already (by chance) on the slope of the global optimum, our point will move to a local optimum, not the global optimum.
Hill-climbing will not temporarily get into a worse state in order to get into a better one after that.
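A minimal sketch of this, assuming a made-up landscape function f. The machine only ever calls f; it never sees the formula.

    import math

    # Hypothetical hidden landscape: a local peak near x = 2,
    # and the higher, global peak near x = 8.
    def f(x):
        return 2 * math.exp(-(x - 2) ** 2) + 3 * math.exp(-(x - 8) ** 2)

    def hill_climb(x, step=0.1):
        """Change x in increments of plus or minus step, always uphill,
        until neither direction improves y."""
        while True:
            current, up, down = f(x), f(x + step), f(x - step)
            if up > current and up >= down:
                x += step
            elif down > current:
                x -= step
            else:
                return x   # a peak: both neighbours are worse

    print(hill_climb(1.0))   # ends near x = 2, the local optimum
    print(hill_climb(7.0))   # ends near x = 8, only because it started on that slope

Starting at x = 1.0, the point climbs to the local peak near x = 2 and stops there, exactly as described above.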
N-puzzle game
The 15-puzzle, the most common version of the N-puzzle game.
The N-puzzle is an example of where we might need to go "downhill".
To get to the goal, we might need to move some pieces temporarily out of position (see the sketch below).
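A sketch of one simple heuristic for this game, "number of tiles out of place" (the 3x3 version and its encoding are assumed for illustration), with a state where every legal move makes the heuristic worse:

    # "Tiles out of place" heuristic for the 8-puzzle (3x3 N-puzzle); 0 = blank.
    GOAL = (1, 2, 3,
            4, 5, 6,
            7, 8, 0)

    def tiles_out_of_place(state):
        """Lower is better; 0 means solved."""
        return sum(1 for tile, goal in zip(state, GOAL)
                   if tile != 0 and tile != goal)

    # Only tile 5 is out of place here (heuristic = 1), yet every legal move
    # slides a correctly placed tile out of position (heuristic becomes 2).
    # A pure hill-climber would refuse the "downhill" move needed to solve it.
    stuck = (1, 2, 3,
             4, 0, 6,
             7, 8, 5)
    print(tiles_out_of_place(stuck))   # 1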
The local optimum problem
Luger Fig 4.4
Large number is best.
We have a parent (4) who is better than any of its children (3).
So we will stay at the parent (4) and miss the global optimum (5).
Best-first search
"Best-first search" is a heuristic search.
We need a function to order the states by a heuristic estimate of how good they are, or how close to the goal they are.
Of course this will just be a guess.
This is called an "evaluation function".
Given such a heuristic evaluation function, best-first search works as follows.
Best-first search (image courtesy of Ralph Morelli).
P is the goal.
The numbers show the heuristic estimate of the fitness of each state (lowest number is best - view it as "distance from goal").
OPEN - the current fringe of the search.
CLOSED - states already visited.
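A minimal sketch of the algorithm in code, assuming problem-specific functions children(state) and h(state) with lower h better (these names are illustrative, not Morelli's):

    import heapq
    import itertools

    def best_first_search(start, is_goal, children, h):
        # OPEN: the current fringe, kept sorted by heuristic value (lowest first).
        # CLOSED: states already visited.
        tie = itertools.count()                       # tie-breaker for equal h values
        open_queue = [(h(start), next(tie), start)]   # OPEN
        closed = set()                                # CLOSED
        while open_queue:
            _, _, state = heapq.heappop(open_queue)   # jump to the best-looking state
            if is_goal(state):
                return state
            if state in closed:
                continue
            closed.add(state)
            for child in children(state):
                if child not in closed:
                    heapq.heappush(open_queue, (h(child), next(tie), child))
        return None   # OPEN exhausted: no goal found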
Notes:
When B is evaluated, its children E and F seem worse / further from the goal (our heuristic got it wrong, on either parent or child, or both).
E and F go into the sorted queue.
C now has the lowest number, so we jump there.
We jump around the graph. The next state to be evaluated may be at any level.
States actually expanded are shown in bold.
State O is a dead-end. Note the heuristic was wrong about O.
When state O turns out not to be the goal, we turn to the next in the queue - P.
Morelli is showing the space that exists, not the space the program knows about (which is smaller).
e.g. He shows us K and L and their children, but the program does not know they exist yet.
Q. Why does Morelli have heuristic 2 on state O?
It has no children, so surely the heuristic knows it is not 2.
Q. Why does Morelli have heuristic 3 on goal P?
The heuristic can be wrong, but surely it will be 0 on the goal itself.
Also won't we detect it is the goal, and therefore not even call the heuristic function?
Q. Show all the paths where the heuristic must be wrong.
We have backtracking when we go down an apparently promising but fruitless path.
This is less likely to get stuck in a local optimum than always taking the best-looking next step (hill-climbing).
Eventually some other state probably evaluates as best, and we jump to that.
Inaccurate heuristic
The heuristic evaluation function is only a guess.
It is bound to be inaccurate on some states.
Note we could get stuck down a path if the heuristic is very inaccurate.
e.g. We go down a line where the heuristic says we are 2 steps from the goal forever, no matter how deep we go.
Q. How might you solve this? (How might you compensate for an inaccurate heuristic?)
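One standard answer, sketched below: add the depth (cost so far) to the heuristic estimate, so a line that never seems to get any closer steadily loses out. This is the idea behind the A* algorithm.

    # Compensate for an inaccurate heuristic by adding depth:
    # f(state) = g(state) + h(state), where g = steps taken so far.
    def f(depth, h_estimate):
        return depth + h_estimate

    # A line where h claims "2 from goal" forever gets steadily worse...
    print([f(depth, 2) for depth in range(1, 6)])   # [3, 4, 5, 6, 7]
    # ...so shallower rivals elsewhere in the queue eventually look better.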
The evaluation function f is a heuristic estimate of how good the state is (high f good) / how far it is from the goal (low f good).
The design of heuristic evaluation functions is an empirical problem.
We can spend a lot of time designing and re-designing them.
Often there is no obvious answer.
e.g. Write a function that estimates, for any state x in chess, how far that state is from checkmate.
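For instance, a crude sketch of such a function might just count material. This is an illustrative guess, not a serious chess evaluator, and the board encoding is made up.

    # Crude chess evaluation function: material balance as a rough proxy
    # for "how far from checkmate". Traditional piece values; encoding assumed.
    PIECE_VALUES = {"p": 1, "n": 3, "b": 3, "r": 5, "q": 9, "k": 0}

    def evaluate(board):
        """board: dict of square -> piece, uppercase = White, lowercase = Black.
        Returns material balance from White's point of view (higher = better)."""
        score = 0
        for piece in board.values():
            value = PIECE_VALUES[piece.lower()]
            score += value if piece.isupper() else -value
        return score

    print(evaluate({"e1": "K", "d1": "Q", "e8": "k", "a8": "r"}))   # 9 - 5 = 4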