Q. So finding an A* algorithm is easy?
A. Yes. Exactly!
You have just defined breadth-first search.
f(n) = g(n) + 0.
Breadth-first is an A* algorithm.
The problem is it takes too long.
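To make that concrete, here is a minimal A* sketch in Python (an illustration under assumed names - astar, walls, width, height are not from the notes): a 4-connected, unit-cost grid. Passing h(n) = 0 makes f(n) = g(n), and the search expands states in breadth-first (uniform-cost) order.

```python
# Minimal A* sketch on a 4-connected, unit-cost grid (illustrative).
# With h = lambda n: 0, we get f(n) = g(n) + 0 and the search
# degenerates to breadth-first / uniform-cost order.
import heapq
import itertools

def astar(start, goal, walls, width, height, h):
    counter = itertools.count()          # tie-breaker so heap entries stay comparable
    frontier = [(h(start), 0, next(counter), start, None)]   # (f, g, tie, state, predecessor)
    parent = {}                          # state -> predecessor; doubles as the closed set
    best_g = {start: 0}
    expanded = 0
    while frontier:
        f, g, _, n, prev = heapq.heappop(frontier)
        if n in parent:                  # already expanded via a path at least as cheap
            continue
        parent[n] = prev
        expanded += 1
        if n == goal:                    # reconstruct the path back to start
            path = []
            while n is not None:
                path.append(n)
                n = parent[n]
            return path[::-1], expanded
        x, y = n
        for m in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= m[0] < width and 0 <= m[1] < height
                    and m not in walls
                    and (m not in best_g or g + 1 < best_g[m])):
                best_g[m] = g + 1
                heapq.heappush(frontier, (g + 1 + h(m), g + 1, next(counter), m, n))
    return None, expanded                # no path exists
```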
The trick is to find a non-zero h(n) that will prune the search, but that is still always an underestimate.
The states searched by any A* algorithm will be a subset of the states searched by breadth-first.
It may not be hard to find a non-zero h(n) that is still always optimistic.
Then you have an algorithm that is guaranteed to find the optimal path (eventually).
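For example (an illustration, not from the notes): on the 4-connected grid sketched above, Manhattan distance to the goal can never overestimate the true remaining cost h*(n), since every move covers at most one unit of it. So it is a non-zero heuristic that is still always optimistic. The names h_zero and h_manhattan are assumptions for illustration:

```python
# Two admissible heuristics for the grid sketch above (illustrative names).
def h_zero(n):
    return 0                 # f(n) = g(n) + 0: breadth-first, prunes nothing

def h_manhattan(goal):
    gx, gy = goal
    def h(n):
        # Each 4-way move reduces this by at most 1, so it never
        # overestimates the true remaining cost h*(n): always optimistic.
        return abs(n[0] - gx) + abs(n[1] - gy)
    return h
```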
Obviously some h(n)'s are better (weed out more states) than others.
Imagine we have two A* heuristics.
h1(n) ≤ h*(n) for all n.
h2(n) ≤ h*(n) for all n.
If h1(n) ≤ h2(n) for all n, we say that heuristic h2 is more informed (i.e. better) than h1.
But we still need h2(n) ≤ h*(n) for all n.
In short: h1(n) ≤ h2(n) ≤ h*(n) for all n means h2 is more informed (better) than h1.
Exercise - Prove that A* using the more informed h2 searches a subset of the states searched by A* using h1.
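The effect can also be seen empirically (an illustrative run, not a proof), reusing the astar, h_zero and h_manhattan sketches above with an invented wall layout:

```python
# Compare states expanded by the less and more informed heuristics.
walls = {(4, y) for y in range(1, 8)}     # a wall across the grid, one gap at (4, 0)
_, n1 = astar((0, 0), (7, 7), walls, 8, 8, h_zero)              # h1 = 0
_, n2 = astar((0, 0), (7, 7), walls, 8, 8, h_manhattan((7, 7))) # h2 = Manhattan
print(n1, n2)                             # expect n2 <= n1: h2 prunes more
```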
One can imagine a sequence of search spaces, each smaller than the previous one, converging on the optimal solution.
Luger Fig 4.28
"Control strategy cost" = Cost of calculating heuristic.
"Rule application cost" = Number of states searched.
LHS - Simple heuristic. Quick to calculate.
But doesn't prune much - huge space to search.
RHS - Good heuristic. Cuts the search space.
But calculating the heuristic takes time.
Trade-off - Find the middle point where the total computational cost of problem solving (heuristic calculation plus search) is lowest.
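A back-of-the-envelope model of that trade-off (all numbers invented for illustration): the quantity to minimise is roughly (states expanded) × (rule application cost + heuristic cost per state), and the cheapest total often lies between the two extremes.

```python
# Invented numbers illustrating the trade-off in Luger Fig 4.28:
# total cost = states expanded x (rule application + heuristic cost per state).
candidates = [
    # (heuristic, states expanded, heuristic cost per state in microseconds)
    ("h = 0 (breadth-first)", 1_000_000,   0),
    ("cheap heuristic",          50_000,   5),
    ("elaborate heuristic",       2_000, 500),
]
for name, states, h_cost in candidates:
    total = states * (1 + h_cost)         # assume 1 us of bookkeeping per state
    print(f"{name:24s} total = {total:>10,} us")
# The middle heuristic wins: 300,000 us vs 1,000,000 us and 1,002,000 us.
```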