School of Computing. Dublin City University.
With State-Space Search, we often have a heuristic "distance from goal".
With GA, we have no idea how far away we might be from the optimal solution.
With Neural Networks, we have a measure of "error" for many inputs.
With GA, we do not have any "error" value.
A "fitness" value gives us less information
than an "error" value.
More on this in Comparison of Neural Net and GA.
With GA, we are trying to find
the n-dimensional Input that leads to the highest Output.
i.e. Maximising the function.
We don't need to build up a representation of the function.
The only requirement: given any Input, tell me the Output.
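This black-box requirement can be sketched in code. The function below is a hypothetical stand-in (my example, not from the notes): the GA never looks inside it, it only asks "given any Input, what is the Output?" and keeps the inputs that score highest.

```python
import random

# Hypothetical black-box function: the GA's only access to it is
# "given any Input, tell me the Output". (Peak at (3, -1).)
def output(x):
    return -(x[0] - 3) ** 2 - (x[1] + 1) ** 2

def ga_maximise(f, n, pop_size=50, generations=200, pm=0.1):
    # random initial scattering of n-dimensional inputs
    pop = [[random.uniform(-10, 10) for _ in range(n)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=f, reverse=True)        # rank by Output (fitness)
        parents = pop[: pop_size // 2]       # keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [random.choice(pair) for pair in zip(a, b)]  # crossover
            if random.random() < pm:                             # mutation
                i = random.randrange(n)
                child[i] += random.gauss(0, 1)
            children.append(child)
        pop = parents + children
    return max(pop, key=f)

best = ga_maximise(output, n=2)   # ends up near the peak (3, -1)
```

Note the GA builds no representation of the function at all; it only ever compares Outputs.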
Could we combine these?
Do a GA search,
and, as we go along, build up a network to represent the map.
The problem is we won't be representing it in the same detail over the whole range.
We spend more time (get more detail) on the higher peaks.
We get random coverage
at the start, but it quickly becomes non-random.
The GA can track changing fitness landscape,
in that each generation has to re-prove its fitness.
The problem is that parameters like
the mutation probability pm and crossover probability pc
and
the temperature (in simulated annealing),
which are typically decreased as the run converges,
must be increased again.
Also, instead of starting with a random scattering of individuals,
we may be starting with them all concentrated in a small, sub-optimal
area of the landscape.
How do you detect if the fitness landscape has changed?
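One possible answer (my sketch, not a method given in the notes): keep a few "sentinel" individuals, re-evaluate their fitness periodically, and treat any change in their scores as evidence the landscape has moved.

```python
# Hypothetical change detector: re-score fixed sentinel individuals;
# if their fitness values shift, the landscape has changed.
def landscape_changed(sentinels, fitness, old_scores, tolerance=1e-6):
    new_scores = [fitness(s) for s in sentinels]
    changed = any(abs(a - b) > tolerance
                  for a, b in zip(new_scores, old_scores))
    return changed, new_scores

# Usage sketch with a fitness landscape whose peak moves:
peak = [0.0]
def fitness(x):
    return -(x - peak[0]) ** 2

sentinels = [1.0, 2.0]
_, scores = landscape_changed(sentinels, fitness, [0.0, 0.0])

peak[0] = 5.0                                   # the landscape moves
changed, _ = landscape_changed(sentinels, fitness, scores)
# changed is now True
```

This only works because, as above, each generation re-proves its fitness: the same individual can be scored again at any time.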
Typically, we need a random (or pseudo-random) number generator to randomly initialise our population, and also to make probabilistic decisions thereafter.
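Both uses of the random number generator can be shown in a few lines (a minimal sketch with an assumed bit-string encoding):

```python
import random

random.seed(42)   # pseudo-random: seeding makes a GA run reproducible

# 1. Randomly initialise a population of bit-string individuals.
pop = [[random.randint(0, 1) for _ in range(8)] for _ in range(20)]

# 2. Probabilistic decisions thereafter,
#    e.g. flip each bit with mutation probability pm.
pm = 0.01
def mutate(individual):
    return [1 - bit if random.random() < pm else bit
            for bit in individual]
```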
e.g. Use a GA for scheduling that minimises some cost function, such as timetabling of lectures/exams: minimise the number of clashes, minimise the number of exams on consecutive days, etc.
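A toy version of such a cost function might look as follows. The encoding is my assumption for illustration: an individual maps each exam to a day, and we count clashes (two exams on the same day for one student) and consecutive-day pairs, weighting clashes more heavily.

```python
from itertools import combinations

# Hypothetical encoding: a timetable maps exam name -> day number,
# e.g. {"maths": 1, "physics": 1, "history": 3}.
def cost(timetable, students):
    clashes = consecutive = 0
    for exams in students:                  # the exams one student sits
        for a, b in combinations(exams, 2):
            gap = abs(timetable[a] - timetable[b])
            if gap == 0:
                clashes += 1                # same day: a clash
            elif gap == 1:
                consecutive += 1            # consecutive days
    return 10 * clashes + consecutive       # clashes weighted higher

students = [["maths", "physics"], ["physics", "history"]]
print(cost({"maths": 1, "physics": 1, "history": 3}, students))  # → 10
```

The GA then searches for the timetable minimising this cost, i.e. maximising fitness = minus cost. Again it needs no model of the function: given any timetable, the cost function tells it the Output.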