Architectures of Mind
- What does the whole mind look like?
Are there lots of semi-intelligent sub-minds talking to each other?
Network, Hierarchy or Society?
Does input feed into many brains or one?
Who is in charge?
Where am I? What is consciousness?
Action Selection
- As a more specific example of the above.
We know how to get a machine to solve one problem.
How does the creature deal with multiple problems
at once?
How does it time-slice its day?
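One minimal way to picture this (a toy sketch, not any particular theory from these notes): a winner-take-all competition between drives, where acting on a drive satisfies it and every drive's urgency grows back over time. The drives and numbers below are invented for illustration.

```python
# Toy action selection: winner-take-all among competing drives.
# Acting on a drive resets its urgency; all urgencies then grow again.

def select_action(drives):
    """Pick the behaviour belonging to the most urgent drive."""
    return max(drives, key=drives.get)

# Invented drives with invented starting urgencies.
drives = {"eat": 0.3, "drink": 0.6, "sleep": 0.1}

schedule = []
for step in range(5):
    winner = select_action(drives)
    schedule.append(winner)
    drives[winner] = 0.0          # acting satisfies the winning drive
    for d in drives:
        drives[d] += 0.2          # every urgency grows over time

print(schedule)   # → ['drink', 'eat', 'sleep', 'drink', 'eat']
```

Even this crude scheme time-slices: no drive monopolises the creature, because attending to one lets the others build up.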
"Learning to Learn"
- How does the creature generate goals for itself
in the first place?
Machine learning algorithms all tend to learn for a while
and then converge (stop learning).
Why do humans not converge?
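Convergence is easy to see in a toy example (this is a generic least-squares learner, not any specific algorithm from these notes): a single weight fitted to y = 2x by gradient descent. The update sizes shrink towards zero, and unless the world changes, the learner effectively stops learning.

```python
# Toy gradient-descent learner fitting y = 2x.
# Watch the update magnitudes collapse: that collapse is "convergence".

w = 0.0          # the single weight being learned
lr = 0.1         # learning rate
data = [(x, 2 * x) for x in range(1, 5)]   # target function y = 2x

updates = []
for epoch in range(50):
    total_update = 0.0
    for x, y in data:
        error = y - w * x
        step = lr * error * x / len(data)  # gradient step on squared error
        w += step
        total_update += abs(step)
    updates.append(total_update)

print(round(w, 3))                     # → 2.0 : the weight has settled
print(updates[-1] < updates[0] / 100)  # → True: late updates are tiny
```

Once the error hits zero the algorithm has nothing left to do; it never invents a new problem for itself the way a human does.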
Symbol-grounding, Evolution of language.
- What is language?
How do creatures processing numerical sensory data
end up processing symbolic "words" with meanings?
What does "chair" mean, internally?
Is it a meaningless token like
"Word number 5099"
being passed around?
Or is it a whole specialised "chair machine",
firing away?
Do parts of the brain talk to each other?
Do we have an internal language?
Is it like English, or is it something more messy?
Will sub-symbolic AI plug in neatly to symbolic AI?
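The token-versus-machine contrast above can be sketched in code (with entirely made-up features): a bare word ID supports only identity tests, while a grounded representation carries internal structure you can reason from.

```python
# "chair" as a bare token: an arbitrary ID with no internal structure.
token_chair = 5099   # "Word number 5099" -- means nothing by itself

# "chair" as something grounded: a bundle of sensory expectations.
# The features here are invented for illustration.
grounded_chair = {
    "affords_sitting": True,
    "typical_height_m": 0.45,
    "has_legs": True,
}

def can_sit_on(concept):
    """Inference is possible only because the concept has contents."""
    return concept.get("affords_sitting", False)

print(can_sit_on(grounded_chair))   # → True
# The bare token offers nothing comparable: all you can ask of 5099
# is whether it equals some other number.
```

The open question is whether the second kind of structure can arise from numerical sensory data, or whether tokens passed around an architecture stay as hollow as the first.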
Robots or simulation?
This is related to the previous issue, the "symbol-grounding" problem.
We might be able to give words real meaning if we have real bodies.
Robots are grounded in the real world.
But experiments in software-only worlds
are easier and cheaper to set up.
The only intelligent things in the universe that we know of
have real bodies in the real world.
They are not software-only.
The Symbol-grounding problem:
A 3-month-old baby starts to grasp objects and realise what objects are.
Can a computer ever learn what an object is if it has no body?