Dr. Mark Humphrys

School of Computing. Dublin City University.



The Hardest Problem in the History of Science


Mark Humphrys

A talk given at the
Institute of Contemporary Arts, London, Feb 2000.
See full reference.

Abstract

Discussion of Artificial Intelligence (AI) tends to swing between two extremes. One is that AI is impossible - the position argued recently by Professor Roger Penrose. The other is that AI is easy and will happen soon, and humans could be threatened with extinction or enslavement - the position argued recently by Professor Kevin Warwick.

I will argue that both of these positions are mistaken, and that AI is in fact a branch of cognitive science, part of the understanding of ourselves and other animals. As such, AI is part of perhaps the hardest, most ambitious undertaking that science has ever attempted or ever will.

In response to the difficulty of this problem, I will describe the movement in AI that has essentially given up on human-level intelligence, and is working on animal-level intelligence - in the hope of producing properly grounded definitions of all the terms that are casually used at the human level - emotion, fear, love, consciousness, language, representations, memories - but whose precise definitions remain elusive.

"Animal-AI" researchers would argue that we will only really know what these things mean when we understand their history and primitive origins.



AI is impossible?



AI is not a good enough model?



AI is easy? (and AI is dangerous?)



Why AI is hard



Open Issues

  1. Architectures of Mind - What does the whole mind look like? Network, Hierarchy or Society? Does I/O link to many brains or one? Who is in charge? Where am I? What is consciousness?

  2. Action Selection - A more specific example of the above. We know how to get a creature to solve one problem. How does it deal with multiple problems at once? (See the first sketch after this list.)

  3. "Learning to Learn" - How does the creature generate goals for itself in the first place?

  4. Symbol-grounding, Evolution of language - What is language? How do creatures processing numerical sensory data end up processing symbolic "words" with meanings? What does "chair" mean, internally? Is it a meaningless token #5099 being passed around, or is it a whole specialised sub-system, firing away? Do parts of the brain talk to each other? Do we have an internal language? Is it English, or is it something messier? Will sub-symbolic AI plug in neatly to symbolic AI? (See the second sketch after this list.)

  5. Robots or simulation? - Robots are more real, may solve symbol-grounding. But experiments in simulation are often more practical. e.g. Sims evolved a world of beautiful 3-D creatures from scratch. Large-scale experiments involving the Web?
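
A small illustration of point 2 (action selection), as mentioned above. This is a minimal sketch in Python of one simple arbitration scheme, not necessarily the one discussed in the talk: several independent behaviour modules each bid for control of the creature, and a winner-take-all arbitrator lets the strongest bid act. All module names and numbers below are invented for illustration.

    # Minimal action-selection sketch: each module proposes an action with a
    # "strength" (how much it cares right now); the strongest bid wins.
    import random

    class Module:
        """One behaviour/goal, e.g. 'find food' or 'avoid predator'."""
        def __init__(self, name):
            self.name = name

        def propose(self, state):
            # In a real creature this would come from the module's own
            # (possibly learned) policy. Here: random placeholders.
            action = random.choice(["move_north", "move_south", "eat", "flee"])
            strength = random.random()
            return action, strength

    def select_action(modules, state):
        """Winner-take-all: the module with the strongest bid gets to act."""
        bids = [(m.propose(state), m.name) for m in modules]
        (action, strength), winner = max(bids, key=lambda b: b[0][1])
        return action, winner, strength

    modules = [Module("find_food"), Module("avoid_predator"), Module("explore")]
    action, winner, strength = select_action(modules, state=None)
    print(f"{winner} wins with strength {strength:.2f}: do {action}")

Even this toy version raises the hard questions of point 2: should strengths from different modules be comparable at all, and who decides the scale? In other words, who is in charge?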
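
Similarly, a toy contrast for point 4 (symbol-grounding), as mentioned above: two ways a machine might "hold" the concept of a chair. In the symbolic style, "chair" is an opaque token with an arbitrary ID; in the sub-symbolic style, it is a pattern of numerical activations, and "meaning" is closeness in that space. Both representations below are made up purely for illustration.

    # (a) Symbolic: "chair" is just token #5099; its meaning lives entirely
    #     in how other rules use the token.
    SYMBOLS = {"chair": 5099, "table": 5100, "sit": 6001}
    facts = [(SYMBOLS["chair"], "used_for", SYMBOLS["sit"])]

    # (b) Sub-symbolic: "chair" is a pattern of activations over many units,
    #     grounded (in principle) in sensory data; similar objects get
    #     similar vectors.
    chair_vector = [0.9, 0.1, 0.8, 0.0, 0.7]   # made-up feature activations
    stool_vector = [0.8, 0.2, 0.7, 0.1, 0.6]

    def similarity(a, b):
        """Dot product as a crude measure of how alike two concepts are."""
        return sum(x * y for x, y in zip(a, b))

    print(similarity(chair_vector, stool_vector))   # high: a stool is chair-like

Whether the second kind of representation can be plugged neatly into the first is exactly the open question.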



Open Issues / Why AI is hard

A further reason that I propose:
  6. Societies. Culture. - Maybe you need a whole interacting society and culture to be intelligent. The idea is that we aren't that intelligent individually - most of our intelligence resides in our culture, in other people, in our support systems, in the way our world is structured for us, in our books, in our machines.

    We didn't arise alone in a lab like Cog. There were always millions of us.

    We didn't arise on a planet that was full. There was no other intelligent life. We had "peace and quiet" for 2 million years, during which we could slowly evolve our own languages and cultures. And it was a process of trial and error. Most societies fail - see Jared Diamond, Guns, Germs and Steel: A Short History of Everybody for the Last 13,000 Years.

    How will AIs do this? Their cultures will be under massive pressure from the outside world, right from the start. Their societies will fail much more easily.

AI is hard in theory, and will be even harder to implement in practice. One of the reasons why is that the planet is full.


The planet is full of humans

Cognitive Science - the understanding of our minds, and the minds of other animals, with fully detailed causal models



Mid-3rd millennium - We will become AIs (if we want, in a free society)





Early 21st cent. - Animal-level AI



Early 21st cent. - 2 types of AI systems



Early 21st cent. - Animal-like robots

Tricks so that AI is not needed:


Cars. Robots in the home? Much depends on the price and robustness of the hardware.

Does this begin to answer the unsolved problems? Do we get a robotic chimpanzee by 2100? Does this then begin to converge with the rest of Cognitive Science?

It will be an interesting century.



Conclusion - Intelligence flows on

Warwick's central theme is "What happens when intelligence is cracked / liberated? What will we do with it?"

He imagines a totalitarian society, but never explains where this totalitarian society is supposed to come from. Do we simply give up on human freedom and democracy because a new technology exists?



