For decades, AI researchers have constructed physical and simulated environments ("Worlds"), designed physical and simulated robots and agents ("Bodies") to interact with them, and designed, learnt or evolved behaviour-producing control systems ("Minds") to drive the Bodies. These Minds are typically used once, in a set of experiments, and then discarded. They can (in theory) be reconstructed from the details provided in the scientific papers, but in practice few ever see the light of day again. The papers, for good scientific reasons, concentrate on extracting the general principles behind the Minds, rather than on ensuring that the Minds' actual existence continues.
In nature, by contrast, minds are produced by the million, and get to live in millions of bodies throughout the world for millions of years. The contrast with AI is dramatic: the testing and development of AI minds is confined to a single laboratory, and often to only a single "body", for only a year or two (during 90 percent of which the body is actually inactive). This paper suggests that the Internet provides a way for the field of AI to develop its own rich, world-wide, always-on ecosystem to parallel the natural one.
First we ask what the problems are with the existing situation.
The starting point for our motivation is the argument that the AI project is too big and complex for any single laboratory to do it all. Many authors have argued along these lines, and a number of different approaches to this issue have evolved.
The criticism of the classical, symbolic-AI decomposition of the mind by function is that the "whole mind" never actually gets built, and as a result many issues, such as the large-scale architecture of a complex multi-goal creature, and how it resolves its many internal conflicts, never get addressed. See, e.g., the criticisms of [Brooks, 1986, Brooks, 1991]. For a symbolic-AI call to build whole systems see [Nilsson, 1995].
In any case, this paper takes no position on the merits of pure evolution versus design or learning. The methods proposed could be driven by evolution if so desired. For the moment we simply note that the evolutionary approach may also share with the animat approach an implicit assumption that one lab can do it all.
It seems to me that all these approaches still avoid the basic question: if the AI project is too big for any single laboratory to do it all, then, as we scale up, how will we link the work of multiple laboratories?
If the work of AI is to be distributed, with different laboratories specialising and some scheme combining their work, many problems immediately arise. Who decides who works on which piece? Will we spend all our time trying to define protocols to talk to other pieces? What if people can't agree on how to divide up the work, or indeed on what the pieces are? [Brooks, 1991] Will this scheme force everyone to use a common programming language? Will it enforce a common AI methodology and exclude others?
Whatever scheme we invent cannot enforce any of these things, nor can it define a particular way of dividing up the work of AI. Rather it must be a framework for exploring different ways of dividing up the work, and different ways of combining heterogeneous and overlapping modules from diverse sources. (By "overlapping", we mean that the modules have not agreed on who does what. There are some inputs for which more than one module will attempt to provide an answer.)
One thing we will need is a model of mind that can cope with multiple overlapping and conflicting minds in the same body, and that assumes the existence of multiple unused minds at all times.
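To make "overlapping" concrete, here is a minimal sketch of one possible arbitration scheme (the mind names, states and strengths are all hypothetical, and the winner-takes-all rule is just one way the conflict could be resolved): several minds may propose actions for the same state, the body obeys the strongest proposal, and minds that never win simply sit in the body unused.

```python
from typing import Callable, List, Optional, Tuple

# A "mind" maps a state to (action, strength), or None when it has no opinion.
Mind = Callable[[str], Optional[Tuple[str, float]]]

def forage(state: str) -> Optional[Tuple[str, float]]:
    if state in ("hungry", "predator-near"):       # overlaps with flee below
        return ("seek-food", 0.8 if state == "hungry" else 0.1)
    return None

def flee(state: str) -> Optional[Tuple[str, float]]:
    if state == "predator-near":
        return ("run", 0.9)
    return None                                    # silent in other states

def arbitrate(minds: List[Mind], state: str) -> Optional[str]:
    """Collect every proposal for this state and obey the strongest.
    Minds that never win remain installed, unused, at no cost."""
    proposals = [p for mind in minds if (p := mind(state)) is not None]
    return max(proposals, key=lambda p: p[1])[0] if proposals else None

print(arbitrate([forage, flee], "hungry"))          # -> seek-food
print(arbitrate([forage, flee], "predator-near"))   # -> run
```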
Up until now, the above problems with schemes for sharing work have tended to scare researchers off, and instead everyone invents their own system. To test a new learning algorithm, say, a researcher will first design their own simulated world, or their own robotic testworld, or even their own robot. This takes a huge amount of time and work, and the problem world must be debugged and shown to be a hard problem, all before the novel algorithm can even be tested. How many animat researchers have spent (perhaps wasted?) their time designing their own gridworlds over the years? Similarly, [Minsky, 1996] has criticised the recent "new AI" focus on using robots, saying researchers spend all their time learning robotics and get to do very little AI.
There are other problems: designing your own world means your algorithm is not tested in the same world as previous algorithms. Machine Learning (ML) in particular has recognised this, and has attempted to define testbeds in which multiple algorithms can be tested and compared. See the datasets at the UCI ML Repository or MLnet [mlnet.org]. [Bryson et al., 2000] suggest setting up a website with similar standard tests for autonomous agents and adaptive behaviour.
There have been some attempts to re-use test agent minds [Sloman and Logan, 1999, Sutton and Santamaria, undated] and test worlds [Daniels, 1999], but they suffer from some of the same problems that have held back re-use [Humphrys, 1991] in the mainstream computer world - complex installation, and incompatibility problems with libraries, versions, languages and operating systems. Software is often easier to re-write than to re-use [Spolsky, 2000].
This paper suggests that for AI re-use we look at perhaps the most successful recent model of re-use (often not recognised as such) - the use of other people's documents and programs off remote Web servers. The Web has now made it commonplace for people to refer to data at a remote server rather than providing a copy of the data themselves. It has also made it commonplace for people to refer to the results of a program run on a remote server, rather than installing the program themselves. We will suggest a similar model, where the agent test world essentially stays at the remote server and is used from there, instead of being copied.
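As a concrete sketch of this model (the URL, the query parameters and the reply format below are invented for illustration; no actual protocol is being defined here), the mind runs locally but every step of the world happens on the remote server:

```python
import urllib.parse
import urllib.request

WORLD_URL = "http://example.org/worlds/gridworld"   # hypothetical world server

def world_step(session: str, action: str) -> str:
    """Send one action to the remote world; get back the resulting state."""
    query = urllib.parse.urlencode({"session": session, "action": action})
    with urllib.request.urlopen(f"{WORLD_URL}?{query}") as reply:
        return reply.read().decode("utf-8")

def my_mind(state: str) -> str:
    return "north"             # placeholder policy; any local or remote mind

# The run loop: the client never holds a copy of the world, only a session ID.
state = world_step("sess-42", "start")
while state != "terminated":
    state = world_step("sess-42", my_mind(state))
```

The client needs nothing installed beyond an HTTP library; the world's code, data and updates all stay with its author's server.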
This paper will not just address the problem of re-using other people's worlds, though. The fact that it is difficult to re-use other people's agent minds has also held back experimentation with complex, large-scale architectures.
Even after the time and effort have been invested in inventing a robotic or agent testbed, few people get to use it. Only MIT students and researchers get to run algorithms on Cog. This is at most a couple of dozen people. By nature's standards, Cog does not get to "live" much [Humphrys, 1997a]. There is only one instantiation of it, and - like all similar AI projects - it will in practice be inactive 90 percent of the time. The world's AI labs are full of robots that have not been activated for years, awaiting funding (and time) for the next researcher to set up some brief experiment with them. Similarly, the computers of the world's AI researchers hold a whole history of unused virtual worlds and agent minds, stored in offline directories, awaiting a reactivation that may never come.
Putting these agents and their worlds on the network could maximise their utilisation and testing, in the same way that the original purpose of the Arpanet/Internet was to maximise the utilisation of rare and expensive remote federally-funded mainframes. While their creators have moved on and are busy with new projects, the agents and worlds will survive and will be linked to people who have the time and inclination to do further work with them. Experiments will be done with them that their creators would never have thought of.
Clearly, it will be easier to provide remote access to a software-only world (you can give every client their own copy) than to a robotic test world - the robot owners will want to control who (if anyone) uses their robots remotely. This will be addressed in detail below. For software-only worlds it should not be hard to set up a server that keeps the world in existence long after its creator has departed. For example, animat researchers would be very interested in a server for a complex animal behaviour world, such as Tyrrell's world [Tyrrell, 1993].
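For a software-only world, "giving every client their own copy" could be as simple as the server keeping one world instance per session, as in this sketch (the World class and its toy dynamics are placeholders, not Tyrrell's world):

```python
import uuid
from typing import Dict, Optional, Tuple

class World:
    """Toy stand-in for a software-only world such as an animat gridworld."""
    def __init__(self):
        self.energy = 10

    def step(self, action: str) -> str:
        self.energy -= 1
        return f"energy={self.energy}"

sessions: Dict[str, World] = {}

def handle_request(session: Optional[str], action: str) -> Tuple[str, str]:
    """Server-side handler: give each new client a fresh, private copy of
    the world, then route that client's actions to its own instance."""
    if session is None or session not in sessions:
        session = uuid.uuid4().hex
        sessions[session] = World()
    return session, sessions[session].step(action)

sid1, state1 = handle_request(None, "start")   # client 1 gets a private world
sid2, state2 = handle_request(None, "start")   # client 2's world is separate
```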
To complete the criticism of the current situation, there is definitely something in the evolutionary criticism (referred to above) that advanced behaviour control systems may be too complex to be understood.
As AI scales up, it is envisaged that minds will take on a level of complexity analogous to, say, a national economy, where each component may be understood by someone, but the dynamics of the entire system are too complex to be grasped by any one individual. In the system we will propose, of a vast network of servers calling other servers, each individual link in the network will make sense to the person who set it up, but there is no need for the system as a whole to be grasped by any single individual.
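To illustrate, here is a sketch of a single link in such a network (the server name and query format are hypothetical): this mind server answers the queries its author understood, and passes everything else to another server, whose own onward delegations it does not need to know about.

```python
import urllib.parse
import urllib.request

UPSTREAM_MIND = "http://example.org/minds/navigator"   # hypothetical server

def act(state: str) -> str:
    """One node in a network of mind servers: answer what we understand,
    delegate the rest to a server somebody else set up and understands."""
    if state.startswith("hungry"):
        return "seek-food"                # the part this server's author knew
    query = urllib.parse.urlencode({"state": state})
    with urllib.request.urlopen(f"{UPSTREAM_MIND}?{query}") as reply:
        return reply.read().decode("utf-8")
```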