School of Computing. Dublin City University.
The proposed scheme to address these issues is called the "World-Wide-Mind" (WWM). The name has a number of meanings, as will be discussed. In this scheme, it is proposed that researchers construct their agent minds and their agent worlds as servers on the Internet.
In the basic scheme, there are the following types of server:
- A World server, which can be queried for the current state of the world x, and can be sent an action a to execute in the world.
- A Mind server, which, given a state x, produces an action a.
- An Action Selection (AS) server, which resolves the competition between multiple Mind servers.
The client talks to the AS server to get an action, and the AS server talks to its list of Minds, using whatever queries it needs to resolve competition (perhaps repeated queries). Because it can produce an action a given a particular input state x, the AS server is in fact a type of Mind server itself. This is why we call it a "MindAS" server.
But it may be a very different thing from the other Mind servers. The other servers actually try to solve some problem or pursue some goal, whereas the MindAS server may be simply a generic competition-resolution device that can be applied to any collection of Minds without needing any actual understanding of what the Minds are doing. We have 2 possibilities:
It is imagined that each of these types of server may be customised with a number of parameters or arguments at startup, so that a client may ask for slightly different versions of what is essentially the same server. We have already seen possible arguments for a MindAS server (the list of Minds). Other possible server arguments will be discussed later.
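To make this concrete, here is a minimal sketch in Python (the class and method names, such as get_action, are illustrative assumptions, not part of any protocol defined here): a MindAS server satisfies the same state-to-action interface as any other Mind server, while taking its list of Minds as a startup argument.

```python
import random

class MindServer:
    """The common interface: given an input state x, return an action a."""
    def get_action(self, x):
        raise NotImplementedError

class MindAS(MindServer):
    """A generic competition-resolution device. It is itself a type of
    Mind server (it produces an action a given a state x), but it works
    by querying the list of Minds it was given as a startup argument."""
    def __init__(self, minds):
        self.minds = minds   # startup parameter: the list of Mind servers

    def get_action(self, x):
        # Crude resolution for illustration only: query every Mind and
        # pick one suggestion at random. Real schemes are discussed later.
        suggestions = [m.get_action(x) for m in self.minds]
        return random.choice(suggestions)
```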
By allowing Mind servers to call each other, we can incrementally build up more and more complex hierarchies, networks or societies of mind. We will call any collection of more than one Mind server acting together a Society. A Society is built up in stages. At each stage, there is a single Mind server that serves as the interface to the whole Society:
and so on, recursively. At each stage, there is a single Mind server which serves as the interface to the Society, to which we send the state, and receive back an action. For example, in the simplest type of Society, the client talks to a single Mind and World server, who talk to no one else:
- client talks to:
  - Mind
  - World
In a more complex society, the Mind server that the client talks to is itself talking to other Mind servers:
- client talks to:
  - MindM, which talks to:
    - Mind
    - Mind
  - World
or:
- client talks to:
  - MindAS, which talks to:
    - Mind
    - Mind
    - Mind
  - World
and so on, with an endless number of combinations, e.g.:
- client talks to:
  - MindM, which talks to:
    - Mind
    - MindM, which talks to:
      - Mind
      - MindAS, which talks to:
        - Mind
        - MindM, which talks to:
          - Mind
          - Mind
  - WorldW, which talks to:
    - World
(We will discuss "WorldW" servers later.)
Each Society has precisely one Mind server at the top level. This makes sense because the competition must get resolved somewhere, so that an action is produced. And the client can't resolve it. So it must be resolved by the time it gets to the client.
There are the following types of users of this system:
This is like constructing a new Mind server: When a client constructs a new combination of Mind servers like this, it looks very much as if a new Mind server has been created, with totally new behaviour. But in fact it has been done by supplying new parameters to an existing MindAS server. Admittedly some of these parameters (the addresses of other Mind servers) may not have existed when the MindAS server was written, and the choice of the combination may be an extremely creative act.
In the above, the client does not necessarily need to know anything about how the servers work, or even anything much about AI. The client just observes that certain combinations work very well, and others don't. The role that large numbers of clients acting over long periods may play in experimentation and artificial selection may be very important, as will be discussed in detail later.
As we have seen above, just specifying an existing server with new parameters can be very much like creating a totally new Mind server with new behaviour. The server author, however, will be writing a genuinely new server. For example:
That is the basic WWM idea. How this scheme will work, and what it means, will now be discussed in detail.
First, we note that, among many other things, the scheme is trying to define a protocol whereby researchers can use each other's agent worlds. Not by installing the world at your own site, but by leaving the world running on a server at its author's site, and using it remotely. One issue we will have to solve is how the client sees what is going on in the remote world where he is running his selected mind combination.
Perhaps the World server provides a URL where the client can see in real time what is going on, if he wants to. If this URL is not connected to, no user interface is displayed.
The very definition of this system may seem strange to the reader. We are taking for granted that simply being able to match a single Mind server with a World server would be of limited use. Instead we are building a system where clients can put multiple minds in the same body. The problem with these multiple minds is that none of them are aware of the others' existence, and each is designed to control the body on its own. Instead of allowing them to do that, we simply take their wishes into account when running the Action Selection server. For an introduction to these kinds of models see [Humphrys, 1997]. The Adaptive Behavior audience will be long familiar with these kinds of multiple-mind models. Other readers may find them very wasteful and unusual.
Yet, we argue, if 3rd parties are to construct societies of mind using others' components, it will be impossible to prevent massive overlap in function and conflict between these components. The realistic approach is to accept overlap as inevitable and work on conflict-resolution methods (the AS servers). The AS server will try to allow the expression at different times of most or all of the goals pursued by each mind. With some totally-conflicting goals this may not be possible, so the human clients will finish the job, by looking for specific combinations of minds whose conflicts, when resolved by the AS algorithm, result in a useful overall multi-goal creature.
The alternative to a multiple-conflicting-minds model would be to get the components to agree on their function in advance, which is totally impractical.
This is not to say that a group of Mind servers cannot agree to divide up function in advance, and the agent mind can consist of a single headquarters MindM server that knows about this division, and that calls each Mind server, carefully switching control from Mind to Mind. This is possible, but may be very rare if Societies are to be constructed by large numbers of researchers. The model cannot assume in general that minds make any allowance for the existence of other minds. Users will construct combinations of new minds and old minds, where the new minds did not even exist when the old minds were being written. For instance, even with a perfect division of function like the collection we just described, no sooner will it be put online than users will start constructing societies involving it and some new Minds. And then we are back to an Action Selection problem again.
In fact, as has been discussed elsewhere - see [PhD, §18.3], and also the "brittleness" criticism of classic AI [Holland, 1986] - wastefulness and multiple unexpressed ideas are generally a sign of intelligence rather than the opposite. When it comes to intelligence, parsimony may be the enemy, rather than wastefulness.
One question then is how complex does the top-level client algorithm have to be? How many servers does it have to talk to? For instance, when there is a collection of Mind servers and an AS server, the client could talk to the Mind servers itself, gather the results and repeatedly present them to the AS server for resolution.
However, when we consider the vast number of possible algorithms for resolving competition, some involving multiple queries of each Mind server with different suggestions, it seems more logical for the client to pass the list of Mind servers to the AS server at startup, and then let the AS server query them itself, i.e. let the complexity of the competition-resolution algorithm be buried in the AS server rather than in the client. In our model, the client manages a single Mind server and a single World server. Which leads to the next question:
Perhaps the client should pass the World server address as an argument to the top-level Mind server, and let the Mind talk to the World directly.
One reason we do not do this is that - unlike the situation with the AS server talking to the Mind servers - the interaction between the Mind server and World server is not bounded - the run of this Mind in this World may go on indefinitely. It seems better to have this logic in the client rather than in the server. The servers respond to short queries, and the client is responsible for how many such queries there are, and what the overall control algorithm is, e.g. implementing time-outs and repeated queries if servers do not respond. Or imagine a client where a user is watching the user interface of an infinite run, and deciding, by clicking a button, when to issue the "End run" command.
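As a sketch, such a client-driven control loop might look like the following, where world.get_state, world.execute and mind.get_action are hypothetical wrappers around the network queries (they are not protocol commands), and the time-out logic lives in the client, not the servers:

```python
MAX_WAIT = 5.0   # seconds: client-side time-out on any one server query

def run(mind, world, end_run_requested):
    """Sketch of the client's control algorithm for one run. The client
    decides how many queries there are and when the run ends."""
    while not end_run_requested():       # e.g. a user clicking "End run"
        x = world.get_state()            # query the World server
        try:
            a = mind.get_action(x, timeout=MAX_WAIT)
        except TimeoutError:
            continue                     # server slow: re-check state, retry
        world.execute(a)                 # send the action for execution
```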
Another reason we may prefer to define the server queries, and then let a separate algorithm control how many queries take place, is that we may like two Minds to communicate with each other in a conversation, in which case each Mind serves as the World for the other. Instead of redefining the Mind server so it can respond to questions such as "Get state", the client algorithm manages this, querying the Mind server for an action, and then sending that action as the state for another Mind server. Imagine two chat programs connected together. Or a "tit-for-tat" online competition [Axelrod and Hamilton, 1981, Axelrod, 1984].
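A sketch of such a client algorithm for a conversation, under the same assumed get_action wrapper: the action of each Mind is sent as the state of the other, so each Mind serves as the World for the other.

```python
def converse(mind_a, mind_b, opening, turns=20):
    """Sketch: wire two Mind servers together in a conversation."""
    x = opening
    for _ in range(turns):
        a = mind_a.get_action(x)   # A's action...
        x = mind_b.get_action(a)   # ...becomes B's state, and back again
```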
Crucial to the whole scheme of using minds from diverse sources is that we do not impose any restrictions on the type of Mind servers that can be written. Minds can be written in any language and according to any methodology. Minds do not have to explain themselves or how they work. Minds are hidden binaries on remote servers. Minds cannot know about each other's goals or insides. Or, to be precise, some minds may know how other Mind servers work (and act accordingly), but we cannot in general demand this.
If Mind servers do not understand each other, then they can only communicate by low-bandwidth means. That is, there is a limit to how much information they can usefully communicate to each other or to the AS server. If, as has already been argued, it will be impossible to prevent large-scale conflicts between Minds, it is the AS server that has to resolve the competition between these strangers who speak no common language. The central question is: What information does the AS server need from the Mind servers to resolve the competition? For example, if it just gets a simple list of their suggested actions ai, it seems it could do little more than just pick the most popular one (if any appears twice). If none appears twice, it seems it could only pick a random one. Perhaps the AS server rotates around the Minds, allowing each free rein for a time in rotation. Any such time-based Action Selection scheme will be very crude [PhD, §15.1].
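A sketch of this minimal scheme, assuming the AS server gets nothing but the list of suggested actions:

```python
import random
from collections import Counter

def naive_resolve(suggestions):
    """Sketch: with only a list of suggested actions a_i, the AS server
    can do little more than pick the most popular, else pick at random."""
    action, count = Counter(suggestions).most_common(1)[0]
    if count >= 2:
        return action                    # some action appears twice or more
    return random.choice(suggestions)    # no agreement: random choice
```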
For any more sophisticated Action Selection than the above, it seems that the Mind server needs to provide more information. We will first consider schemes where the servers pass simple numeric quantities around to resolve the competition, yet still do not have to understand each other's goals.
For example, Mind server i may tell the AS server what action ai it wants to take, plus a weight Wi expressing how important winning the competition on this timestep is to it. This can be seen as a "payment" in a common currency to try to win the Action Selection (see the "Economy of Mind" ideas of [Baum, 1996]). Or perhaps the AS server, in a bid to resolve conflict, could suggest to all the Mind servers a compromise action ak, and each Mind server could reply with another weight Wi indicating how much it would dislike taking that action (assuming it is not the action it was originally suggesting). We may define the following weights:
The usage of Q and W comes from [Humphrys, 1997]. High Q does not imply high W. Q could be high and yet W = 0. For the differences between Q and W see [PhD, §5.5, §6.1, §16.2]. Given a set of Q-values, there are many possible schemes for deriving W-values (discussed throughout [Humphrys, 1997]). The WWM server queries defined in this paper will allow all of these numeric schemes to be implemented.
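As one illustration, here is a sketch of just one of the many possible W-based schemes (get_action_and_weight is a hypothetical query wrapper, not a defined command): each Mind returns its suggested action ai together with its weight Wi, and the AS server lets the highest "payment" win.

```python
def resolve_by_weight(minds, x):
    """Sketch of one simple W-based scheme: each Mind i returns its
    suggested action a_i plus a weight W_i (its "payment" to win the
    competition on this timestep); the highest bidder wins."""
    bids = [m.get_action_and_weight(x) for m in minds]   # [(a_i, W_i), ...]
    action, weight = max(bids, key=lambda bid: bid[1])
    return action
```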
Higher-bandwidth communications than this would seem difficult if we are not to impose some structure on what is inside each Mind server. Hence I will begin the WWM implementation with a sub-symbolic, numeric Society of Mind, rather than a symbolic one.
Competition resolution, however, does not all have to be done by AS servers looking at Q-values and W-values. Much of the work of combining multiple minds will also be done by hand-coded MindM servers, which state explicitly who wins in which state:
"As long as the input state is in the region X do what Mind server no. 4006 wants to do,
otherwise, if the input state is in the region Y do what Mind 33000 wants to do,
otherwise (for all other states) give me a "strong" Mind 8000 (i.e. it has lots of currency to spend) and a "weak" Mind 11005, and let them compete under AS server 300."
In this case the knowledge that Mind server no. 4006 should always "win" the competition when the input state is in the region X is something that the server author has hard-coded. The Mind server did not have to convince the other competing minds (or the AS server) of this fact. In general, a Mind server can implement any general-purpose algorithm that interprets the incoming state, and can call another Mind server at any point in the algorithm. For a discussion of such "Nested" systems see [PhD, §18]. One issue in nested servers will be the possibility of a circular call, leading to an infinite loop. It is not proposed to build anything into the protocol to prevent this. It is up to the server authors to prevent it. [PhD, §18.1] discusses infinite loops in a society of mind.
To simplify, the above algorithm would be written as a MindM server whose "New run" command would be:
send "New run" command to M1 send "New run" command to M2 send "New run" command to M3 with arguments (strong M4, weak M5)
where the server M3 is a MindAS server that can be passed its list of servers as arguments at startup. Then the MindM server's "Get action" command (with argument x) would be:
if x in region X1
    send "Get action" command to M1 with argument x
else if x in region X2
    send "Get action" command to M2 with argument x
else
    send "Get action" command to M3 with argument x
M3's "Get action" command sends a "Get action" command with argument x to both M4 and M5, and then makes a decision based on what they return, perhaps querying them further before deciding.
We have so far avoided the question of what is the exact data structure that is being passed back and forth as the state or action. It seems that this definition will be different in different domains. This scheme proposes that we allow different definitions to co-exist. Each server will explain the format of the state and action they generate and expect (most of the time this will just involve linking to another server's definition).
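For example (a purely hypothetical format, for illustration only), a noughts-and-crosses World server might document its definitions like this:

```python
# Hypothetical documented format for one domain: a noughts-and-crosses
# World server might define its state x and action a as follows.
State = str    # 9 characters, row-major board, e.g. "XO..X...O"
Action = int   # 0..8: the index of the square to mark next
```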
The above model simplified things by having a joint World-Body server. Why not separate further into World servers and Body servers?
The fact is that a World server is a Body server. The world only "exists" in so far as it is perceived by the senses. Imagine if we did split the model into World servers and Body servers. The World server would respond to requests for the state of the world with an output x and would be sent inputs a for execution. The Body server would do the same thing. It would respond to requests for the state of the world as perceived by the senses with an output x and would be sent inputs a for execution. From the point of view of the client, the two servers are the same type of thing. It's a matter of style which they advertise themselves as.
But there is still a problem with this model of joint World-Body servers. It seems to indicate that only the World author can write Bodies - which goes against the philosophy of the WWM, where 3rd parties can write add-ons without permission. The question is: How do we write new Bodies for the same World (if we are not the World author)?
The answer is that it depends what we mean by writing "new Bodies" for the same World. There is a limit to what we can do with someone else's World without writing our own. The world may have a fixed number of actors - say, one (e.g. a robotic world) - and what we are meant to do is provide the mind for the single actor. We cannot will another actor into existence, so all we can do is give the single actor a different type of Body than the Body that the World server has given it by default.
And even then, we cannot give it senses that the World has not already given it. All we can do is write Bodies that sense a sub-space of the original state x provided by the World.
So when inventing new Bodies, we write them as new World-Body servers, which are wrappers around the old World-Body server. We will call such a World server a WorldW server to reflect the fact that it communicates with another server (which may itself communicate in a chain of World servers) unknown to the client.
The client only sees the senses as presented by the wrapper WorldW server, and sends its actions to the WorldW server too. The WorldW server talks to the "real" World server. Presumably, to the real World server, the WorldW server seems like just another client, requesting x and reporting a. And to the client, the WorldW server looks like just another World server. The senses that the WorldW server presents to the client are strictly a subset of the raw senses presented by the original World server.
Working with only a sub-space of the original state x may sound restrictive, but remember that in general, a "World-Body" server just provides a stream of output x when queried about the state of the world. We have not really defined what this output should be, and in fact people may write World servers whose outputs x are deliberately designed not to be used as the raw senses of any agent, but rather are designed to be filtered by many different species of 3rd-party Body, in wrapper WorldW servers. Note that even having a long chain of WorldW servers does not break our basic model that the client deals with a single top-level Mind server and a single top-level World server.
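A sketch of such a wrapper, again using the assumed get_state/execute wrappers around the queries: the WorldW server filters the raw state down to the senses of the new Body, and passes actions through.

```python
class WorldW:
    """Sketch of a wrapper World-Body server. To the client it looks
    like just another World server; to the real World server it looks
    like just another client, requesting x and reporting a."""
    def __init__(self, real_world, sense_filter):
        self.world = real_world           # the wrapped World-Body server
        self.sense_filter = sense_filter  # e.g. lambda x: x[:40]

    def get_state(self):
        raw_x = self.world.get_state()    # query the real World like a client
        return self.sense_filter(raw_x)   # present a sub-space of the senses

    def execute(self, a):
        self.world.execute(a)             # pass the action through
```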
Having accepted that a 3rd party cannot necessarily control the number of actors in a World, how would the World server itself deal with multiple actors?
This would not require a World server and multiple Body servers. The senses returned by each are quite different, so this demands different World servers. The logical implementation would be as 2 different World-Body servers, both located at the original site. They will detect each other's existence in the sense that the other robot will appear in the state of the world x as detected through their senses.
Below we will discuss the special issues with robotics, where each World-Body server can only have one client in control of it at a time. Here with the 2 robots, if a client wanted to do experiments in multi-agent cooperation, they would need to gain exclusive control of both World-Body servers at the same time.
But again this would not require a World server and multiple Body servers. It can be done with a single World-Body server, with multiple clients connected to it at the same time, each sharing an instance of the world with the other clients, rather than getting its own instance. The state x that the server returns is customised for each client.
In conclusion, our joint World-Body model is no (or at least, little) restriction for a 3rd party:
If we are to allow 3rd parties to place Minds in any World-Body they like, and run the result, we must accept that the Mind server may not be able to make sense of the chosen World. For instance, the format of the incoming state x may be different to the format the Mind server was expecting. In fact, far from being a special case, this will probably be true of 99 percent of all combinations that a client could choose.
The basic scheme is that we allow this. The 3rd parties need to be given total freedom in their experiments in artificial selection. Combinations that don't work are not a problem. The 3rd parties will only of course advertise the combinations that work. World authors will explain the structure of the state x that they generate, and Mind authors will document the structure of the x they expect, and further servers and clients will act accordingly.
As mentioned, this model could even be used to control real physical robots or other real hardware. There are already a number of robots that can be controlled over the Internet. For an introductory list see the [Yahoo list of robots online]. "Internet tele-robotics" raises some special issues:
A robotic world server, however, demands much more of a commitment. You need to maintain the hardware, recharge batteries, supervise use of it, repair damaged parts of the robot or things in the world, tidy up objects in the world that have been scattered or put out of place, perhaps only allow client access when someone is watching, and so on. You need to fund it and set aside a room for it. As a result, most of the Internet-controlled robots so far have run for a limited time only. Longer projects will presumably be possible with payment.
These issues have already been encountered in the first experimental Internet robots. For a discussion of the issues see [Taylor and Dalton, 1997]. For example, [Stein, 1998] allows remote control of the robot until the client gives it up, or until a timeout has passed (for clients that never disconnect). [Paulos and Canny, 1996] operate in a special type of problem space where each action represents the completion of an entire goal, and so actions of different clients can be interleaved.
Ken Goldberg and colleagues have operated a number of Internet telerobotics projects. In the first "tele-excavation" project, clients queued for access to the robot. In the robotic tele-garden [Goldberg et al., 1996] users could submit discrete requests at any time, which were executed later by the robot according to its own scheduling algorithm. [Simmons et al., 1997] do something similar. The robotic Ouija board [Goldberg et al., 2000] is a special type of problem where the actions of multiple clients can be aggregated. We will define the WWM server commands in this paper with a view to being able to implement all of these systems.
The nature of time in this system is interesting. The agent senses a state of the world x and then talks to perhaps a large number of servers in order to find an action a to execute. Some of these servers may be down. Others may take a long time to respond. The question is: What happens to the world while the agent is waiting to act? It seems there are 2 possibilities:
We consider which of these would be used in each type of world:
On the other hand, we could say it is not the World server's problem. We could say it's up to the client to implement time-outs (and re-check the state x) if it wants to behave adaptively. The advantage of this would be to maintain a simple client-driven model. All servers - the World server, Mind servers and AS servers - could ignore time. These servers would simply respond to requests emanating from clients. The original client controls time and so can set up a system of time-outs.
In a multi-level system, there may be communication going on that the client does not know about. For example, the "Mind" server that the client is waiting for may actually be a MindAS server resolving competition between multiple Mind servers. In this case, the MindAS server might implement its own time-out. If it hasn't got a reply from all of the Mind servers, it just makes a decision based on whatever suggested actions have come in so far. Alternatively, the MindAS server may simply wait until it has had a response from everyone, and if that takes too long then the client that is waiting for the MindAS server will time-out.
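A sketch of the first strategy, assuming the same hypothetical get_action wrapper: the MindAS server queries its Minds in parallel and decides from whatever suggestions arrive before its own deadline.

```python
import random
from concurrent.futures import ThreadPoolExecutor, wait

DEADLINE = 2.0   # seconds the MindAS waits for its Minds

def resolve_with_timeout(minds, x):
    """Sketch: a MindAS server implementing its own time-out."""
    pool = ThreadPoolExecutor(max_workers=len(minds))
    futures = [pool.submit(m.get_action, x) for m in minds]
    done, _ = wait(futures, timeout=DEADLINE)
    pool.shutdown(wait=False, cancel_futures=True)   # abandon stragglers
    suggestions = [f.result() for f in done if f.exception() is None]
    # decide based on whatever suggested actions have come in so far
    return random.choice(suggestions) if suggestions else None
```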
It seems that a MindM server cannot implement a time-out - it simply has to wait for the Mind server it is calling to return.
In a purely virtual world, the agent could live in a strange, "arrested-time" world. It interacts with the world, the world server is down, it waits days for the next "timestep" to see the next state of the world. Similarly, the world waits days to get the agent's action so it can advance the clock to the next state. Waiting is not a problem for either. A simulated world can run at any speed. Time speeds up and slows down, depending on the servers.
The name "The World-Wide-Mind" makes a number of important points:
For a real society of mind or network mind, we need a network of AI programs rather than a network of pages and links. We may actually be able to implement this on the existing network of Web servers, running over HTTP, but it must be designed as such. It will not simply "emerge" from schemes for linking data on the Web.
But a WWM system is not necessarily interested in learning from or interacting with the current Web or its users. We are embedding the WWM in the network not so much because of the prior existence of the Web (though we may make use of that), but mainly because of the future potential of other WWM servers.
We envisage of course that progress in AI will continue to be driven mainly by professional researchers in AI laboratories. But if these researchers write their AI minds and worlds as servers, the AI project could be massively decentralised across different AI labs. As well as allowing AI labs to share work, and specialise, there would also be a world-wide experiment always on, with constant re-combination and testing (in public) of everything that has been put online so far.
First of all, AI researchers can more easily specialise. Minds are separate from Worlds, so that researchers can reuse each other's Worlds. Anyone doing, say, AI experiments in learning, can, by writing to this protocol, use as a test bed someone else's World server, and does not have to write his own. He can concentrate purely on devising new learning algorithms. Re-using other people's Worlds [Bryson et al., 2000] will probably be the most common use initially. Learning how to reuse other people's Minds may take time.
As discussed above, having to invent your own test environment to test your new algorithm is not only a lot of extra work - it also makes it harder to objectively evaluate the new algorithm. Even if you take care to do a run of the old algorithms in your new test world, the test world might (however unintentionally) be designed to illustrate the good points of the new algorithm. Clearly, being able to compare different algorithms on the same pre-built World server goes a long way towards helping AI run objective comparisons of algorithms and models of mind.
But the WWM goes much further than that. By its emphasis on 3rd party experimentation, algorithms will be subjected to constant examination by populations of testers with no vested interest in any outcome. 3rd parties will ensure that the results can be repeated again and again. They will compare many more different combinations of servers under controlled conditions. (Currently we are still assuming that 3rd parties will be other AI researchers. Whether the general public will carry out useful tests will be discussed shortly.)
The whole question of how to prove one autonomous agent architecture is better than another has become an important issue recently. [Bryson, 2000] points out that, essentially, researchers do not use each other's architectures: "There have been complaints .. about the over-generation of architectures" and (among behavior-based models): "no single architecture is used by even 10 percent of .. researchers." Most architectures have performance statistics to support them, but these statistics have had little success in convincing rival researchers to abandon their own favourite models. In particular, if the tester invented the test world to show off his new algorithm, there are simply too many uncontrolled variables for the test results to be totally conclusive. Test results can only become conclusive when there is repeated objective 3rd party evaluation. Currently 3rd parties have to go to great effort to recreate the test situation, and this rarely happens. For instance, Maes' model [Maes, 1989, Maes, 1989a] waited years for Tyrrell to re-implement it in a performance comparison [Tyrrell, 1993]. Tyrrell does not get impressive performance for it, but reading his thesis one might argue there are still uncontrolled variables in Tyrrell's implementation. Re-implementing Tyrrell's world is itself a difficult job [Bryson, 2000a], which only adds to the inconclusiveness of his results.
In any branch of AI, the existence of objective tests that cannot be argued with tends to provide a major impetus to research. This has been one of the main reasons for the popularity of rule-based games in AI [Pell, 1993]. Robotic soccer has also taken off in recent years for this reason. Noda's soccer server [Noda et al., 1998] is probably the closest in spirit to the WWM, though the user must still download and install the agent world.
On the WWM, mind and world must be presented publicly, and 3rd parties will drive objective evolution of the best solutions. They will test servers in environments their authors never thought of, and combine them with other servers that their authors did not write. The experiments will be outside of the server authors' control. They will do this in public and, if properly written up, this will implement an ongoing objective programme of artificial selection (i.e. selection by hand) and "natural selection" (machine-automated selection) of agent solutions.
I imagine that many will see the ability of any 3rd party on the Internet to choose and run their own combination of servers as a "cute" but unessential feature. That it is some essentially patronising scheme of allowing the public to think they are helping with science.
But this misses a couple of points. First, that the only way of allowing any professional AI researcher to experiment with servers without permission is to allow every user on the Internet to experiment with the servers.
Secondly, in fact, I believe it will be essential to the success of the idea that more than just AI researchers can work this system. AI researchers are focused on specific projects, with deadlines and many responsibilities. There are not many of them, and they don't have much time. With 3rd parties, there are millions of them, and they have a vast amount of free time. Many schemes have harnessed the power of the millions of idle and curious Net users, e.g. "metacomputing" projects such as large-scale cryptography cracks or SETI data analysis. Within AI, there have been some evolutionary experiments attempting to recruit large numbers of users. See the [Yahoo list of ALife programs online]. Perhaps the most impressive is the "Talking Heads" language evolution project [Steels and Kaplan, 1999] in which 4000 agents have been constructed and tested by online users [Steels, 2000]. Such ideas are in fact fundamental to the Internet. Even the idea of linking itself in the Web is a classic example of harnessing the power of large numbers of people. Other people do some of the work for you, by tracking down sites and presenting you with a pre-built list of links.
Here with the WWM, the idea of a client presenting at a URL a pre-built combination of servers is deliberately modelled on the Web idea of a pre-built selection of links. Working the system will not be as easy as making a link on the Web. It will require some ability and interest - though perhaps no more so than the interest ordinary people have in raising animals and infants. People will write user-friendly software to make it easy to be a non-technical client, experimenting with combinations of pre-existing servers, taking part in ongoing competitions to beat the highest score in a particular World. Obviously the amateurs will report their successes in what will probably be a somewhat haphazard fashion. It will be up to the AI professionals to make sense of what they see, and investigate promising leads and write them up in a scientific manner.
But the power of artificial selection by large numbers of amateurs should not be underestimated by science. Putting unseen-before minds into unseen-before worlds, years after the research groups that made both minds and worlds have vanished, may be of real benefit to science. 3rd parties will run old minds in new worlds, and new minds in old worlds. They will drop new minds into old collections, and run combinations that make no sense. It is certain that they will run combinations of servers that the scientists never thought of.
The professionals may be sceptical of the value of this, but millions of people experimenting with different mind/body/world combinations year after year on the network would represent a richer experimental milieu than anything AI has ever yet built. Thousands of years of artificial selection by farmers and breeders across the world (non-scientists) is now recognised as one of the most thorough scientific experiments in history. Since modern science arose, hardly any new animals or plants have been successfully domesticated, which indicates that pre-scientists really did do all the major experiments [Diamond, 1997]. And of course the scope of what artificial selection has been able to produce, from fruit and vegetables to dogs or racehorses, is breathtaking, and was one of the central inspirations for Darwin's theory [Darwin, 1859, §1, "Variation under Domestication"]. With AI, large-scale artificial selection projects cannot begin unless the AI work comes online.
To continue our theme in defence of 3rd parties, it is often forgotten that many AI researchers are excluded from the AI project as well. The WWM scheme will allow AI researchers in poorly-funded labs, in distant and poor countries, isolated postgraduate students, and so on, to participate in the great AI adventure. Not for charitable reasons, but because their exclusion is a loss to the science.
They will make a more sophisticated use of it than the amateurs. They will write new servers themselves, either from scratch, or by partial reuse of existing servers, by writing MindAS servers, MindM servers and WorldW servers. They will re-use others' work in controlled experiments. They could take an existing world, body, problem, and basic collection of minds, and work on simply adding one more mind to the seething collection. Things like:
And so on, with people modifying and cautiously overriding what already works. Of course they are not actually modifying what already exists in the sense of changing it for other users and removing the old version. Making a wrapper server simply means that 2 servers now exist instead of one. The wrapper author cannot force anyone to use the new wrapper in preference to the old server.
By setting up a system whereby many authors, acting over different times, will contribute to constructing a Society, the WWM provides AI researchers with the ability to do a lot more than just reuse other people's test worlds. In every field, the Internet has already allowed marginalised, distant and poorly-funded researchers to participate in international research like never before, from access to primary literature that may not exist in any library in the client's country, to access to remote databases and software libraries. The WWM is simply continuing this trend.
Part of this proposal is a plea for recognition of the untapped potential in AI - the vast number of minds and worlds that are offline. Some of this comes from my own experience with putting an agent mind online. For I was one of the first people to put an AI mind on the network, an "Eliza"-type chat program in 1989 [Humphrys, 1989].
The original "Eliza" was introduced in [Weizenbaum, 1966]. Now I must say upfront that Eliza-type programs have little to do with serious AI - their understanding of conversation is an illusion. But this does not affect the argument. They are behaviour-producing systems, like our postulated Mind servers. My point is that many people conversed with the original Eliza (not online of course, for there was no network), but it did not stay accessible. Soon, the experiment was over, written up, and the original version of Eliza remained largely inactive until the modern era. In 1989 I put my own Eliza-type program, "MGonz", online on BITNET. Many people talked to it, but soon (in fact, by 1990), MGonz had ceased to interact with the world.
A brief, finite interaction with the world, seen by only a few people, and normally not even online, is the norm in autonomous agents research. In this field it has become acceptable not to have direct access to many of the major systems under discussion. How many action selection researchers have ever seen Tyrrell's world running, for example? [Tyrrell, 1993] How many robotics researchers have ever seen Cog move (not in a movie)? [Brooks et al., 1998] Due to incompatibilities of software and expense of hardware, we accept that we will never see many of these things ourselves, but only read papers on them, and sometimes watch movies. This situation seems normal, but if we ever get used to direct access to remote agent minds and worlds, it may come to seem like bad science not to allow it, and to only report offline experiments that were seen only by the creator of the agent.
But it is not just watching the agents that this is aimed at, it is interacting with them. The problem with inactive agents is that the only experiments run with them were the ones their creator thought of doing. But as we have argued, AI researchers are often too limited in time and resources to explore fully the possibilities of their creations. It is as if animal species only got to live through one individual and one lifespan. As AI develops, we should begin to regard the inactivity of our growing list of old creations as a loss, like the silence of extinct species. The WWM aims to put an end to this inactivity.
The invention of CGI and other technologies has recently resurrected some of the old agent minds of AI, including the Eliza-type programs [see Yahoo list of AI programs online]. The WWM will vastly accelerate this process, by bringing many of the recent autonomous agent minds online in re-usable forms where they can be driven by remote programs. We aim to take all of the Minds and Worlds that human ingenuity can create, and get them all online and interacting with the world indefinitely. To get AI to move away from isolated experiments, and instead develop its own rich, world-wide, always-on ecosystem to parallel the natural one.
It is important to ask, if this scheme is going to be so useful, why AI has not taken this direction in the past. The following may be reasons why (or possible objections):
Indeed many presentations in the Animats or evolutionary fields still seem to assume that one lab can do it all. Some of them recognise the immensity of the problem as we scale up, but when faced with the complexity of dividing up the work, defining communication protocols, and coordinating the results, most have retreated back into either designing whole agents (but saying this is alright for simple agents) or else producing specialist components which may or may not ever be used as part of a larger system.
These are all reasons why re-use is difficult in the software world in general. But in the model of reuse proposed here, none of these things are necessary. It is not necessary to have a clear definition of how the work should be divided up. It is not necessary for components to understand each other. It is not necessary to install anything. Components being re-used can remain at the remote server, are used as a service from there, and are not fully under the client's control. Which leads to the next reason:
Part of the problem, I believe, is models of mind in which the loss of a single server would be a serious issue. Instead of models of mind where hundreds of similar servers compete to do the same job, researchers have been assuming the use of parsimonious minds where each component does a particular task that is not done by others. Certainly, in the early stages of the WWM, with few servers online, clients may feel that their constructed minds are very fragile and dependent on the servers. But some clients will continue to add more and more "unnecessary" duplicated minds to their societies. In a model of mind with enough duplication, the temporary network failure (or even permanent deletion) of what were once seen as key servers may never even be noticed.
The basic answer is that if it is important to us, we will copy it (if it is free) or buy it or rent it. Then we either set up our own server, or continue to use the remote server and just keep our copy offline as backup. Here's how in practice one might be running a large, complex Society of Mind with actually very little risk: Imagine that our top-level AS server is a well-known, standard type that we can get our own copy of. (Not that we actually use it. We continue to use a remote server. But we have our own copy just in case.) Say the World server is a popular test world that is implemented at multiple sites (we just use our favourite one). And we use hundreds of remote minds in a complex society. For all of these we will take the risk of some of them vanishing, and see little reason to buy or copy any of them ourselves. After all, other new and interesting Mind servers are coming online all the time.
As discussed above, one way of making a Society more robust would be to add "unnecessary" duplication. For this to work smoothly, we need an Action Selection scheme where a Society with n identical Mind servers trying to do the same thing, plus other servers, will behave more or less the same as a Society with 1 Mind server trying to do that thing, plus other servers. Then we can add extra copies of the Mind server located at different sites, and won't even notice if some of those sites are down. This property is not true of all AS schemes, though. [Humphrys, 1997] shows that it is true of individual-driven AS schemes such as Minimize the Worst Unhappiness, but is not true of the more common collective AS schemes.
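The property can be seen in a small sketch: with bids as (action, W) pairs, a max-based, individual-driven scheme returns the same action no matter how many identical copies of a Mind are present, while a collective scheme that sums weights per action does not. (The weights and action names below are invented for illustration.)

```python
from collections import defaultdict

def individual_driven(bids):      # the single highest bid wins
    return max(bids, key=lambda b: b[1])[0]

def collective(bids):             # weights are summed per action
    totals = defaultdict(float)
    for action, w in bids:
        totals[action] += w
    return max(totals, key=totals.get)

bids = [("flee", 0.9), ("eat", 0.4)]
duplicated = bids + [("eat", 0.4)] * 5   # five duplicate copies of one Mind

# Adding identical Minds leaves the individual-driven choice unchanged,
# but swings the collective choice:
assert individual_driven(bids) == individual_driven(duplicated) == "flee"
assert collective(bids) == "flee" and collective(duplicated) == "eat"
```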
So using a model like the above, Societies will degrade gradually as the number of broken links increases. In the above work I explicitly addressed the issue of brain damage in a large society of mind [PhD, §17.2.2, §18]. The reader might have wondered what is the point of a model of AI that can survive brain damage. After all, if the AI is damaged, you just fix it or reinstall it surely? Here is the point - a model of AI that can survive broken links. This leads to the more general reason why this whole approach to remote re-use has not been used:
It is clear enough to see how sub-symbolic conflict resolution can occur via numeric weights. But if Minds are strangers written by different authors, in what symbolic language could they communicate? Which leads to the following objection:
We argue, though, that it is not totally premature to start defining mind network protocols at the sub-symbolic level. There are already many schemes of numeric weights, and Action Selection based on weights, in the literature. The sub-symbolic WWM in this paper has been designed so that all current numeric agent architectures (that the author is aware of) can be implemented under the scheme.
There will no doubt be further sub-symbolic (and later, symbolic) protocols. But designing the early ones for simple numeric weights will give us an idea of how to do it in the future for more heterogeneous agents with different representations [Minsky, 1986, Minsky, 1991]. This will be the first in a long family of protocols.
This is neither of these two, but rather is multiple minds solving multiple problems in one body. If anything, it is closer to the field of Adaptive Behavior and its interest in whole, multi-goal creatures whose goals may simply conflict. Agents researchers also tend to work at the symbolic level only, rather than the sub-symbolic as we do here (and as many people do in Adaptive Behavior).
Artificial Life and evolutionary researchers are certainly interested in collective, and even collective network-based models [Ray, 1995], but again the minds are localised, as in the MAS approach. In [Ray, 1995] it is a society of agents that is distributed across the network, not a single agent mind.
Machine Learning (e.g. Reinforcement Learning) researchers have tended to focus on solving a dedicated problem, rather than juggling many partially-solved conflicting problems. That is, they tend to take the DPS approach.
The internal server workings do not have to be made public. The only demand is that the server replies to external requests according to the protocol. The Mind server can be a symbolic mind, a neural network, a genetically evolved program, or anything else. It does not have to tell us. This will be important for commercial server owners who want to protect their investment, or sell access to their servers. It may also be important for academic projects. Although it could be argued that all academic projects should publish their source code (and there is no excuse not to, now that the Web exists) so that experiments can be replicated.
An interesting side effect of hidden server internals may be a more level playing field between different types of AI. Currently people tend to look for algorithms only within a particular sub-field - neural net researchers look for other types of neural net, symbolic AI researchers look for symbolic routines, and so on. Here, each algorithm will stand on its merits, and it may be better science if we do not know, at least initially, what is inside the server. The doctrinaire neural net researcher may be embarrassed to discover that the excellent server he has been using for years is in fact a symbolic AI server. Such objective symbolic v. non-symbolic competition has in fact already occurred in robot soccer [Stone and Veloso, 2000].
It will be interesting to see how this develops. Some researchers may refuse to use servers unless they know how they work. They will argue that it is bad science, since any systems built using such servers can only be replicated as long as the server owner continues to provide the secret and unknown service. Others may argue the above case - that it is a breath of fresh air (and good science) to be able to judge algorithms entirely on performance without, at least initially, knowing anything else about them. The answer really is that it will be good science if, while server authors do explain in some detail what is inside, such information is routinely ignored as different combinations of systems are constructed and tested on merit.
Allowing servers to be used as components in larger systems is central to the WWM idea. One issue though is: Can you track how your server is being used? Imagine that your server is being called by a popular MindM server. To the outside world, does the MindM server get all the credit? Can a 3rd party look at a successful Society, and see all of the servers involved in it at all levels? Or do they just have to assign all the credit to the top-level server, not knowing what is behind it? We suggest the following:
The latter point is important. On the WWM we expect that thousands of server authors will leave their servers to be run in the research community long after they have gone. Hence we want these lists to be published online, rather than simply known internally by the programs or recorded in logfiles.
An interesting question is whether you could write a malicious wrapper server, where the MindM server does not acknowledge that it is calling another Mind server, and tries to take the sole credit for the functionality. There are many possible answers to this (e.g. servers publish their usage logs online), but we doubt it will be an important issue. More likely the AI community will simply ignore servers unless they come (i) from serious and respectable sources and (ii) explain in exhaustive detail how they work and what other servers they are calling.
If Mind servers learn from interacting with a World, while part of a Society, where is that new knowledge stored? One might say it should be stored at the Mind server, and that seems reasonable in many cases (if the clients can tolerate the fact that the server may change). There are, however, a couple of problems:
This does not mean that other clients cannot access the new knowledge. It just means they have to do so explicitly: "Give me the Mind that you learnt with client c".
A Mind server that learns also raises the question of which of these we want:
We may pick one of these at the start of the run, or even half-way through the run we may decide to get the server to go back and re-learn. We can control all this by passing a "Temperature" parameter to the server. Temperature = 0 means we want the server to exploit its current knowledge with no exploration. High Temperature means we want the server to do a lot of exploration. As Temperature -> infinity the server tends to engage in maximum exploration. The use of the word "Temperature" is explained in [PhD, §2.2.3]. Exactly what counts as a "high" Temperature for this server will be explained by the server at its URL. Rather than just provide the Temperature at the start, we might provide it on a step-by-step basis, so that at any point the client may send Temperature = 0, which means "Send me your best action, based on your learning so far". Generally, the temperature will decline over the course of the learning run. We have two basic strategies:
If the world changes, we may ask the Mind server to re-learn its Q-values from scratch, i.e. increase the Q-Temperature. If the collection of minds changes (i.e. the competition changes), we may ask the Mind server to re-learn its W-values from scratch, i.e. increase the W-Temperature.
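As an illustration of how a learning server might interpret the Temperature parameter, here is a sketch using Boltzmann (softmax) selection over a table of Q-values, in the spirit of the usage explained in [PhD, §2.2.3]; the function and argument names are our own.

```python
import math, random

def choose_action(q_values, temperature):
    """Sketch: Boltzmann (softmax) selection over a dict {action: Q}.
    Temperature = 0 means pure exploitation ("best action so far");
    as Temperature grows, the choice tends towards maximum (uniform
    random) exploration."""
    if temperature == 0:
        return max(q_values, key=q_values.get)
    actions = list(q_values)
    weights = [math.exp(q_values[a] / temperature) for a in actions]
    return random.choices(actions, weights=weights)[0]
```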