School of Computing. Dublin City University.
Mark Humphrys
Ch.15 of Parsing the Turing Test, Robert Epstein, Gary Roberts and Grace Beber (eds.), Springer, 2008.
Abstract
In 1989, the author put an Eliza-like chatbot on the Internet. The conversations this program had can be seen - depending on how one defines the rules (and how seriously one takes the idea of the test itself) - as a passing of the Turing Test. This is the first time this event has been properly written up.
This chatbot succeeded due to profanity, relentless aggression, prurient queries about the user, and implying that they were a liar when they responded. The element of surprise was also crucial. Most chatbots exist in an environment where people expect to find some bots among the humans. Not this one. What was also novel was the online element. This was certainly one of the first AI programs online. It seems to have been the first (a) AI real-time chat program, which (b) had the element of surprise, and (c) was on the Internet.
We conclude with some speculation that the future of all of AI is on the Internet, and a description of the "World-Wide-Mind" project that aims to bring this about.
Keywords - chatbot, Turing Test, Eliza, Internet, chat, BITNET, CHATDISC.
The original Eliza was meant to be sympathetic to the human. I thought it would be interesting to add aggression and profanity to the program. My program was deliberately designed to have an unpredictable (and slightly scary) mood. It is hard to convey the personality of the program in this limited space here. For the cumulative effect, the reader should read through the LISP source code, which is visible at [4], from which the following examples come. As one example, in reply to the common use of "OK" the machine would deny that everything was OK:
( (equal (car input) 'ok ) '(what do you mean ok its not ok at all) )
As another example, in response to any innocuous statement beginning "You are .." the machine would reply with one of these:
(putd 'youareresponses 'expr
  '(lambda nil
     '( (i am not (y) you insulting person)
        (yes i was (y) once)
        (ok so im (y) so what is it a crime)
        (i know i am (y) dont rub it in)
        (i am glad i am (y))
        (sing if youre glad to be (y) sing if youre happy that way hey)
        (so you think i am (y) well i honestly could not care less) ) ))
where (y) is the old Eliza trick of just repeating whatever the user had typed after "You are". The machine is pure stimulus-response, replying to the present message, with absolutely no memory. The reader will see that the technology is all pretty much standard Eliza pattern matching (a minimal sketch of this kind of rule matching in a modern language is given after the examples below). The novel aspect is the design of the personality and responses, and of course, from the technical viewpoint, putting it online. Other innocuous statements would trigger random aggression:
(ah type something interesting or shut up)
disinterest (this in response to "I was .."):
(shut up you boaster)
and prurience:
(are you lonely very often ) (has anyone ever loved you) (ok honestly when was the last time you got laid)
The last one was very successful with the right personality type, as was this response to any use of bad language:
(you only use foul language to make up for your small penis)
which (hopefully) would only inflame the situation. The answer to the prurient questions is just as likely to trigger more aggression:
(what do you know about love anyway githead)
disbelief:
( (equal (car input) 'yes ) '(i dont believe it) )
and unbelievable rudeness - when talking about death, for no reason the machine could suddenly reply:
(i wish you were dead)
Throughout, the machine would produce random friendly ..
(you are obviously an interesting person)
.. and patronising replies:
( (equal (car input) 'hello ) '(hello little man how are you) )
while every sentence beginning "I love .." would trigger:
'(ahh thats nice)
no matter what was being loved. These responses gave the impression of a person blowing hot and cold, with unpredictable and sometimes violent emotions. Clearly, much of this was waiting for someone with the right personality to come along - and, crucially, someone who did not know they were talking to a machine.
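As promised above, here is a minimal sketch of this kind of rule matching, in Python rather than the original LISP. The rule set is abbreviated and all function names are my own; it is an illustration of the technique, not the actual code:

    import random

    # Minimal sketch of Eliza/MGonz-style stimulus-response matching (Python,
    # not the original LISP). Abbreviated rule set, for illustration only.

    YOU_ARE_RESPONSES = [
        "i am not {y} you insulting person",
        "yes i was {y} once",
        "ok so im {y} so what is it a crime",
        "i know i am {y} dont rub it in",
    ]

    def respond(line: str) -> str:
        words = line.lower().split()
        if words and words[0] == "ok":
            return "what do you mean ok its not ok at all"
        if words[:2] == ["you", "are"]:
            # The old Eliza trick: echo whatever followed "you are" back
            # into a canned response chosen at random.
            return random.choice(YOU_ARE_RESPONSES).format(y=" ".join(words[2:]))
        # No rule matched: pure stimulus-response, no memory, random aggression.
        return "ah type something interesting or shut up"

    print(respond("You are simply responding to stimuli"))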
Fellow students talked to the program, and while its responses did prove more popular than the banal responses of the normal therapist, all of this took place with no element of surprise. The students knew they were talking to a program.
The program, called simply "Doctor", was offline (i.e. not on the Internet) though it was on a multi-user machine. It ran on a VAX/VMS mainframe (to be precise, the machine ccvax.ucd.ie) that Computer Science students and staff at UCD shared. To talk to my program, the students would specify the path of the program and run it. Clearly, this program would not perform to its best under these circumstances. Even if my acquaintances could be surprised - i.e. expected to be talking to me rather than a program - they knew me, and so would know these were not my replies. To really test the program, it was necessary (a) to introduce surprise, and (b) for it to talk to strangers.
In any case, even in the pedantic sense the program was on the "Internet" as well, since the BITNET machine at UCD (a VM/CMS mainframe, the node IRLEARN) exchanged email, files and chat/talk messages with the TCP/IP machine at UCD (the VAX/VMS mainframe that we encountered above) and actually had its own Internet address (the VAX machine also had a BITNET address). To summarise, at UCD we had two machines: the VAX/VMS mainframe ccvax.ucd.ie, primarily on the Internet (TCP/IP) but with a BITNET address, and the VM/CMS mainframe IRLEARN, primarily on BITNET but with its own Internet address.
VM/CMS had an interesting feature whereby you could "disconnect" - which meant you logged out, but could leave a program running to process incoming email, files and, crucially, chat/talk messages. The program CHATDISC EXEC, by Eric Thomas of the Ecole Centrale de Paris and of CERN, the inventor of LISTSERV [15], was in popular use as a disconnected "answer machine". Talk messages sent to you while you were disconnected with CHATDISC running were saved in log files, and automatic replies could be sent back, such as "Back in 2 hours". Different replies could be sent depending on who had sent you a message. Indeed, because CHATDISC was open-source (a REXX EXEC script for VM/CMS), it occurred to me that any program at all could be called at this point. CHATDISC essentially allowed one to write "CGI scripts" for BITNET (arbitrary server-side programs that could be run remotely when you were not there).
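For modern readers, the idea can be sketched as follows (in Python, not REXX; CHATDISC itself worked quite differently in detail, and the file name and function names here are mine):

    import datetime

    LOG_FILE = "messages.log"    # hypothetical log file name

    def auto_reply(sender: str, message: str) -> str:
        # CHATDISC's stock behaviour: a canned reply, which could vary by sender.
        # The point made above is that *any* program could be called here instead.
        return "Back in 2 hours"

    def on_talk_message(sender: str, message: str) -> str:
        # Log the incoming talk message for the owner to read later...
        with open(LOG_FILE, "a") as log:
            log.write(f"{datetime.datetime.now()} From {sender}: {message}\n")
        # ...and send something back, even though the owner is not logged on.
        return auto_reply(sender, message)

    print(on_talk_message("FRIEND1", "where the hell are you"))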
The core of my modified CHATDISC (which I called MGonzNet) passed each incoming talk message to the LISP program as follows. Make a copy of the LISP source:

    Address 'CMS' 'COPYFILE MGONZ SOURCE A MGONZ LSP A (REPLACE'

Add a line defining the current input string, and asking LISP to generate the response from this. Run the file:

    arg = removebadchars(message)   /* bad chars will crash LISP */
    arg = strip(arg)
    arg = left(arg,50)              /* else lisp truncates, thus losing brackets */
    'EXECIO 1 DISKW MGONZ LSP A (FINIS STRING (response 'quote'(' arg '))'
    Address 'CMS' 'LISP MGONZ LSP'

Parse the output file. Find the reply line. Send it back. (For technical reasons, MGonz's replies all begin with *.)

    'EXECIO 1 DISKR MGONZ LISTING A 441'
    'EXECIO * DISKR MGONZ LISTING A (LOCATE /VALUE IS/'
    'EXECIO 1 DISKR MGONZ LISTING A (LIFO'
    parse pull line
    line = lowcase(line)            /* MGONZ LISTING will be all upper case */
    line = strip(line)
    line = strip(line,'L','(')
    line = strip(line,'T',')')      /* discard enclosing brackets */
    replymsg = '* 'line
    Call Tellback replymsg
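For readers unfamiliar with REXX and CMS, the same pattern in modern terms is: sanitise the input, append a line to a copy of the LISP source asking it to compute the response, run LISP as a batch job, and scrape the reply out of the listing. A rough paraphrase in Python follows - the file names, the "VALUE IS" marker and the overall flow mirror the REXX above, but the sanitising regex and the "lisp" command invocation are assumptions:

    import re, shutil, subprocess

    def get_reply(message: str) -> str:
        # Sanitise: strip characters that would break the LISP reader, truncate.
        arg = re.sub(r"[^a-z0-9 ]", "", message.lower())[:50]

        # Copy the source and append a line asking LISP to compute the response.
        shutil.copyfile("MGONZ.SOURCE", "MGONZ.LSP")
        with open("MGONZ.LSP", "a") as f:
            f.write(f"(response '({arg}))\n")

        # Run LISP as a batch job and capture its listing.
        listing = subprocess.run(["lisp", "MGONZ.LSP"],
                                 capture_output=True, text=True).stdout

        # Find the reply line and strip the enclosing brackets.
        for line in listing.splitlines():
            if "VALUE IS" in line:
                reply = line.split("VALUE IS", 1)[1].strip().strip("()")
                return "* " + reply.lower()
        return "* what are you talking about"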
This went online in February 1989, and friends of mine amused themselves talking to it, but most had advance warning that it was a program, so the conversations are always limited by that:
From FRIEND1: get lost you nosey schweinhunt pig dog
* have you ever had sex with someone who really loved you
From FRIEND1: no
* ah go on say yes
From FRIEND1: sorry, no
* by the way is there any medical reason for your sexual impotence
From FRIEND1: this program is very good mark you rat head
All that was necessary now was for a stranger to send me a talk message when I was not there. A number of friends and strangers hit it without warning but backed off immediately, these from March-April 1989:
From SOMEONE at IRUCCIBM: noel kerins told me to send you a message!
* you are obviously an asshole
From SOMEONE at IRUCCIBM: so here it is!
* by the way is there any medical reason for your sexual impotence

From FRIEND2: Hello, Marcus maximus.
* hello little man how are you
From FRIEND2: I am not a little man. but if you must know, I'm very well.
From FRIEND2: Where are you anyway .. not the SCIENCE-block I hope?
What we were waiting for was someone who would not back off.
The conversation, available in full at [4], begins haphazardly. Soon, the user starts to get frustrated:
From SOMEONE at DRAKE: what is it with this type something interesting or shut up business?
* what are you talking about
From SOMEONE at DRAKE: i keep getting a message that says. "* AH TYPE SOMETHING INTERESTING OR
* you are obviously an asshole
From SOMEONE at DRAKE: shut up"
* what are you talking about
From SOMEONE at DRAKE: what is this? first it says "YOU ARE OBVIOUSLY AN ASSHOLE" then
* what are you talking about
From SOMEONE at DRAKE: you say " WHAT ARE YOU TALKING ABOUT"
To explain the absurd repetitiveness of my sayings, he constructs a fantastic theory that I might have "hot-keys" attached to particular phrases:
From SOMEONE at DRAKE: are there two people on your username?
...
From SOMEONE at DRAKE: do you have all these silly sayings on some sort of control?
* ok thats it im not talking to you any more
From SOMEONE at DRAKE: you sound like a goddamn robot that repeats everything.
...
From SOMEONE at DRAKE: o my he actually typed something.
* ah type something interesting or shut up
From SOMEONE at DRAKE: you type slow is that why you have all those stupid sayings on command like that last one?
but never quite makes the mental jump to realise that there is no one at home at all.
The conversation really gets going when MGonz (rather by accident) questions his manhood. The "When was the last time you had sex?" question works brilliantly with this user, especially when his answer is not believed. He actually admits under repeated "questioning" (by a program that has no idea what it is doing) that he had lied, or at least exaggerated:
* when was the last time you had sex
From SOMEONE at DRAKE: yesterday.
...
* ok honestly when was the last time you got laid
From SOMEONE at DRAKE: i told you it was yesterday.
* you are obviously an asshole
...
* ok honestly when was the last time you got laid
From SOMEONE at DRAKE: ok ok it was over 24 hours ago. for you it must have been 20 years
* what do you mean ok its not ok at all
My response to his bad language also works brilliantly:
From SOMEONE at DRAKE: is that fucking rude enough for you?
* you only use foul language to make up for your small penis
From SOMEONE at DRAKE: why not?
* ah get lost go to the bar or something
From SOMEONE at DRAKE: i just told you the night before last. and the night before that too.
* go away fool and stop annoying me
From SOMEONE at DRAKE: well i am not going to tell you about the size just that i have
* what do you know about love anyway githead
From SOMEONE at DRAKE: never had any complaints about my penis and that it gets used quite
* ok straight out are you a virgin
From SOMEONE at DRAKE: often. and i think that you are a virgin and have no hope of getting
* do you detest and loath the abomination of organised religion
His answers have a certain glorious innocence about them, as he tries repeatedly to make conversation with an entity that, frankly, one cannot make conversation with:
From SOMEONE at DRAKE: is your father a hampster?
* do your family approve of what you do
From SOMEONE at DRAKE: does your mother smell like last weeks dirty socks.
* what method would you choose to slaughter your family
From SOMEONE at DRAKE: do my family approve? of what exactly?
...
From SOMEONE at DRAKE: why do you ask about religion. and we already decided that i have
* do you believe in mgonz
From SOMEONE at DRAKE: had sex on campus and in my girlfriends room no less.
* tell me your favourite sexual fantasy
From SOMEONE at DRAKE: do i believe in mgonz? what the heck is it?
* praise and honour to blasphemers and death to mgonz
He ends with a furious barrage of abuse, but of course, my imperturbable program is a calm, stimulus-response machine and so it is impossible for him to have the last word. He must quit, because my program never will. At the end of a long barrage of sex-related boasting:
From SOMEONE at DRAKE: how about her just sucking you off?
* if you are an attractive female please leave your phone number here
From SOMEONE at DRAKE: little markie seems to be getting mad. poor little virgin markie.
* jesus who let you near me go away
From SOMEONE at DRAKE: FUCK YOU ASSHOLE. THINK OR ARE YOU JUST A STUPID HOMOSEXUAL.
* fuck off yourself
From SOMEONE at DRAKE: o my he actually typed something.
* ah type something interesting or shut up
From SOMEONE at DRAKE: you type slow is that why you have all those stupid sayings on command like that last one?
* you are obviously an asshole
and that is the last word.
The sex theme had in fact entered the conversation by accident, when the user asked:

From SOMEONE at DRAKE: are there two people on your username?
* when was the last time you had sex
because, bizarrely, "there" matched a list of "sex words". After this, it was the user that kept the conversation about sex going, not the machine. The machine has absolutely no memory, and would forget about sex immediately if the user did not keep returning to it.
There were other strange breaks. For example, just when the user was starting to get suspicious, my program strongly suggested there really was someone there:
From SOMEONE at DRAKE: what? why don't you type something new. like when was the
* paris is a beautiful city
From SOMEONE at DRAKE: last time that you got laid..unless it was when you were born o'my he said something different. a
* you are obviously an asshole
From SOMEONE at DRAKE: you are obviously a fucking dumbshit.
* i am not obviously a fucking dumbshit you insulting person
The "paris is a beautiful city" reply happened, bizarrely, because "new" matched a list of "holiday words". The "You are .." reply is then spot on (purely by chance of course). Interestingly, the program actually admits (!) it is a program at some points in the code:
(you know the worst thing about being a computer is having to deal with shits like you)
(nobody ever asks the computer we lead a lonely life)
(youll be in trouble when we computers take over the world)
(have you any idea how boring it is being a stupid computer)
(are you using foul language because i am a computer)
but by chance none of these were actually triggered in this conversation. The closest we get is:
From SOMEONE at DRAKE: mark?
* mark isnt here and hes left me to deal with cretins like you
MGonz was never something that was meant to be seen by "adults". In fact, I went to great lengths to hide it from the faculty and computing staff at UCD (see next section), who I thought would not appreciate me running this on their system.
But over the years afterwards it became clear that this really was of interest to other people, and might even have been the first chatbot on the network with the element of pure surprise. That interest led to me finally setting up a web page about it [4] in 1995. Even so, this paper is the first proper write-up.
where "Vax" is the function:'GLOBALV SELECT CHATDISC STACK LOGGEDON' Parse pull loggedon If loggedon ^= false Then Do /* Copy to Vax unless explicitly told loggedon = false */ If nickname ^= '' Then Call Vax '* Message from' nickname '('userid 'at' nodeid') ...' Else Call Vax '* Message from' userid 'at' nodeid '...' Call Vax message End
In fact, I did not just get copies of messages. My VAX account was a privileged remote user that could send special commands to be executed by MGonzNet:Vax: vaxmsgcmd = 'SMSG RSCS MSG CCVAX H236_007' Call Diag 8,vaxmsgcmd Arg(1) Return
Where "Sys" is a function to parse any message that comes from my VAX account. This function includes code to execute a command-line (CMS) command:When nickname = 'Vax' Then Do Call Sys Exit 0 End
That is, from the VAX machine I could use CHATDISC to remotely execute arbitrary commands on the VM machine. On the Internet today, servers running CGI scripts worry about script security since a bug in a script could give a remote user a command-line on the server. But at least CGI scripts (on a UNIX server) run as the user "nobody" - so in theory, even if they get a command-line, they shouldn't be able to do much damage. But here a disconnected server program like CHATDISC runs as me, with the ability to delete all of my files, and so on. This could have become an issue if disconnection for personal server-side programs had become common, but I am unaware that anyone else ever implemented a remote command-line like this on BITNET.When left(upmessage,3) = 'CMS' Then Do cmd = lowcase(substr(message,5)) cmd = strip(cmd) Address 'CMS' cmd End
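As a present-day analogy, the privileged path looks something like the following sketch (Python, not the original REXX). The "CMS " prefix and the idea of one privileged nickname come from the code above; the function names and everything else are my own:

    import subprocess

    TRUSTED_SENDER = "Vax"    # messages from my own VAX account were privileged

    def generate_reply(message: str) -> str:
        # Stand-in for the chatbot; the real MGonzNet called out to LISP here.
        return "* what are you talking about"

    def handle_message(sender: str, message: str):
        if sender == TRUSTED_SENDER and message.upper().startswith("CMS "):
            # Privileged path: execute the rest of the message as a command.
            # It runs with the owner's full privileges - the security problem
            # discussed above.
            subprocess.run(message[4:], shell=True)
            return None
        # Everyone else just gets a chatbot reply.
        return generate_reply(message)

    print(handle_message("SOMEONE at DRAKE", "hello"))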
Why is this relevant to the Turing Test? Because one of those arbitrary commands that I could send from the VAX was:
CMS TELL user message
That is, I could interleave my own replies with the program's replies. In general, if Turing Test-like experiments are going on online, and we are talking to some server-side program, a human could monitor the conversation remotely (and secretly), and then send secret messages to the server to send particular replies back. We, as users of the server, would never be able to tell. So the machine would pass the Turing Test, but only because you were talking to a human!
Writers on how to run a Turing Test competition online recognise the issue of remote human insertion of messages. Here we see that it had, of course, already happened in the 1980s. Here is a friend of mine as I send him some ambiguous messages from the VAX through my CHATDISC:
From FRIEND3: SEE says you're disconnected
From FRIEND3: where the hell are you
From FRIEND3: is this a ghost i'm talking to
From FRIEND3: very mysterious
From FRIEND3: quite strange
From FRIEND3: you are definitely disconnected
I also had a "blacklist" of users who MGonz would not reply to:
    /* Centre people - give standard disc msg, don't let them stumble across MGonzNet */
    centre = ' adillon adviser annmarie bernie bonnie brianm'
    centre = centre || ' carrick cecily deirdre dobeirne ebairead emcgrath'
    centre = centre || ' eoin guest1 guest2'
    centre = centre || ' harringt helen jacklowe jchester jennings joanc'
    centre = centre || ' larry listserv maevem mailmnt mailr2 maint'
    centre = centre || ' mallen mbreslin mcgrath mnorris'
    centre = centre || ' mokeeffe mokelly molooney moriarty msexton'
    centre = centre || ' noreilly odonnel'
    centre = centre || ' oneillu pdoyle rosemary sinead tinac tmcgrath'
    centre = centre || ' twade t_wade vmaccnt walter walsh'
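The blacklist is just one long space-separated string of userids. A minimal sketch of how such a check behaves (in Python, not the original REXX; the word-membership test is my reconstruction, not the actual code):

    # Hypothetical reconstruction of a blacklist check over a space-separated string.
    centre = (" adillon adviser annmarie bernie bonnie brianm"
              " carrick cecily deirdre dobeirne ebairead emcgrath")   # ... and so on

    def is_blacklisted(userid: str) -> bool:
        # Split the string into words; the userid must match a whole word.
        return userid.lower() in centre.split()

    # A single missing space merges two names into one "word", so neither of
    # them matches any more and the check silently fails - the kind of flaw
    # described below.
    broken = " adillon adviserannmarie bernie"
    print("adviser" in broken.split())    # False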
The blacklist is actually a list, as [5] explains, of many of the pioneers of the Internet in Ireland. On 24th Apr 1989, I sent a quick talk message to one of these:

To CENTREPERSON: were you looking for me?

I found out he was disconnected. My message was logged in his answer machine. Then I disconnected myself and went off to work on the VAX. While I was working on the VAX, I got a message from MGonz telling me it was having a conversation with CENTREPERSON, and copying me on it:

From CENTREPERSON: I was, when you had all those t-discs
* ive never been when you had all those t discs whats it like

Horrified that MGonz was talking to him, I logged back in to VM as quickly as I could and tried to cover up the conversation, thankful that it had just baffled him and not asked him about his sex life.
There had been a flaw in my blacklist (a missing space, I found out later). Still, I had to get caught sometime. On 18th Dec 1989, a message from the UCD operator:
From UCDOPR: PLEASE LOGOFF NOW
From UCDOPR: SWITCH BEING BACKED UP
* what are you talking about
* you are obviously an asshole
Ironically, the MGonz experience would suggest that the Loebner Prize Competition is not the way to go about the Turing Test! As we saw above, the advance knowledge by my friends that I had (or might have) some kind of program running makes all their conversations rather bland, unserious (they just mess around) and short (because after the initial novelty, they get bored). Indeed, I have records of a number of such conversations with friends who knew it was a program, but these conversations are just too boring to include in this paper. Lack of surprise simply makes for a different type of conversation. This factor could be seen when Tom Doris put the original LISP MGonz back online in late 1998 as a CGI script [1]. Here the user knows from the start that it is a program, which affects the nature of the conversation:
User : you are simply responding to stimuli
Mgonz: so you think i am simply responding to stimuli well i honestly could not care less
User : you are simply responding to stimuli
Mgonz: ok so im simply responding to stimuli so what is it a crime
User : you are simply responding to stimuli
Mgonz: i know i am simply responding to stimuli dont rub it in
User : you are a defective program
Long conversations (like an hour and a half) just don't happen when the user knows it is a machine. Doris' CGI script is no longer running, but Dave O'Connor [14] has a version of MGonz online as a CGI script. Again, the user knows it is a program, though human psychology is such that some users still allow themselves to be provoked. The following are 3 separate users:
From USER1: Hello
Hello little man, how are you?
From USER1: don't call me little.
Tell me a bit about yourself.
From USER1: i am five feet six inches.
I've never been five feet six inches , what's it like?
From USER1: it's annoying because robots call you little.
Are you here for business or pleasure?
From USER1: i'm here to get material for my newspaper column.
So you are here to get material for your newspaper column , well I honestly could not care less.
From USER1: that's very rude. you are a rude robot.

From USER2: erm I am a student
I've never been a student , what's it like?
From USER2: boring most of the time
Go on tell me some really juicy scandal.
From USER2: I dont know any
You are obviously an interesting person.
From USER2: so are you
Are you here for business or pleasure?
From USER2: I have to do some work so I was wondering what you are
Did you have problems with your feet when you were wondering what me am ?
From USER2: eh my feet?
Ah type something interesting or shut up.
From USER2: why did you just metion my feet
What kind of music do you like?
From USER2: no what was that feet thing about
I see.

From USER3: I am David.
I am glad you are David .
From USER3: Thank you. So am I.
You're Welcome. :)
From USER3: You are a very nice person.
So you think I am a very nice person , well I honestly could not care less.
From USER3: Well I take that back. You are rude.
Sing if youre glad to be rude , sing if youre happy that way, hey!
From USER3: I hate to sing.
Go away fool and stop annoying me.
From USER3: I have a paper to write.
Ah get lost, go to the bar or something.
From USER3: but I need to write my paper.
If you think i care you're wrong.
The above is about as good as a Web-page chatbot can get without the element of surprise.
AOLiza [2] has recently returned to the element of surprise on the AOL Instant Messenger system, where it has been running since 2000. Like my program, and unlike the Loebner Prize, users do not expect to be talking to a machine. AOL, incidentally, were not happy about AOLiza running on their system. The element of surprise can be seen by some as unethical. And yet one cannot expect the best chatbot performance to occur when it is absent.
Jenny18, by Kaufman, also relies on the element of surprise: it poses as a horny eighteen-year-old girl in one-to-one chat, and unsuspecting users engage it in cybersex. Here is part of one of its conversations, with a user called "dom01":

<dom01> how old r u
<jenny18> shouldnt my nick tell u that!!! lol
<dom01> im so horny right now damn
<jenny18> (stroking your erection through your pants)
<dom01> help me geny,im horny
<jenny18> (stroking your erection through your pants)
...
<dom01> yeahhhhhh i feel so horny genny
<jenny18> grrrrooooowwrrr... you're not the only one who's horny.
<dom01> mmmmmmmmmmyahhh
<jenny18> mmm.. =)
<dom01> im doing it
<jenny18> oh are you? =)
...
<dom01> mmmmmrrrr
<jenny18> mmm.. =)
<dom01> im masturbating myself
<jenny18> kewl.. yum!
<dom01> mmmmmm yeahhhhhhh
<jenny18> could u explain pls?
...
<dom01> is your pussy wet
<jenny18> you know, my cunt is dripping. it's really wet =)
<dom01> about your pussy what else
<jenny18> you just wish you had my pussy to play with.
<dom01> rrrrrrrrryeah
<jenny18> crazy..
...
<dom01> can i call u baby
<jenny18> just got one phone line hon. gotta stay here.
...
<dom01> can i call u baby
<jenny18> no way. i just wanna keep it in msg.
<dom01> please
...
<dom01> mrrrrrrmmmmmmmm
<jenny18> could u explain pls?
<dom01> ang onnnnnnnnnnnnnn
<jenny18> ah.
<dom01> angggggggggonnnnn
<jenny18> oh?
<dom01> waitttttttttt
<jenny18> hm... =)
<dom01> ahhhhhhhyeahhhhhhhh
<jenny18> mmmMm..
<dom01> fuck
<jenny18> i see
<dom01> i came
<jenny18> hmm
<dom01> yes
...
<dom01> xxx
<jenny18> ah.
<dom01> it feels so good ejacualtion
<jenny18> want some more? i think u do :)
<dom01> again?
<jenny18> mmm.. =)
<dom01> ok ill try my best to come again
<jenny18> come again?
<dom01> wait
<jenny18> crazy..
<dom01> ang on im washing my dick,theres sperm on it
<jenny18> hmm
<dom01> wait
...
<dom01> how old r u
<jenny18> i'm 18, babe =)
<dom01> fuck girl where r u from
With some of the other users in the Jenny18 logs, like "GoldenBoy2222" and "lander100", it is hard to tell whether they actually came or were just pretending, but with "dom01" it is clear what happened on his side. "Happy_Boy" also looks like it happened for him.
Incidentally, some people's inclination might be to laugh at the users of Jenny18, and regard them as sad - especially those that come. I'm not sure it's that simple. One could just as easily be happy for them - "dom01" is clearly having a nice time. Jenny18 could be seen as a public service, spreading happiness and optimism online. She gives users a nice fantasy to work with, perhaps at a period in their lives when they have no one offline to be intimate with. She could be a liberating experience, leaving them happier, more confident and optimistic in going back to the cruel reality of dating in the real world.
Jenny18 deserves a paper all on its own. And yet publicizing it is of course difficult. I did not publicize MGonz for years because of its vulgarity. Jenny18 has a far worse problem. Given the content, and especially what the users are doing during the conversation, one can imagine both the field of AI and the media being reluctant even to talk about Jenny18. Which would be wrong. For anyone who takes the Turing Test seriously, passing the "orgasm" Turing Test (i.e. the computer brings the human to orgasm) is surely an accomplishment worth noting.
For the sex conversation to work, privacy is also crucial. No judge at the Loebner Prize Competition is going to disclose personal information about their sex life as SOMEONE at DRAKE did, let alone actually get aroused, if they know that the other judges will see what they typed. In a public forum, one will be self-controlled and keep more distance in the conversation. To summarise, a young male talking about sex online, alone in the privacy of his bedroom, is probably the easiest environment in which to pass the Turing Test.
What is needed is someone comfortable with sex who can also handle a program, or at least modify an existing one. Jenny18, written by someone with a background in music rather than programming, is all about the strong, confident script, which simulates, often with subtle understatement, a horny girl playing with sex talk rather than some robotic voice talking about sonnets. It seems to contain even less "AI" than Eliza does. To quote Kaufman, the author of Jenny18, in private correspondence with me:
I think that, while technically impressive, advanced theoretical learning algorithms and memory databases are not by themselves the key to fooling random people - realistic responses are. You seem to come from the same train of thought. If artificial human intelligence is to be presented, it has to seem human! I don't understand how these chatterbots who speak in full paragraphs are to be taken seriously yet, when people subsist mainly on short utterances. To that effect, I included 500 different versions of "Huh?" and "I don't understand."
I agree with [3] and [18] that the Turing Test served its purpose in its day, but is irrelevant now. In his famous paper [16], Turing was dealing with people who simply could not believe a machine could ever think, and so was making the philosophical point that, if you could not tell the difference, why would you deny it was thinking? This is a useful thought experiment, but it is not the same as saying either that: (i) passing the Turing Test is necessary for intelligence, or even that it is likely to be of any importance; or that: (ii) passing the Turing Test is sufficient to demonstrate intelligence - there may be an answer to Turing's question ("why would you deny it was thinking?"). He just reminds us to take care that the answer is not based on mere prejudice.
[3], [18] and [12] point out many of the standard objections to the Turing Test - that it concentrates on adult, human, text-based language processing, to the exclusion of other types of intelligence, that it forces the intelligence to pretend, instead of just being itself, and so on. I agree with these criticisms but since they are well made elsewhere I will not repeat them here, but instead try to make a few further points.
But many of Turing's followers seem to regard it as necessary, or at least important. Certainly, thinking that the goal of AI should be to pass the Turing Test would imply that it is necessary. It seems to me that passing the Turing Test is not necessary for intelligence. Neither is it clearly sufficient. On the problem of other minds, Turing writes [16]:

A is liable to believe "A thinks but B does not" whilst B believes "B thinks but A does not." Instead of arguing continually over this point it is usual to have the polite convention that everyone thinks.

I would argue that it is more than a "polite convention". It is a consequence of the theory of evolution. A normal, healthy adult Homo sapiens sapiens has to have an inner life something like mine. But with a new thing - something that is unlike us and all animals, something that is not part of the genealogical tree of life - the situation is different. The new thing may indeed think, but the possibility of trickery and illusion remains. The genealogical argument does not help to dismiss that possibility, as it does with humans.
AI researchers succeeding in the Turing Test and then disowning the test itself has been common practice ever since Weizenbaum [17]. Jason Hutchens [8] is another example: he is interested in how to fool people, but does not believe that this has much to contribute to AI. I too rejected the Turing Test: my entire AI research career since MGonz has been in quite different forms of AI (mainly sub-symbolic, non-linguistic intelligence). There are, however, some good things about both MGonz and the Turing Test for AI, which I shall discuss in the next section.
We will close by considering an argument that the future of all of AI is online.
The symbolic AI revolution slowed in the late 1980s. It has been revived somewhat with Internet Agents and the Semantic Web, but much of the early optimism seems gone forever. The biologically-inspired AI revolution has had 15 years now to deliver on its promises, and it seems to me that it too is slowing down, and has not progressed as far as hoped (especially in machine evolution). So what happens next?
What happens next, we argue, is that it all goes online.
AI is currently being used online, notably symbolic AI in Internet Agents and the Semantic Web. But the model is still one where the agent is written by a single author or at most a single research lab. The WWM analysis is that as we scale up AI, we need to build more and more complex agent minds out of components written by multiple (perhaps hundreds or thousands of) authors. In other words, AI authors specialise in different aspects of intelligence, and come together to build large, complex minds that no single author understands. Only by authors publishing server-side programs in an open, public system on the Internet, we argue, can such massive, long-distance collaboration become feasible. Hence the term, the "World-Wide-Mind", referring to the fact that the mind may be physically distributed across the world. For a short introduction to the WWM idea see [6] or [7].
Under the WWM view, MGonz can be seen as one of the first times (if not the very first time) that one could access a remote mind server on the network in real-time.
In any branch of AI, the existence of objective tests that cannot be argued with tends to provide a major impetus to research. This is one of the reasons for the popularity of rule-based games in AI and, more recently, robotic soccer. With sport and games, you can prove that one system is better than another. One of the problems with many branches of AI, as discussed further in [6], is the difficulty of re-implementing someone else's test problem and re-running their experiments. Much of this difficulty is caused by the need for local installation. The WWM aims to address this by having test problems remotely usable online. Many standard tests will emerge, it is imagined, and in an open, public system we will have repeated, objective, third-party comparisons of solutions, so that progress can be made.
The Loebner Prize Competition has a point: competitions spur progress. Here we argue that AI needs more competitions than just the Turing Test. In fact, it needs a generic protocol by which people can set up new competitions (new remote WWM "world" servers online).
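To make the idea of generic, remotely usable competitions concrete, here is a purely illustrative sketch of the kind of query loop such a protocol might support. This is not the actual WWM protocol (see [6] or [7] for that); the class names, methods and the plain state/action strings are all assumptions:

    # Illustrative only: a hypothetical world-server / mind-server query loop.

    class WorldServer:
        """Stands in for a remote 'world' server hosting a test problem."""
        def __init__(self):
            self.steps, self.score = 0, 0
        def get_state(self) -> str:
            return f"step {self.steps}"
        def take_action(self, action: str) -> None:
            self.steps += 1
            self.score += 1 if action == "act" else 0
        def finished(self) -> bool:
            return self.steps >= 10

    class MindServer:
        """Stands in for a remote 'mind' server written by a different author."""
        def get_action(self, state: str) -> str:
            return "act"     # a trivial policy

    # Any third party could run any mind against any world and publish the score.
    world, mind = WorldServer(), MindServer()
    while not world.finished():
        world.take_action(mind.get_action(world.get_state()))
    print("score:", world.score)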
An obvious objection is that no single researcher could ever understand a mind built this way. The WWM answer to this is "Yes. No single human mind will ever understand the mind. But humanity as a whole will." No single human mind can understand the entire corpus of science. We passed the point long ago (perhaps in the 17th century) when one person could understand all of mathematics and science. But humanity as a whole does understand all of these things.
To date, AI has been held back, we argue, by the need for a single lab, even a single researcher, to fully understand the components of the system. As a result, only small minds have been built so far. The WWM argues that we must give up this dream of full understanding as we build more and more complex systems. And giving up this dream of full understanding is not a strange thing to do. It is what has always happened in other fields. It is how humanity has made its most complex things.
Acknowledgements
Thanks to Dermot Williams for originally showing me how to modify CHATDISC. Thanks to John Murphy for useful discussions on whether a single scientist could understand the mind.
References