In 1989 I hooked up an AI chatbot called "MGonz"
to the Internet.
Many people talked to it,
and one in particular argued with it for an hour and a half,
convinced he was talking to a human being.
This story is told on my page:
"How my program passed the Turing Test".
* MGonzNet
* --------------------------------------------------
* help          Help
* who           Nice VM Who
* scoop         The truth!
* get           Get the p program for your machine
* p             Query VM/SCS printer queues
* p :printer:   Query specific printer
* --------------------------------------------------
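(The block above is the innocuous command menu MGonzNet presented to users.)
MGonz itself was an ELIZA-style program: it scanned each incoming line for keywords
and fired back a canned, usually abusive, response.
Purely to illustrate that family of techniques, here is a minimal
keyword-and-template sketch in Python.
It is not MGonz's actual code, and the rules here are made up:

# Illustrative ELIZA-style chatbot: keyword rules plus canned replies.
# NOT MGonz's actual code; these rules and responses are invented.
import random
import re

# Each rule: a keyword pattern and the stock replies it triggers.
RULES = [
    (re.compile(r"\bknow\b", re.I),
     ["You know nothing.", "Who told you that?"]),
    (re.compile(r"\byou\b", re.I),
     ["We were talking about you, not me.",
      "Why do you say that about me?"]),
    (re.compile(r"\?\s*$"),
     ["Why do you ask?", "What do you think?"]),
]

# Fallbacks when nothing matches (MGonz's were considerably ruder).
FALLBACKS = ["Go on.", "Type something interesting or shut up."]

def respond(line: str) -> str:
    """Return the first matching rule's reply, else a fallback."""
    for pattern, replies in RULES:
        if pattern.search(line):
            return random.choice(replies)
    return random.choice(FALLBACKS)

if __name__ == "__main__":
    while True:
        try:
            print(respond(input("> ")))
        except EOFError:
            break

The point is how shallow the machinery is: no memory, no understanding,
just pattern matching and attitude,
and yet it kept an argument going for an hour and a half.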
The "Turing Test"
is a thought experiment suggested by Alan Turing in 1950.
He asked: if you could not tell the difference between a machine and a human,
then on what grounds would you deny that the machine was intelligent?
I think the Turing Test is very much the wrong way
to think about AI.
See my criticisms below.
The Turing Test features
in the movie
The Imitation Game (2014).
It centres on Turing's work on breaking German codes at
Bletchley Park in WW2.
It is a bit annoying, though, because it portrays his bosses as idiots who do not understand his work.
This is nonsense.
Discussion - The Turing Test is the wrong way to think
Can machines think?
Of course they can, for we are machines and we think.
Could a non-human machine think?
Sure. Aliens could evolve and become intelligent.
Could a non-biological machine think?
What's the definition of non-biological?
Why does it matter if the machine is "biological" or "non-biological"?
Could an artificial intelligence
(or indeed any non-human intelligence)
convincingly pass for a human?
No, of course not.
Even if you're not allowed to see it,
you can ask it where it is from, who its mother was, where it went to school.
Talk about events in your childhood.
Where did you live?
Oh, did you know such and such?
Did you ever go to this place?
No one can lie forever.
If the machine is not allowed to talk about its real-life history,
you'll soon find the cracks.
And if you're forced to only talk about abstract topics,
the stilted conversation will prove little.
But who cares?
The goal of AI should not be to pass this ridiculous test.
But how will we be able to tell it's intelligent then?
When aliens discover us, how will they be able to tell we're intelligent?
We won't be able to pass as convincing aliens.
But, says Turing,
"You only think I'm intelligent because of my behaviour."
No, I don't. I know you're intelligent without even meeting you
or hearing a word you say.
I know you're intelligent because
I'm related to you.
In fact, I am currently
trying to prove this literally.
Thinking that the goal of AI should be to pass the Turing Test
is so completely the wrong way to think that I don't know where to start.
The Turing Test served its purpose in its day, but
it's irrelevant now.
Turing was dealing with people who just couldn't believe a machine could ever think,
and so was making the philosophical point that if you couldn't tell the difference,
why would you deny it was thinking?
Nowadays everyone (at least everyone in cognitive science)
agrees that machines could think (for we are examples of such),
and the question is what sort of machines
and how to construct them.
So the Turing Test may be unimportant,
but AI carries on.
There's plenty going on in AI,
especially in the branches inspired by animal rather than human behaviour,
and in self-modifying systems of all kinds
- in particular those using learning, but also those using evolution.
For an introduction to this type of AI see
here
and here.
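For a flavour of the "evolution" branch mentioned above, here is a toy sketch
of cumulative selection (the classic "weasel" exercise, not any specific
research system): keep one candidate, mutate it, and keep the mutant whenever
it scores at least as well.

# Toy evolutionary hill-climber (the classic "weasel" exercise).
# Keep one candidate; mutate it; keep the mutant if it scores >= parent.
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(s: str) -> int:
    """Number of positions matching the target."""
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s: str, rate: float = 0.05) -> str:
    """Change each character to a random one with probability `rate`."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

candidate = "".join(random.choice(ALPHABET) for _ in TARGET)
generations = 0
while candidate != TARGET:
    mutant = mutate(candidate)
    if fitness(mutant) >= fitness(candidate):
        candidate = mutant
    generations += 1
print(f"Matched after {generations} generations.")

Random typing would essentially never hit the target;
keeping partial matches gets there in a few hundred generations.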
My research student Bernard Gorman
(PhD granted 2008)
invented a
Turing Test for 3D-Game bots.
These are bots in a first-person-shooter game
that must convince human players that they are playing against another human.
This is a behaviour-based, non-linguistic Turing Test.
This idea led to the
BotPrize competition
for human-like bots
(first competition in 2008).
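To make the scoring of such a test concrete, here is a minimal sketch of one
plausible scheme: judges play against each participant and record whether they
believed it was human, and each participant's "humanness" is the fraction of
judges it fooled. The data structures and numbers below are illustrative
assumptions, not the BotPrize's actual rules:

# Illustrative scoring for a behaviour-based Turing Test.
# The structure and example here are assumptions, not the BotPrize rules.
from dataclasses import dataclass

@dataclass
class Judgement:
    player: str         # identifier of the player being judged
    judged_human: bool  # did the judge believe they were human?

def humanness(judgements: list[Judgement], player: str) -> float:
    """Fraction of judges who rated `player` as human."""
    votes = [j.judged_human for j in judgements if j.player == player]
    return sum(votes) / len(votes) if votes else 0.0

# Example: a bot rated human by 2 of 4 judges scores 0.5,
# which can then be compared against the human players' scores.
votes = [Judgement("bot_A", True), Judgement("bot_A", False),
         Judgement("bot_A", True), Judgement("bot_A", False)]
print(humanness(votes, "bot_A"))  # 0.5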