Some issues at the "edge" of science
are really part of philosophy rather than empirical science.
These issues are debated endlessly, but at this stage in history it is hard to "prove" who is right.
Maybe later in history we will know.
One such area is "How does the brain work?" and related questions like:
Is there a way the brain works?
Is the brain a machine?
What is consciousness?
Can we make intelligent machines? Is AI possible?
If AI is not possible, how far can it go?
If AI is possible, should we do it?
Will intelligent machines eventually replace us?
Names
Philosophy of AI is a history of "big names".
The debates are fun to watch.
Here are some big names and my take on them.
You don't have to agree with me.
Philosophy (in science) is about ideas
where we usually do not yet know enough to "prove" people right or wrong.
Alan Turing
Summary: Stop debating what intelligence means.
If you can't tell the difference, it is intelligent.
I say: Has a point,
but the Turing Test is easy to pass.
And the question of
trickery and fakery still remains,
and it is a legitimate one.
And it's all very language-oriented.
Hubert Dreyfus
His books: What Computers Can't Do (1972)
and What Computers Still Can't Do (1992).
Summary: AI machines only look intelligent
because they are programmed to output their meaningless tokens
as English words.
They have no idea what they are saying.
He seems at times to say AI is impossible.
I say: Fair criticism of much of classic symbolic AI.
Doesn't apply to the newer, statistical approaches.
Dreyfus leads us to the Symbol-grounding problem.
John Searle
Summary: AI is impossible.
Instantiate your algorithm as a roomful
of 1 billion people passing meaningless tokens around.
You're telling me you can have a conversation in English
with China, yet not one of the Chinese understands English?
Roger Penrose
Summary: AI is impossible because of
Gödel's theorem (1931)
on the limits of logical systems.
There are propositions that we can see are true
but that the AI's logical system cannot prove.
I say: Any working AI would not be a logical
(truth-preserving) system.
It would be stochastic, statistical.
See My comments on Penrose.
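For reference, here is a standard modern statement of Gödel's first incompleteness theorem (my formulation and notation, not Penrose's wording): if $F$ is a consistent, effectively axiomatized formal system that can express basic arithmetic, then there is a sentence $G_F$ such that

$$ F \nvdash G_F \quad \text{and} \quad F \nvdash \neg G_F $$

The Penrose-style claim is that a human can nevertheless "see" that $G_F$ is true. A standard reply is that this requires knowing that $F$ is consistent, which (by Gödel's second theorem) $F$ itself cannot prove either.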
Hans Moravec
Summary: AI is coming, and humans will go extinct.
And that won't necessarily be bad.
AIs will be our inheritors.
I say: I like a lot of Moravec, but I have doubts about this.
First, who's to say we won't become AIs ourselves?
Second, who's going to "mop up" the humans who don't
co-operate with this "evolutionary inevitability"?
Evolution is not in charge now. We are.
And the only way humans will go extinct is by genocide.
I say: We have hundreds of years to adjust.
AI is hard, and even if machines do get intentions,
life will be much, much harder for them than for us.
What we are really doing is not creating another species
but understanding how we ourselves work,
so we can mess around with it.
Long term,
this is about immortality, not extinction.
Stuart Russell
Summary: A more recent book on the possible dangers of AI, by an AI researcher.
Russell is involved with
the movement to ban the manufacture of autonomous (decision-making) weapons.
See the
Campaign to Stop Killer Robots.
And their film
Slaughterbots (2017).
The movie her (2013).
Turing would have loved this.
Is this passing a Turing Test? Or is it real?
The film
Slaughterbots (2017).
Stuart Russell and the Campaign to Stop Killer Robots warn about the future of AI.
Here, advances in AI produce
swarms of miniature killer drones,
meant to change warfare,
but which lead to a breakdown of society.
How about souls?
The problem of what we are and what is our core being has been discussed for thousands of years.
Most religions and most humans throughout history
have believed in some "essence" or "soul" or "spirit"
as the core of what we are.
What do scientists think?
Very few scientists propose such ideas.
Most scientists are materialists:
The core of what you are is matter,
specifically your brain, nervous system and body.
One reason is the origin of life and the origin of humans:
A materialist theory can explain our slow evolution from non-humans.
Another reason is that we have a suspect for the seat of intelligence and the rest of the mind:
the brain.
The brain seems to be
the most complex object known in the universe.
If it were a simple thing, then more scientists would look for a seat of intelligence elsewhere.
There have been attempts to construct a scientific argument
that the brain is not enough.
But none are widely accepted in science.
Another issue is that any "soul" or "spirit" must have causal effects
on your brain/body.
So at what point does it interact?
In the brain, should we expect to see uncaused causes?
Neurons firing for no clear physical cause?
Is this what happens?
Maybe, but no one has shown that yet.
Q. Could we ever prove this?
Could AI and Cognitive Science prove that we are material?
A. It would be very hard to prove.
If we invent AIs, they clearly have no soul.
But that wouldn't mean that we don't have one.
The Mind's I,
Douglas R. Hofstadter and Daniel C. Dennett, 1981.
DCU Library, 155.2.
A mind-bending collection of essays exploring the possibilities
of Strong AI. If Strong AI were true, could you be immortal?
Could you copy brains?
See chapters.
Darwin's Dangerous Idea,
Daniel C. Dennett, 1995.
DCU Library, 146.7.
Makes the case for Strong AI,
embedding it firmly in a biological world view.
Argues that Strong AI is just the consequence of
ordinary scientific materialism,
and that any alternative had better fit into evolutionary materialism as well
as AI does.