Could an AI ever be a Philosopher?
Answer by Geoffrey Klempner
One might think that there is something rather strange about this question. An AI — an artificial intelligence — is by definition intelligent, and if any creature is intelligent, surely it can understand and grapple with the questions of philosophy, leaving aside the question whether it would want to.
Let’s first deal with the question of AI. There are basically two routes one can take. The first, which is at present the only subject of research, is to construct ever more complex programs — or alternatively connectionist networks — which approximate ever more closely the kind of verbal output that is indistinguishable by the Turing Test from that of a human being.
There are questions here about whether mimicry or simulation could ever be as good as the ‘real thing’ — to which the best answer (in my view) is that you need to give your AI ‘arms and legs’ (or the equivalent). A creature that has intelligence necessarily has desires as well as beliefs, and in order to have desires a great deal of physical structure is presupposed besides mere possession of a ‘computing organ’.
On this reading, maybe the first ‘genuine’ AI will have wheels instead of legs, maybe it will look more like Dr Who’s Daleks than a human being. But it will want things. It will have an agenda. When we talk to it, it will talk back because it wants to (because it is interested in us and what we have to say, even if only as a pleasant game to pass the time).
What could we talk about? Well, that’s the problem. This creature (I won’t call it a machine) has ‘desires’ that are largely incomprehensible to us. Perhaps we share intellectual curiosity, perhaps that’s enough for scientific collaboration or something similar. But that’s as far as it goes.
How about joining a Philosophy Department? Our AI would be a whizz at formal logic. However, my view, for what it is worth, is that to be motivated to philosophize one needs specifically human failings. (There’s some truth in the old joke: ‘My son is a Doctor of Philosophy.’ ‘What sort of illness is philosophy?!’)
Maybe our AI would turn out to have some or all of these ‘failings’ too, maybe not. There’s no way to be sure: we are so far from getting to the bottom of the source of the philosophical impulse that it is really impossible to say. To philosophize, you need to find, in Neo’s words, ‘something wrong with the world’. There is something wrong with the world because there is something wrong with us. That’s what the struggle to philosophize is ultimately about.
I said before that there were two possible routes to AI. The second hasn’t been tried yet, but I can’t see any logical objection to it. You start by replacing a single brain cell by a silicon substitute with identical input-output functions. I don’t want to minimize the monumental difficulty of this task, which is far beyond what present science can achieve. However, if this could be done, in principle, then by repeating the process you could create a substitute brain (and body too, with a human-like nervous system).
Why go to all the trouble? Biology is the best method we know of for growing a human being, but maybe in future human-like AIs could just be manufactured on a production line. Various materials go in at one end, and human replicants come out the other, just like automobiles. What would these human replicants lack? A human life. A childhood.
In principle, these could be built in too, by duplicating not just the function of the brain cells but their actual state at a given time. Then a replicant would walk away thinking that it was you, or me. In that case, your question is answered.
But I guess that’s not the answer you expected.
2 thoughts on “Could an AI ever be a philosopher?”
Correct me if I’m wrong, but your answer is built upon the functionalist approach to philosophy of mind, which states that mind is an epiphenomenon of brain activity, therefore if one can replicate the functions and states of a brain, its mind will be automatically replicated.
There are many challenges to this view, the strongest being the simple fact that to date no causal relation between brain and mind has been found in spite of enormous advancements in neuroscience.
Have you considered the answer to the question in case one of the alternative hypotheses is right, for example, Thomas Nagel’s argument that due to its subjective nature, consciousness is by definition inaccessible to physicalist reductionism?
I agree with Nagel. In fact, I would make a stronger claim that rejects physicalism and substance dualism: I might have not existed but someone exactly like me (same physical and mental properties) might have existed in my place. No theory (that I know of) can explain that.
However, one can still raise the question whether (a) a machine that mimicked human verbal behaviour could be constructed that was able to contribute usefully to philosophical debate, or (b) I might turn out to have replacement brain cells made of silicon (implanted by friendly aliens during the night) without my ever knowing.
In my answer I assume an agnostic view of physical reductionism. Leaving aside mimicry, any intelligence that is not human (alien, silicon based, or whatever) will have problems engaging with many of the kinds of philosophical questions that humans wrangle with.