Philosophical zombies

Jimi asked:

If philosophical zombies existed, would they talk about consciousness just like we do? Or, if you tried to talk to a philosophical zombie about consciousness, would they be fundamentally unable to understand what you mean? If the latter is true, would we then be able to tell if others were conscious just by talking about it?

One might argue that what a philosophical zombie ascribes consciousness to differs from what we do. They 'think' that a certain physical phenomenon is what we call consciousness, but they aren't experiencing it as we are. This explanation is unsatisfying to me because it's impossible to pinpoint what exactly they're ascribing consciousness to.

Another possible explanation is that the philosophical zombie is physically 'programmed' to have a discussion about consciousness. But if that were true, then in a world of only p-zombies, where would the concept even come from? It would seem pretty random that consciousness, something so abstract and otherwise unheard of, would be such a common topic of discussion among p-zombies.

Answer by Jürgen Lawrenz

The definition of a philosophical zombie includes their readiness to speak of consciousness, 'although they don't have any consciousness'. That seems to answer your first question. However, p-zombies are (or should be) classed as thought experiments. No-one expects a p-zombie to be found walking on the streets. At bottom, this kind of philosophical enquiry revolves around the problem that we humans also speak of consciousness as if it were all plain sailing. But we don't really know what it is. So like a p-zombie we go around believing we have a consciousness of self. This leaves the door open for researchers to wonder if it's just a 'necessary delusion', as one writer put it.

Consciousness of self is an uncertain attribute in the sense that it inevitably refers to a residual 'I'. Therefore it relies wholly on ascription: the projection of the 'I' onto 'you' and 'they' as the outcome of a recognisable phenomenology. As a concept, however, 'self-consciousness' suffers from an underdetermined description: it is not actually possible to give an all-inclusive definition of it.

We know from Kant and Hegel that our consciousness of self exhibits a well-rounded phenomenology, that it has structure, and that there is an evolution behind it which combines the priming of our sensibility by the natural and, especially, the social habitat.

It is an altogether different issue when we take note of the claims of an AI industry which has a stake in such discussions. It might (and perhaps should) leave us with an uncomfortable sense that such claims presuppose a quantitative and instrumental view of consciousness of self, i.e. the tacit assumption that this feature is ultimately reducible to (thus far unknown and wholly indeterminable) chemically engendered energy relations among our neuronal assemblies.

This quantitative view is the supposition behind your word 'programmed'. That word brings huge problems into the argument, because it insinuates an intentional agent implanting a faculty into zombies. You can see at once that this would have to be a human or superhuman intentional being. It is an unsatisfactory proposition either way, as the program devised by a software designer does not drive a brain but a logical inference device. A brain is only to a small extent a logical inference device: it has to cope with real-life situations, and it engenders the 'residual I' so as to facilitate physical navigation in the world and communicative interaction with other people. This cannot be reduced to rules, as the rules themselves emerge from unquantifiable situations.

The intervention of a superhuman agent does not solve the problem either, because consciousness is likewise unquantifiable, so the superhuman agent would have to possess a total overview. That is dubious on two counts: first, it runs into infinite regress (even for one person, let alone the population of the Earth, past and future); and second, the superhuman agent has no human-type experience, intuition or self-consciousness. So the whole issue is ultimately self-contradictory.

The only avenue towards a resolution of your question would seem to be a biological approach. Unfortunately we do nowhere near enough research in that direction. On the contrary, we are so obsessed with explaining consciousness as an emergent feature of physical or electronic processes that we are losing sight of the fundamental difference between intentional and 'programmed' (algorithm-driven) behaviour. Consequently we have nothing resembling a theory of intentionality, even though such a theory could conceivably explain consciousness of self as the outcome of many intentional agents creating a 'superintentionality' for a body, whether that body belongs to a human animal or to a corporate entity such as a colony of ants or bees. But this is only a hint at something which, in our scientific presumptuousness, we have hardly even looked at.
