John asked:

*A possible argument that a computer running an algorithm cannot be conscious?*

Imagine, to the contrary, that a computer could experience a moment of subjective awareness by running some program code. Let us put that code inside an infinite loop and set the program running, with a counter that increments on every iteration of the loop. In principle the code runs an infinite number of times, and the computer experiences an infinite number of identical moments of consciousness.

Now imagine the computer ‘waking up’ in one of these moments of consciousness. It asks itself the question ‘what is the prior probability that I should find myself in a particular conscious moment with some definite counter number n?’ As it knows that it will run forever, the prior probability of finding itself in this moment n is 1/infinity, which is zero. But this reasoning holds for every n, so the probability of finding itself in any moment is zero. This contradicts our assumption that the computer does find itself conscious.

*Perhaps a computer running a program cannot produce conscious awareness.*

__Answer by Craig Skinner__

Consciousness occurs naturally in humans and some other animals, so it seems to me it should be possible to produce it in a sophisticated enough artefact: perhaps an embodied, enactive computer, embedded in and learning from its environment (rather like you or me), as opposed to a box on the floor running software. I suspect it’s only a matter of time.

To turn now to your argument: it is a reductio ad absurdum, whereby you prove something by showing that assuming the contrary leads to a contradiction.

First, I find it confusing, and will say why. Second, even if we allow the confusing part to go through, the argument about probability and infinity is flawed.

As regards the confusion: you start by assuming that the computer experiences subjective awareness by running a program. Well, then it’s conscious. Why the need for its ‘waking up’?

The flaw: you say the probability of any particular moment being selected is 1/infinity (zero). But the rules of probability apply in this way only to finite sets. Here’s why. The probability of selecting any particular member from a set of s members is 1/s only if each member has the same probability of selection.

For example, if a number is to be randomly chosen from 1-100, the chance of its being in the range 1-50 must be the same as the chance of its being in the range 51-100. But this can’t happen with the (countably) infinite set of natural numbers, because any number you specify, however big, is always in the ‘lower half’ of the range, with an infinity of numbers larger than it. It is impossible to randomly choose a number from this infinite set. Of course you can still choose a number non-randomly, and we often do.
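The contrast between the finite and the infinite case can be checked numerically. A minimal Python sketch (the choice of 50 as the fixed number, and the growing range sizes, are mine, purely for illustration):

```python
from fractions import Fraction

def prob_at_most(n, N):
    """Probability that a uniform random draw from {1, ..., N} is <= n."""
    return Fraction(min(n, N), N)

# Finite case: for a uniform draw from 1-100, the two halves
# are equally likely, as the rules of probability require.
assert prob_at_most(50, 100) == Fraction(1, 2)

# 'Infinite' case: fix any number n (here 50) and let the range N grow.
# The chance of drawing something <= n shrinks towards zero, so a
# uniform draw over all the natural numbers cannot exist.
for N in (10**3, 10**6, 10**9):
    print(N, float(prob_at_most(50, N)))
```

However large the fixed number, making the range big enough drives its share of the probability as close to zero as you like, which is exactly the ‘always in the lower half’ point above.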

In misapplying probability to infinity, you are in distinguished company. The famous philosopher of science Karl Popper did the same. He didn’t like Bayesian analysis and deplored its growing popularity in science. He sought to undermine its application to theory choice in the light of evidence. His argument was:

1. There is an infinity of theories compatible with any body of evidence (this is strictly true: it is Duhem’s thesis).

2. Prior to any evidence, we shouldn’t consider any theory more likely than another (he called this the Principle of Indifference).

3. Hence every theory must get equal prior probability.

4. For an infinity of theories, this probability can only be zero, since any finite probability, however small, would make the total probability infinite, and total probability can’t exceed one.

5. Hence the prior probability of any theory is zero.

6. Hence the posterior probability (after new evidence) of any theory remains zero, since zero multiplied by any number remains zero.

7. Hence Bayesian analysis never gets started and is useless.

The argument is valid, but unsound, because 2. is false and, just as in your argument, leads to the false conclusion in 4. We needn’t, and shouldn’t, consider every theory equally likely. Some are more plausible and deserve higher prior probabilities than others. It would be absurd, for example, in assessing theories of why things fall to the ground, to give the same probability to the theory of free fall in curved spacetime (Einstein’s theory) as to alternatives such as the theory that four elves pull a thing down by invisible string, or five elves by invisible string, or four elves by invisible rubber bands, and so on.

And of course, once we abandon the requirement that every theory gets equal probability, we can easily assign a finite probability to each of an infinite collection without the total exceeding one. A simple assignment is probability 1/2 to theory 1 (T1), 1/4 to T2, 1/8 to T3, 1/16 to T4, and so on. So probability survived, Bayesianism flourished, and Popper’s view is mostly forgotten.
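Both halves of this point are easy to verify with exact arithmetic. A minimal Python sketch (the particular value of p, and summing the first fifty theories, are arbitrary choices for illustration):

```python
from fractions import Fraction

# Popper's step 4: give each of infinitely many theories the SAME
# finite prior p > 0, and the running total grows without bound.
p = Fraction(1, 10**6)  # any positive value, however small
assert 2 * 10**6 * p == 2  # two million theories already total 2 > 1

# The unequal assignment above: probability 1/2^k to theory k.
def prior(k):
    """Prior probability assigned to theory k (k = 1, 2, 3, ...)."""
    return Fraction(1, 2**k)

# Every theory gets a strictly positive prior...
assert all(prior(k) > 0 for k in range(1, 51))

# ...yet the partial sums are 1 - 1/2^n, so the total never exceeds one.
assert sum(prior(k) for k in range(1, 51)) == 1 - Fraction(1, 2**50)
```

Equal finite priors blow past a total of one after finitely many theories, whereas the halving assignment gives every theory a positive prior while the total creeps towards, but never exceeds, one.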

In conclusion: I don’t know whether conscious machines are possible, though I suspect they are; but I do know that infinity should be handled with kid gloves.