Gödel’s incompleteness theorems

Elias asked:

A question similar to this one has already been asked by someone else, however it was in my opinion not answered in the original spirit of the question.

Gödel’s incompleteness theorems, as far as I understood them, showed that no system of human logic can prove its own consistency.

This is also obvious to common sense because every logic accessible to mankind requires a reason for everything it postulates. Therefore it also needs a reason for its own laws to be true, which cannot be given based on those laws, since those laws have to be established, i.e. reasoned to be true and existing, first.

So it seems to me that when one tries to explain reality by human logic one must conclude that there is at least one ‘Something’ (in the broadest sense of the word) which is illogical in the sense that it is not bound by the laws of human logic and therefore does not require a cause or a reason for it to be. Since this ‘Something’ is illogical and humans have only logic and empirical observation to describe reality (or guess on reality) no description of this ‘Something’ is possible for us (unless we observe it empirically).

This would show that human logic can never explain reality, i.e. answer the question ‘Why is there Something?’, and that either (A) we conclude that there is ‘Something’ transcending logic, which would not be far away from the concept of god, or (B) that human logic is initially flawed (since it is not consistent) and we therefore cannot know anything. Is this reasoning correct?

Answer by Shaun Williamson

Elias, you have the wrong idea about all of this. Logic is not meant to explain the world; it simply acts as an explanation of our concept of a logically valid argument.

The theorem you are talking about is called ‘Gödel’s Incompleteness Theorem’; it is not called ‘Gödel’s Inconsistency Theorem’. The Incompleteness Theorem does NOT apply to our systems of logic; it applies to formal systems of mathematics rich enough to express arithmetic. Our basic systems of logic (propositional and first-order logic) can be proved to be both consistent and complete.

The problem in mathematics arises because our language allows us to make self-referential statements. So, for example, the sentence ‘This sentence contains five words’ is both a true statement and a sentence that talks about itself. Gödel was able to formalise this idea of self-reference to show that any consistent axiomatic system of mathematics rich enough to express arithmetic must be incomplete, and that no such system can prove its own consistency from within. Logic doesn’t suffer from the same problem.

An alternative route to the incompleteness theorem is contained in Alan Turing’s paper ‘On Computable Numbers, with an Application to the Entscheidungsproblem’. The American logician Alonzo Church independently reached a closely related result.

Hope this isn’t too confusing. The main points to keep in mind are:

1. Logic isn’t meant to explain reality.

2. Our systems of logic are provably consistent and complete.

3. We can never prove that our system of mathematics is both consistent and complete unless we restrict it in some way, e.g. by not allowing self-referential statements in mathematics.

4. We can know lots of things, given the ordinary meaning of the word ‘know’. However, we can also ask questions that we don’t know how to answer and that may not even make sense. It is humans who ask questions, and if you ask a question you must be prepared to explain in detail why the question makes sense and what sort of answer you would accept. I am not sure that the question ‘Why is there something?’ makes any sense.


Answer by Peter Jones

I find your reasoning mostly correct. By a similar process of reasoning Kant and Hegel are led to the idea that both the extended universe and human consciousness require an original phenomenon that is not an instance of a category. The categories of thought cannot reach all the way down for the reasons that you give, and so cannot be fundamental. Hegel calls your logically necessary ultimate phenomenon a ‘spiritual unity’. Here the term ‘unity’ would indicate that in no case is it ‘this’ or ‘that’, and it is therefore beyond the reach of logic and the intellect.

However, it would not be ‘illogical’. It would transcend the ordinary world of duality where everything is always ‘this’ or ‘that’, but it would be sound logic that leads us to this conclusion. The importance placed on an understanding of dialectic logic in the Buddhist universities can be explained by the ability of logic not only to refute false views but also to betray its own incompleteness. If we cannot accept your reasoning, and thus the limits of bivalent logic, then we cannot make a systematic theory of the world fundamental. We would always have to leave something out for exactly the reasons you give. This is Russell’s paradox, the reason why he could not axiomatise set theory. The attempt to make any bivalent logic fundamental leads to intractable problems of self-reference.

The question of whether we can acquire certain knowledge by the use of logic is easy. Logic produces only relative truths and falsities. This need not be a pessimistic view of knowledge, however, for it is possible to know things without any dependence on logic. You might know you are in pain, for instance. For the third time today I’ll quote Aristotle’s crucial observation that ‘true knowledge is identical with its object.’ No mention of logic.

As you say, this analysis of the limits of logic leads us to the idea of something that seems rather like God. But it can be seen that where philosophers undertake this analysis they usually call this phenomenon by a different name such as Tao, Nirvana, Ultimate Reality, Unity, Godhead, Bliss, the Authentic, the Undifferentiated, the First or Original. Plotinus uses the term ‘Simplex’, which indicates its lack of conceptual complexity, the idea that it lies beyond the categories of thought, or, in the words of one Christian mystic, ‘beyond the coincidence of contradictories’.

But this would not mean we cannot know it. We can know it intimately if we are it, and we must be if, as logic suggests, Reality is unified at an ultimate level. For this view we need not abandon logic, but we would need a logic of contradictory complementarity. Hegel’s idea of ‘sublation’ is important here. If you are mathematically-minded then for an example of how such a logic would work you might like to read George Spencer Brown, who solved Russell’s Paradox in his book ‘Laws of Form’.

Your final question asks whether human logic is flawed because it is not consistent. I would say not. It can be consistent just as long as we do not imagine it is complete. It is only when we imagine (in either ontology or epistemology) that the duality required by the functioning of our intellect reaches all the way down that our logic becomes inconsistent and flawed. This was Russell’s problem. He did not know (or want to know) religion well, so did not have the principle of ‘nonduality’ at his disposal and could not transcend dualism for his philosophy or mathematics. If, however, we accept that the universe is not the set of all sets, which would be a paradoxical idea, but something Kantian that is entirely beyond sets, then our system of logic can be consistent from the ground up, as is demonstrated by Brown with his ‘calculus of indications’.

If you wish to pursue these issues then a directly relevant experimental essay is ‘Exploring Connections: Music, Cosmology and Mathematics’ at http://theworldknot.wordpress.com/


Denying the consequent

Meg asked:

If you pass the test, then you’ll get an A for the course.
You didn’t get an A for this course.
Therefore you didn’t pass the test.

In this argument, we are ______ the consequent.

Answer by Craig Skinner

We are DENYING the consequent.

We are dealing here with a Conditional (If X then Y: expressed in symbolic logic as X–>Y).

X is the ANTECEDENT, Y is the CONSEQUENT.

Conditionals yield four arguments in classical logic, two valid and two invalid (fallacies):

1. AFFIRMING the ANTECEDENT.
X–>Y
X is the case
Hence Y is the case
Valid

2. AFFIRMING the CONSEQUENT.
X–>Y
Y is the case
Hence X is the case
Invalid (Fallacy of Affirming the Consequent)

3. DENYING the ANTECEDENT
X–>Y
X is not the case
Hence Y is not the case
Invalid (Fallacy of Denying the Antecedent)

4. DENYING the CONSEQUENT
X–>Y
Y is not the case
Hence X is not the case
Valid
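The validity claims for these four forms can be checked mechanically by exhaustive truth-table search: a form is valid just in case no assignment of truth values makes all its premises true and its conclusion false. A minimal Python sketch (the helper names `implies` and `valid` are mine, for illustration):

```python
from itertools import product

def implies(x, y):
    # The material conditional X->Y is false only when X is true and Y is false.
    return (not x) or y

def valid(premises, conclusion):
    # Valid iff the conclusion holds on every assignment that satisfies all premises.
    return all(
        conclusion(x, y)
        for x, y in product([True, False], repeat=2)
        if all(p(x, y) for p in premises)
    )

# 1. Affirming the antecedent (modus ponens): X->Y, X, hence Y
print(valid([implies, lambda x, y: x], lambda x, y: y))          # True (valid)
# 2. Affirming the consequent: X->Y, Y, hence X
print(valid([implies, lambda x, y: y], lambda x, y: x))          # False (fallacy)
# 3. Denying the antecedent: X->Y, not X, hence not Y
print(valid([implies, lambda x, y: not x], lambda x, y: not y))  # False (fallacy)
# 4. Denying the consequent (modus tollens): X->Y, not Y, hence not X
print(valid([implies, lambda x, y: not y], lambda x, y: not x))  # True (valid)
```

The counterexample rows are exactly the ones Skinner describes below: for form 2, X false and Y true (an A earned some other way); likewise for form 3.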

Running through each using your example.

1. You pass the test, so, as the conditional says, you’ll get an A.

2. You get an A, but this could be due to other good results even though you failed the test, so it doesn’t follow that you passed the test. There are other ways of getting an A; passing the test is just one of them.

3. Same as 2. You fail the test. Fine, there are other ways to get an A.

4. Same as 1. If you pass the test you get an A. But you haven’t got an A. So you can’t have passed the test.

The Principle that Denying the Consequent entails Denying the Antecedent (your example, and 4. above) has the Latin name ‘Modus Tollens’ meaning ‘Way that Denies’.

The Principle that Affirming the Antecedent entails Affirming the Consequent (1. above) has the Latin name ‘Modus Ponens’ meaning ‘Way that Affirms’.

These Principles were, I think, first explicitly stated by the Stoics.


Would ‘I’ still exist if I was conceived one hour later?

Per asked:

Dear experts,

I have been wondering for quite some time now. If I was conceived say a day or just an hour later than I actually was, what would become of ‘me’?

Answer by Craig Skinner

Wonder no more.

There would be no you. There would be somebody else, with your name (if of your gender), and he/she might be wondering what would become of him/her if conceived a day or an hour earlier.

The essential feature which individuates each of us as a particular human being is being the product of a particular sperm and ovum, thereby conferring metaphysical uniqueness, and genetic too (or co-uniqueness for identical twins). A being produced from a different sperm or egg would be somebody else.

So, YOU could not have been conceived at a different time or by a different parent.

This illustrates the huge improbability of your (or my) existence. Had a different sperm out of the millions competing to penetrate the ovum been successful on that fateful occasion, or had your father been away on business on the day you were conceived, you would not exist. Consider also the huge fluke that your mother and father chose each other from all the alternative mates available to each. And it’s mind-boggling to think that not a single one of your millions of forebears, going back over 3 billion years, failed to reproduce. If just one of your myriad fishy ancestors had been eaten by a bigger fish when young, you would not exist. But, as with the national lottery: given that the jackpot has to be won, somebody wins it against odds of millions to one; so, given that you exist, somebody has to be you.

The uniqueness of each of us is stressed by Derek Parfit in his analysis of the effect of new policies on future generations. We often hear how future people will be adversely affected by our actions. But it is very doubtful that anybody will be affected. Changed policies alter behaviour, often in subtle ways: people marry later, or have children later, move around the country more, and so on. And the upshot is that after two or three generations, all the people being born would not have existed under the old policies, whilst all those who would have been born had the change not been introduced don’t get born. So nobody can be adversely affected, no matter how bad the future world is. Future people benefit by existing when they otherwise wouldn’t. And those who would have been born under a no-change policy suffer no adverse effect, since nonexistent people can’t be affected in any way.

And the metaphysical uniqueness view is recognized in most accounts of possible worlds. Thus each of us is world-bound (to the actual world) and couldn’t exist in any other possible world. So that if I say that I might have been a good philosopher, putting it as ‘there is a possible world in which I am a good philosopher’, the person who is the good philosopher in that world is not me but my COUNTERPART.


Claims about what ‘most’ philosophers believe

David asked:

I’ve noticed that in some of the answers given here, there are sentences of the form ‘few philosophers now believe in Plato’s forms/ Moore’s intuitions/ the tooth fairy’. These survey claims are interesting. What weight, if any, should be attached to such preponderance of philosophical opinion, when thinking about the facts?

Answer by Geoffrey Klempner

Yesterday’s BBC TV News was dominated by the repercussions of the British Gas price hike of 8 per cent. Few energy consumers felt that this was justified. Or did they? The situation wasn’t helped by the calamitously bad decision by British Gas to give their customers the opportunity to vent their feelings online via Twitter — gold to BBC news editors, who picked out the choicest (and rudest) comments. I could have told British Gas this would happen, and so could you. How do we know something that the management of British Gas didn’t (apparently) know?

As a comparative outsider in relation to English-speaking academic philosophy (comparative, because I still consider myself to be working in the broad English-speaking analytic tradition) I am not very well placed to make observations about what ‘few’ or ‘many’ academic philosophers believe. What are the current views about the analytic/ synthetic distinction? Did Quine win the argument, or lose? I’m not sure. My own view on this doesn’t count for a lot, or at least, not as much as the view, e.g. of Saul Kripke or John McDowell or Tyler Burge. Ask them. (I have my view, which I’m saving for another occasion.)

Plato’s Forms is an interesting example. Iris Murdoch offers a robust defence of (something like) the Platonic view in The Sovereignty of Good. If you asked me to explain further, I would say that Murdoch doesn’t believe in the literal existence of Forms as metaphysical entities. Her concern is to oppose subjectivism about ‘the Good’. But what does that mean? Does Plato believe in the literal existence of the Forms, or is it just a ‘useful myth’ (consider his ‘theory of recollection’, for example)? When it gets to issues like Plato’s Forms (or Moore’s ‘intuitions’ about what is Good, another nicely chosen example) there isn’t a clear answer in terms of ‘belief’, because the position we are discussing is deep — has hidden depths, you could say.

There is a criticism one could make that academic philosophers generally are rather too quick to offer their views about what ‘most’ of their brethren believe. But there I go again: how do I know that? It’s an impression. I wouldn’t call it knowledge. So have I the right to make that statement? Here’s where we get to the nub of the question. Making an assertion implies that you know. If you’re not sure, if you are only guessing, or expressing a feeling, then you should qualify your statement accordingly. But who does that? In everyday life, we don’t. It’s called idle chatter. The same applies to philosophers who indulge themselves in that manner.


Moral dilemma over career vs family

Mehdi asked:

What is the best way to resolve a moral dilemma?

I am facing the greatest moral dilemma of my life; both sides of the dilemma have great material and emotional implications.

At one side there is chasing my lifelong dream of living independently in a free country and having the chance of being financially successful by emigrating and starting my own business there but at the cost of leaving my parents alone in their old age which gives me a massive crisis of conscience.

On the other side of the dilemma is to stay and forget about my dreams, but avoiding the bad conscience and having a boring but easy and well-paid job in an oppressive and not so civilised Middle Eastern society. My parents are quite well-off, so by leaving them I will not be troubling them financially but emotionally. My only sibling left the country eight years ago and they really only have me.

Answer by Craig Skinner

You certainly face a moral conflict.

Either choice entails an inevitable loss. In the end, you just have to plump for one or the other, live with it, and, whatever happens, never say you should have made the other choice, the reason being that you can’t know whether the outcome would thereby have been any better.

In Kantian terms, you are torn between a duty to others (looking after parents) and a duty to yourself (realizing your potential using your talents). In Aristotelian terms, between two different ways of flourishing. In existential terms, between making yourself one or other sort of person.

Your conflict reminds me a bit of Sartre’s example of the young man torn between joining the Resistance against the Nazi occupiers who had killed his brother, and staying with his old mother whose only consolation he was. Sartre emphasises the radical freedom we have to make choices, the need to make some or other choice, the absence of a clear right or wrong choice, the total responsibility we have for the effects of our choices, the absence of any person, god or principle that can decide for us, the loneliness of the situation. It seems you feel all these things.

You don’t say what your parents think, or thought when your sibling left. Maybe they say or hint that you should stay, and perhaps this adds sadness or anger to your feelings, that they are selfish. Perhaps they say nothing, and this makes you feel guilty. Perhaps they urge you to go, wanting their beloved child to make his/her way in the world according to his/her own lights, and this just makes you feel more guilty.

Good luck whatever you choose.

Finally two general philosophical points, the first ethical, the second technical.

1. Whilst duty to family and freedom to live one’s own life both feature in Western and Eastern moral codes, the ranking of the two differs. In the West, individual self-determination usually comes first, and many Western people wouldn’t see a MORAL conflict in your situation. To be sure, those emigrating would miss their parents, but equally would miss their friends, and this can be a sadness, but without being a moral matter. I know a few old couples whose children are making their lives on the other side of the world, and they speak proudly about their children’s success, are sad never or rarely to see them, but don’t make a moral judgment about it. In Eastern cultures, duty to family ranks more highly than in the West. Also, historically in the West, and still so to some extent nowadays, more expectation is put on women than on men to look after the old or needy, but I don’t know whether that is so in your country.

2. Whilst there are moral conflicts, I don’t think that, strictly, there are moral dilemmas. According to the standard formulation, a moral dilemma requires:

(a) each of two actions is morally required
(b) neither requirement overrides the other
(c) I can do either action but not both
(d) there is inevitable moral failure

When you measure actual conflicts against these standards, none is a dilemma. Two examples will illustrate.

In Sophie’s Choice, there is a moral requirement that she protect both her children. But this, the only moral action she could take, is denied her by the cruel prison guard. So a choice to save one or other child, though heartbreaking, is not a moral choice; she may as well toss a coin to decide. And if she does save one child, she is not guilty of a moral failure (though she would feel she was all her life).

In your conflict, the choice is a moral one, there is an inevitable moral cost, but, unlike the options put to Sophie, neither of your options is morally required (mandatory), only morally desirable.

More formally, there is a proof that a moral dilemma, as defined above, together with the Principle of Deontic Consistency (the same action can’t be both obligatory and forbidden) and the Principle of Deontic Logic (if, necessarily, doing A yields B, then if A is obligatory, B is obligatory), yields a contradiction. So, to hold on to moral dilemmas, you must give up one or both Principles (unpalatable), or change the definition of dilemma, but this then blurs any distinction between dilemma and conflict. I find it simplest and least confusing just to refer to moral conflicts.
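That proof can be sketched in standard deontic notation, where OA means ‘A is obligatory’ and the box means ‘necessarily’ (the formalisation here is mine, following the standard presentation of the argument):

```latex
\begin{align*}
&1.\ OA \text{ and } OB          && \text{(a), (b): both actions required, neither overridden}\\
&2.\ \Box(A \rightarrow \lnot B) && \text{(c): doing $A$ necessarily precludes doing $B$}\\
&3.\ OA \rightarrow O\lnot B     && \text{from 2, by the Principle of Deontic Logic}\\
&4.\ O\lnot B                    && \text{from 1 and 3}\\
&5.\ \lnot(OB \wedge O\lnot B)   && \text{Principle of Deontic Consistency}\\
&6.\ OB \wedge O\lnot B          && \text{from 1 and 4, contradicting 5}
\end{align*}
```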


Heraclitus on change revisited

Lauren asked:

I have a question in my textbook that I was wondering if you could help. The question is:

How would Heraclitus have responded to the following statement? ‘Heraclitus’ theory is wrong because the objects we see around us continue to endure throughout time; although a person, an animal or plant may change its superficial qualities, it still remains essentially the same person, animal or plant throughout these changes. In fact, we recognize change only by contrasting it to the underlying permanence of things. So permanence, not change, is essential to reality.’

Answer by Helier Robinson

There is a fundamental principle that any qualitative difference entails a quantitative difference. It is easily proved, as follows: whatever A and B may be, if there is a qualitative difference between them then there is some quality, Q, such that A is Q and B is not-Q (or vice versa); if A and B are one, then one thing is at once Q and not-Q, which is impossible, so A and B are two. So Heraclitus would have responded that if something changes then the earlier thing is qualitatively different from the later thing and so the earlier and later things have to be two – in which case it is quite wrong to suppose that there is one, permanent, thing changing through time. (A change is a qualitative difference in parallel with a duration.) It does not matter how superficial the change is: the principle that qualitative difference entails quantitative difference applies to any qualitative change whatever.
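The principle invoked here is, in effect, the contrapositive of Leibniz’s indiscernibility of identicals. A minimal formalisation (the notation is mine, added for illustration):

```latex
% Indiscernibility of identicals: identical things share all qualities
A = B \;\rightarrow\; \forall Q\,\bigl(Q(A) \leftrightarrow Q(B)\bigr)

% Contrapositive: a qualitative difference entails that A and B are two
\exists Q\,\bigl(Q(A) \wedge \lnot Q(B)\bigr) \;\rightarrow\; A \neq B
```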

There is another point that Heraclitus could have made: how do you know that empirical objects continue to endure through time? This is only a belief. To be sure, it is a belief of common sense, and so held by most of us, but there is an opposing belief that empirical objects are not real objects, they are only images of real objects, and as such they exist only for as long as they are perceived. Esse est percipi as Bishop Berkeley said: to be is to be perceived; and empirical objects are perceived objects. This view arises because the only satisfactory explanation of illusions is that since they are unreal they have to be misrepresentations (or false images) of reality, not reality itself.

Heraclitus could also have said that we do not in fact recognise change by contrasting it to the underlying permanence of things, we perceive it directly. Look at clouds on a windy day: do you see them changing, or do you contrast them to their underlying (whatever that is) permanence?

It is worth noting in this context that Parmenides’ rejection of Heraclitus is based on the same principle that qualitative difference entails quantitative difference. For him, all change is illusion: nothing really changes, and only permanence is real.

And finally, this principle has some awkward consequences for philosophy. For example, your empirical world is qualitatively different from mine, so yours and mine must be two, they cannot be one. And by extending this argument, there must be as many empirical worlds as there are perceivers. And because each of these empirical worlds contains illusions, they are all qualitatively different from the real world, so none of them is the real world. It is perhaps because of such conclusions that you do not often find the principle that qualitative difference entails quantitative difference in philosophy textbooks.