Nagel’s example of the infant and the brain-damaged man

Wesley asked:

Explain Nagel’s example of the infant and the brain injured man.

Answer by Nathan Sinclair

I assume you mean Nagel’s imaginary example of a person who suffers brain damage and is turned into a ‘big baby’ (this is found in Nagel’s essay ‘Death’ in Mortal Questions).

In the example, someone painlessly suffers an injury which leaves them with the mental life (desires, beliefs, attitudes, etc.) of a three-year-old. Before the injury their life was pleasant enough; the injury itself was painless (suppose it happened while they were unconscious); and after the injury the person is cared for and treated well, and their desires are met about as well as most three-year-olds’: their nappy is changed, food is provided, and there are interesting toys to play with and pleasant, attentive carers.

The point is that at no stage does the person suffer an unpleasant experience, yet Nagel expects that we will find such an injury to be a disaster for the person who suffers it. If this is correct, then there are bad (and presumably good) events that can happen to people that do not involve bad or good experiences, or indeed any non-relational properties. Thus you cannot tell, just by how someone is in isolation, how well or badly off they are; you have to include their relationships to other people and things, even relationships of which they are unaware.

In short, what they don’t know (or mind) can hurt them.

 

Did Berkeley really ask the question about the tree in the forest?

Alvin asked:

Did Bishop Berkeley actually write the question: if a tree fell in the forest and no one was present to hear it, would there be a sound?

Answer by Tony Fahey

Yes Alvin, it seems that Bishop Berkeley did raise this question. It should be understood that George Berkeley’s central thesis is that all we can ever know about objects is merely the ideas we have of them. According to Berkeley there is no such entity as a physical world, or matter, in the sense of an independently existing object. Rather, all that we ordinarily call physical objects are actually collections of ideas in the mind. The appearances we experience are the very objects themselves, and the appearances are sensations or perceptions of a thinking being. His most famous saying is ‘esse est percipi’ – ‘to be is to be perceived’. According to this thesis, all the things surrounding us are nothing but our ideas. Sensible things have no existence distinct from their being perceived by us. This also applies to human bodies: when we see our bodies or move our limbs, we perceive only certain sensations in our consciousness.

Using a series of arguments concerning what philosophers often call the ‘veil of perception’, Berkeley argued that since we never perceive anything called ‘matter’, but only ideas, the view that there is a material substance lying behind and supporting these perceptions is untenable. For Berkeley everything was mind-dependent: if one cannot have an idea of something in the mind, then it fails to exist – hence his thesis ‘to be is to be perceived’. To those who objected that, if there were no material substrate behind our ideas, things could not persist when no one perceives them, Berkeley’s response was that all our perceptions are ideas produced for us by God. Thus, in response to the issue of the tree falling in the forest, Berkeley would take the view that since God has conceived the idea of such a scenario, God would hear it fall.

Perhaps I should add that one of the most ardent of Berkeley’s many critics was Dr Samuel Johnson, who is said to have famously refuted the eminent bishop’s theory of immaterialism by kicking a stone in the churchyard after one of Berkeley’s sermons, exclaiming ‘I refute it thus!’

 

Answer by Craig Skinner

Not exactly. He says something more radical, namely:

‘The objects of sense exist only when they are perceived: the trees therefore are in the garden, or the chairs in the parlour, no longer than while there is some body by to perceive them’ (A Treatise Concerning the Principles of Human Knowledge 45).

For Berkeley, ‘to be is to be perceived’. Everyday objects, such as trees, exist, but are not made of matter (there is no such thing). Rather they are ideas. They exist in God’s mind, and sometimes in our minds.

For those of us who think material trees do exist, a tree falling in a lonely forest will produce a sound defined as ‘vibrations in the air able to be heard’, and won’t produce a sound defined as ‘sensation produced by vibrations in the air impinging on an eardrum’.

My wife’s version is trickier:

‘If a man speaks in the forest and no one is present to hear him, is he still wrong?’

 

Is artificial intelligence possible?

Eddie asked:

Is artificial ‘intelligence’ possible? It seems to me that no matter how sophisticated and powerful a ‘thinking’ machine nowadays could be, it is still a mere network of electronic circuits, i.e. it only deals with specific inputs and then gives the corresponding outputs as programmed. It does not actually think but just strictly executes programmed instructions. There manifests no free will at all. Isn’t free will a necessary element of intelligence?

Answer by Stuart Burns

You ask an interesting set of questions, Eddie. The subject has attracted the attention of many philosophers, and of many workers in the computer industry. Many of these thinkers would agree with you – that it is impossible in principle for a computing device, no matter how sophisticated and powerful, to ‘think’ or to be ‘intelligent’. I had not, however, previously encountered the argument you make here, that ‘free will’ is a necessary element of intelligence.

But there are also many thinkers in this arena who would disagree with you. And I am one of those. Having spent 30 years of my working life programming computers of various sorts, and in my spare time pursuing my hobby of philosophy, I have learned to see in your comments four very challenging issues that must be addressed before any meaningful answer can be offered to your questions.

(1) The first of these issues is ‘Just what do you mean by ‘intelligence’?’ Rather than trying to provide a genus and species definition in terms of necessary and sufficient conditions that would identify ‘intelligence’, perhaps it would be easier to consider the issue in terms of ‘How would you recognize ‘intelligence’ if you encountered it?’ In 1950, Alan Turing provided an interesting idea known as the ‘Turing Test’. Suppose that you were closeted in one room, conversing with a pair of other conversationalists, each closeted in their own little rooms. It makes no difference how you imagine this conversation taking place, or where these two other conversationalists are located. How would you go about deciding whether either (or both, or neither) were ‘intelligent’?

Suppose, using whatever criteria might occur to you, you decide that at least one of these two conversationalists was ‘intelligent’. And suppose that you then discovered that this ‘intelligent’ conversationalist was in fact a computer. Would that not qualify as an ‘artificial intelligence’? If not, why not?

In order to pass a ‘Turing Test’, all a computer would have to demonstrate is an ability at free-ranging English conversation at a competence level equivalent to any flesh-and-blood person whom you would consider ‘intelligent’. If you imagine this conversation taking place by keyboard, we already have sophisticated computer programs that read and ‘understand’ (within strict limits) written English conversation. But with modern speech recognition and synthesis software, and image manipulation software, your imagination need not be limited to the keyboard.

The ELIZA program was written at MIT by Joseph Weizenbaum between 1964 and 1966. ELIZA emulates a Rogerian psychotherapist. But it has almost no intelligence whatsoever, only tricks like string substitution and canned responses based on keywords. Yet when the original ELIZA first appeared in the 1960s, some people actually mistook her for human. (You can still try talking to ELIZA; implementations are available online. The illusion of intelligence works best, however, if you limit your conversation to talking about yourself and your life. As I said, ELIZA is not very sophisticated.)
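The ‘tricks’ just described – keyword matching, string substitution and canned responses – can be sketched in a few lines of Python. This is an illustrative toy in the spirit of ELIZA, not Weizenbaum’s actual script; the rules and wording are invented:

```python
import re

# Toy Rogerian responder: match a keyword pattern, then reflect the
# user's own words back with first-person words swapped for second-person.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (re.compile(r"i feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.+)", re.I), "Tell me more about your {0}."),
]

CANNED = "Please go on."  # fallback when no keyword matches

def reflect(fragment):
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(utterance):
    """Return a canned-but-personalised reply to one line of input."""
    for pattern, template in RULES:
        m = pattern.search(utterance)
        if m:
            return template.format(reflect(m.group(1)))
    return CANNED
```

Even three rules and a fallback produce a passable imitation of attentive listening, which is precisely the point: the ‘intelligence’ is supplied by the reader, not the program.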

Since ELIZA first appeared, much has been learned about English conversation and computer programming. Modern versions of these programs (now known as ‘chatterbots’) continue to fool people. ‘CyberLover’, a malware program, preys on Internet users by convincing them to reveal information about their identities: it flirts with people seeking relationships online in order to collect their personal data. Much more sophisticated, and much more free-ranging, computer conversationalists are possible – although they are still very obviously not as competent in English as a young child (and we often classify a young child as ‘intelligent’). At the least, they demonstrate that convincingly passing the Turing Test is most likely simply a matter of time. Consider the accomplishments of Watson – IBM’s contestant on Jeopardy.

(2) The second issue I see in your comments is ‘Why do you think that electronic circuits are any different from neurons?’ Of course, this question presupposes that you are something of a materialist, and do not think that the human mind (and human intelligence) is the result of some non-physical ‘soul’ (for lack of a better label). If you are, in fact, a mind-body dualist, then the issue of ‘artificial intelligence’ is probably moot. Most likely, you have already decided that anything without an immaterial ‘soul/mind’ cannot, by definition, be intelligent. But if you believe that the human mind is a product of the biochemistry of the neurons in the brain, then why do you think that neurons are any different from electronic circuits? Neurons, too, only deal with specific inputs and then give the corresponding outputs as biochemically programmed.

It is true that we have not yet constructed a computer with a neural-net architecture that has anywhere near as many ‘circuit components’ as a human brain. So it is not surprising that those we have constructed have not demonstrated behavior we would characterize as intelligent. But we have constructed several smaller versions of such computers, and they have demonstrated remarkable mimicry of human capabilities in narrow fields (such as pattern recognition). If the neuroscientists are anywhere nearly correct in their characterization of the brain, we are equally constructed out of ‘circuits’ (neurons, in our case), each of which ‘only deals with specific inputs and then gives the corresponding outputs as programmed.’ On that basis, there is no evidence that a properly architected and sufficiently large electronic computer could not evidence ‘intelligence’.
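The claim that neurons, like circuits, merely map specific inputs to programmed outputs can be made concrete with a single artificial neuron. The sketch below (my own minimal illustration, not any production system) trains a perceptron on the logical AND pattern:

```python
# A single artificial neuron (perceptron) learning logical AND: it
# deals only with specific inputs and yields outputs determined by
# its learned weights -- exactly the 'circuit' picture in the text.

def step(x):
    """Threshold activation: fire (1) if the weighted sum is non-negative."""
    return 1 if x >= 0 else 0

def train_perceptron(samples, epochs=20, lr=0.1):
    """Classic perceptron learning rule: nudge weights by the error."""
    w = [0.0, 0.0]   # input weights
    b = 0.0          # bias
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = step(w[0] * x1 + w[1] * x2 + b)
            err = target - out
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND_DATA = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND_DATA)
```

After training, the unit classifies all four input pairs correctly; larger assemblies of such units are what the narrow pattern-recognition successes mentioned above are built from.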

(3) The third issue I see in your comments is ‘Just what do you mean by ‘think’?’ This is not a superficial issue. For as long as computers have existed, some people have maintained that they do not ‘think’ because they cannot do some particular thing that humans do. And throughout the history of ‘artificial intelligence’, computers have repeatedly demonstrated abilities previously claimed as belonging exclusively to human ‘thinking’: everything from Deep Blue’s defeat of the World Chess Champion Garry Kasparov to the Jeopardy victory of Watson, and from painting to writing original musical scores and poetry. So if you claim that computers just strictly execute programmed instructions but do not actually ‘think’, you need to come up with some notion of what constitutes ‘thinking’ that you suggest cannot be demonstrated by a computer.

In general parlance, ‘thinking’ is just the human process of evaluating alternatives and deciding which alternative is ‘best’ according to some established criteria of ‘best’. But that, of course, is just what computers are very good at – given the appropriate data. What humans are currently clearly better than computers at doing is collecting the appropriate data. But that is not what is usually meant by ‘thinking’, and it might reasonably be explained by the fact that we do not provide computers with general-purpose ‘data collection’ devices.

(4) The fourth issue I see in your comments is ‘Just what do you mean by ‘Free Will’?’ At the level of physics, the electronic circuits that make up a computer are clearly deterministic. But then, according to the neuroscientists, at the level of biochemistry so are the neurons that make up our brain. At the most basic level, we are a deterministic biochemical computing device. Both the computer and the brain, at the most basic level, only deal with specific inputs and then yield the corresponding outputs as programmed (or learned). So where is there an opening for Free Will? Remember that you can’t rely on random processes here, because it is inherent in the concept of Free Will that if anything other than ‘You’ makes the choices, then the choice is not a product of your Free Will.

But what, then, are ‘You’? Are you not a product of your nature (your construction) and your nurture (the things you have learned)? When you make a choice, when you exercise your Free Will, you bring to that choice your experiences and memories, your character and desires, your goals and your values. That is what ‘You’ are. And given who you are and what you are made of, in any particular circumstance the only way that you could choose other than how you do choose is if something external to you forced the issue. In many circumstances, your friends can predict the way that you will choose, because they know who you are. And if you were to choose other than you would normally choose, your friends would look for some unusual reason for this unusual choice. And is this not the same as for a sufficiently sophisticated computer? (I have written a longer essay on the compatibility of Free Will and Determinism, if you would like to pursue the point.)

So what is your notion of Free Will? And how is it related to your concept of ‘intelligence’? I would be very interested in your thinking on this issue. As I said above, I have not previously seen an argument linking Free Will to Intelligence as you do here.

In summary, then, before a meaningful answer can be provided to your very interesting questions, I think we must address just what you mean by ‘intelligence’, ‘think’ and ‘Free Will’, and just why you think that neurons are different from electronic circuitry when it comes to those notions.

 

Answer by Jürgen Lawrenz

You’ve practically answered your own question. Artificial intelligence is not a philosophical issue by any means. Stringently regarded, it is merely a vocabulary of puns based on analogies with human intelligent behaviour and used in a restricted technological sense.

Human beings are vulnerable to taking their own puns seriously, however, and it is a problem in this case because of the social ramifications of misuse of this verbiage in education. Widespread public interest leads to the puns generating their own paradoxes, which are then debated for as long as interest can be sustained. While in the short term the chief beneficiary is the entertainment industry, the possible damage to social value systems must not be disregarded or underestimated. One insalutary side effect can be contempt for the evolutionary acquisition of human intelligence, based on the delusion that artificial intelligence is of a ‘higher’ grade. The potential for such doctrines to exert a detrimental effect on the self-perceptions of humans is not to be dismissed.

The point you have made can also be expressed another way. The only form of intelligence known to us – involving free will – is creature intelligence. The ‘trick’ used in innumerable debates (even learned works) to persuade us that this is all illusory is at bottom quite simple. There are severe restraints on the exercise of free will. Other wills exist and resist or cancel mine; social impediments like laws and customs intervene or obstruct it; and there are also physical constraints. But once you understand this, it is no more than saying ‘I have muscles and a certain strength’ while acknowledging that a crane can lift greater weights. When you are in chains your strength cannot be exercised; the strength exists nonetheless. Likewise free will is a basic feature of humans (and animals), and impediments to its exercise don’t render it illusory or non-existent.

The example of the crane is, of course, central here. A computer is no more intelligent than a fire alarm. So, just as we design cranes to lift great weights, using technological resources, so we can design computers to execute certain mechanically reproducible thinking tasks. Where the AI brigade comes off the rails is in the belief that a mere sham is real because, superficially regarded, it looks real. But you already nailed it.

Let me congratulate you on your independent thinking. You will have great difficulty maintaining your attitude. But I will give you a nice bon mot to remember for all occasions of debate (sorry, I have forgotten the author of this quote): ‘The greatest danger is not that computers will ever think like us, but that we will start to think like computers.’

 

Answer by Helier Robinson

First of all, free will is not a necessary element of intelligence: it is possible that the entire Universe is determined, in which case there is no free will, but there is still intelligence; in other words, there is intelligence whether or not there is free will, so free will is not a necessary condition for intelligence. But this is a small point. The major point in your question is whether an electronic network can be intelligent, and the answer is yes because our brains are electronic networks. To be sure, no one has produced artificial intelligence yet, but that does not mean that it cannot be done.

The key point here is emergence. When a structure gets sufficiently large new properties emerge in it, to form the next higher level structure. For example, life emerges out of sufficiently complicated molecules, and consciousness emerges out of brains. Our brains are enormously complicated networks; in fact they are the most complicated structures that we know of. A structure of inter-connected transistors that was equally complicated could well be both conscious and intelligent.

 

Which is most real: the chair or its molecules?

Lisa asked:

Which is most real: the chair you are sitting on, or the molecules that make up the chair?

Answer by Nathan Sinclair

Reality doesn’t come in degrees. Things are real or not, but no real object is any more or less real than any other. The commonplace example of a property without degrees is pregnancy: just as no one can be half-pregnant, nothing can be half-real.

Both the chair and the molecules are real, and thus each is as real as the other. Perhaps it seems odd that two different real things (the molecules and the chair) can be in the same place at the same time. But this is simply because the molecules are parts of the chair. It is no more odd than that some of my body is in the same place as my arm – my arm is a part of my body.

 

Answer by Helier Robinson

There are two definitions of reality. One, which might be called empirical reality, is all that we perceive around us that is not illusory. (The illusory is unreal by either definition.) The other, which might be called theoretical reality, is all that exists independently of being perceived: that is, it exists regardless of whether anyone perceives it or not. (‘Theoretical’ means ‘non-empirical’.)

The (empirical) chair that you are sitting on is mostly empirically real, the molecules that make it up are (very probably) theoretically real. Note that these molecules cannot be perceived; many people these days believe molecules to be empirical because structures of them can be imaged by a scanning tunnelling microscope. But the microscope only produces an image of the molecules, just as an electron microscope produces an image of a virus and an optical microscope produces an image of a bacterium. In each case the microscope does not enlarge the real object, it only enlarges an image of it. (And, by the way, some molecules are empirical: a diamond is a single molecule.)

 

On knowledge, theory and the unknowable

Josh asked:

What, beyond knowledge itself, may be known or validated, under any hypothetical circumstances?

An inanimate object existing outside of all conception, objectivity claims, can and does exist. Yet through what means? Supposing that there isn’t an omnipresent, omniscient God, such an object is considered existent; if nobody is looking upon a certain dining table, we do not speak with each other openly that it does not exist. Of course it will be there when we turn around, we say, as experience has taught us, and furthermore, of course that must mean that it is there now. You should be able to think of many hypothetical situations in which this assumption is flawed, but rather than asking ‘Can it not be there now?’, perhaps we should ask ‘Can it be there now?’. For what authority determines the existence of an inanimate object outside of all conception?

I have not resolved this question, and as such I am of the opinion that we are led astray by the natural equation of conception with physical actuality. What is the difference between assuming there is a car behind you, because you just parked it and are walking away from it, and imagining a series of events, such as those in Star Wars, and actually believing in them? Or even believing in other minds, for that matter; an assumption due to the emotion of empathy, i.e. the process of analogy of apparent experience. We are also capable of feeling empathy for objects and simulations; how do we jump from observation of weeping matter to the assumption that there is a mind that embodies that matter and is currently under some torment? Even in practice, we may only understand others through analogy, which often leads to misunderstanding (which isn’t always obvious either!). Hence why we empathise much less with more alien life forms, such as insects and plants.

And so, as I have asked many, many times before: How do we justify the presumption of true knowledge of the literally unknowable?

Answer by Jürgen Lawrenz

This is really difficult, because the first answer that has to be given is this: that as human beings we are ensouled creatures and we have a mind. And both of these together engender prejudices and presumptions based on our need to have enough knowledge to survive as bodies.

But if we are to ensure that we do possess this knowledge, the first thing to note is that individual knowledge is just a tiny stone in a big kaleidoscope. The central concept here is collective knowledge. What we know is the result of pooling knowledge, teaching it, checking it, verifying it. So most of our knowledge is consensual. You know that ‘X’ exists because everybody does. What I don’t know, someone else will know. Therefore in principle, what one person knows, everybody can know, because we all have the same equipment for gathering, evaluating and learning knowledge.

But stringently regarded this applies only to factual knowledge. We can turn this knowledge into science, technology, laws and socially useful practices. On the strength of our capacity for extrapolating the order implied in this knowledge, there are features beyond our immediate capacity to detect which we are entitled to hold as either possible or true. This is the basis of our exploratory drive. We discover and invent things because we can perform these projections of existence from known states of affairs upon unknown states of affairs.

This reflects to some extent the division Kant made between phenomena and noumena. As rational creatures we are able to gather all phenomena, including those which we know nothing about (but may at some time discover), and put them in one basket as ‘existents’. E.g. until 1846 we knew nothing about the planet Neptune, but certain planetary phenomena pointed to the existence of an unknown planet whose gravity influenced the wobbles in the orbits of known planets, and it was duly discovered. The problem now arises that this can give birth to an attitude that we need not stop anywhere with this presumption. Being able to convert some data into presumed phenomena (e.g. the gravitational influence of Neptune could for some time only be rendered as mathematical patterns), we now assume that we can make mathematical models and ‘discover’ things that do not convey actual energy patterns – such as the ‘big bang’. Here we run into a discrepancy between observation and imagination. We can do no more than presume that the phenomenal aspects of the universe continue infinitely, and we certainly cannot prove it. All we can prove is that, by a rational performance, such existents may be possible. They are, to this extent, noumena – ‘creatures of the mind’.

When we teach this, especially to people without the relevant training and experience, they have no choice but to put their trust in the speculations of the people who do this work. The presumption is that the world is altogether as we can experience it – in other words, phenomenal and therefore material. But you and I know very well that some things which we know indubitably to exist cannot be measured. You mentioned ‘empathy’. There is no device by which we can measure it, and in fact no way by which we can turn it into a mathematical model. These issues are apt to plunge us into big problems.

They suggest to us that a belief (or faith) in non-physical existents is an ‘irrational’ attitude. We therefore continue with our effort to explain such existents indirectly: e.g. ‘the mind’ is nowadays often explained away as a sort of chemical interaction occurring in the brain, on the supposition that if only we had enough knowledge of the work of neurons, we could make a physical model of the mind. This may be a delusion; at least it is misplaced confidence.

From here we can go on to ask a really fundamental question: if there is no intelligence in the universe at all, but the universe exists as it does, who is there to vouch for its existence? Anyone who points to the discernible history of the universe is making the same point, though unawares: that consciousness lights up the universe and raises it into existence. So ‘existence’ is not a word that has an objective meaning. It postulates that ‘I exist’ and ‘this fork exists’; and then, by peering through a telescope, I can say ‘Mars exists’. It is a logical conclusion from this that if I (deputising for all living creatures in the world) cannot say ‘I exist’ – because there is no life – then existence is a nullity.

This is where God comes in; but I refrain from enlarging on it. We are not debating faith here, but knowledge.

In sum: knowledge is not an absolute. Even though there is a science/philosophy called epistemology, this is little more than pumping up the human intellect on the model of God. Nearly all absolute demands on us are made by people who (pretend to) forget that we are mere creatures. If Kant is right – and I think he is – then the sum total of our knowledge is what phenomena deliver to us. By the same token it is not irrational to believe there is more to the world than empty space irregularly interrupted by a few million galaxies and their dirt. Life itself has never been satisfactorily explained. On this and many other (quite important) issues, our knowledge would not bring a thimble to overflowing. It’s worth bearing this in mind. This is one tiny planet in an immensely vast cosmos. True knowledge is a rare and precious acquisition. It is a human foible to try to stretch it well beyond what can reasonably be accepted. Instead we should learn to value what we can know, and not dabble as much as we have done, and still do, in uncertain domains for answers.

 

Struggling with Gödel’s incompleteness theorem

Phillip asked:

Is it an implication of Godel’s Incompleteness theorems that all statements/ claims are equally valid/true because all statements/claims are based on unprovable assumptions?

Answer by Craig Skinner

Ah, Godel – the man without whom mathematics would be complete!

No, there is no such implication as the one in your question.

His theorems are technical proofs about formal axiomatic systems, and have implications only for such systems – for maths & logic, for classical (algorithmic) computing, and for cognitive science insofar as cognition includes formal deduction.

Plane geometry was axiomatized by Euclid; non-Euclidean geometries (using variant parallel-lines axioms) followed in the 19th century, and arithmetic was axiomatized in the 19th century.

It was rather assumed that all true theorems and no false ones could ultimately be derived from the axioms, so that mathematics was complete and consistent. Hilbert and followers struggled to prove this. But hopes were permanently dashed by Godel, Turing and Church who all proved in their different ways that the goal is impossible.

Godel proved that any consistent axiomatic system rich enough to include arithmetic contains some true statements not provable in the system (incompleteness), and that no such system can prove its own consistency. His genius was to use the notation of a system (arithmetic) not just to derive theorems within the system, but to construct a theorem ABOUT the system – a theorem which was obviously true but couldn’t be proved in the system.
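Godel’s arithmetization trick – using numbers to talk about formulas – can be given a toy flavour in code. The six-symbol language and the encoding below are invented for illustration; the real construction is vastly more elaborate:

```python
from math import prod

# Toy Godel numbering for an invented six-symbol language. Each symbol
# gets a code; a formula is encoded as 2^c1 * 3^c2 * 5^c3 * ... over
# the successive primes. The encoding is reversible, so statements
# ABOUT formulas can be recast as statements about ordinary numbers.
SYMBOLS = {'0': 1, 'S': 2, '=': 3, '+': 4, '(': 5, ')': 6}
INVERSE = {v: k for k, v in SYMBOLS.items()}

def primes(n):
    """Return the first n primes by trial division (fine at toy scale)."""
    found = []
    candidate = 2
    while len(found) < n:
        if all(candidate % p for p in found):
            found.append(candidate)
        candidate += 1
    return found

def godel_number(formula):
    """Encode a formula as a single integer."""
    codes = [SYMBOLS[ch] for ch in formula]
    return prod(p ** c for p, c in zip(primes(len(codes)), codes))

def decode(n):
    """Recover the formula by reading off prime-power exponents."""
    out = []
    p = 2
    while n > 1:
        exp = 0
        while n % p == 0:
            n //= p
            exp += 1
        if exp:
            out.append(INVERSE[exp])
        p += 1
    return ''.join(out)
```

For instance, `godel_number("0=0")` is 2¹·3³·5¹ = 270, and `decode(270)` recovers `"0=0"`.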

His theorems are hard going if you have Uni-level maths, near impossible otherwise. A standard ordinary-language illustration is as follows:

Consider the statement: This sentence is unprovable.

If it is true, what it says is correct, so it IS unprovable, and we have a true statement which can’t be proved (the system is incomplete).

If it is false, what it says is incorrect, so it ISN’T unprovable – it’s provable – and we have a false statement provable within the system (the system is inconsistent).
It follows that in any consistent axiomatic system there exist true but unprovable statements.

As regards maths, we happily work with incompleteness, and although consistency can’t be proved, we reasonably assume it, since no inconsistency has ever shown up.

Godel’s arithmeticized statement, and plain-language equivalents, are self-referential. They are not just statements in a language, but about it. And this can seem paradoxical.

A simple example is that these (contradictory) sentences both come out true:

‘This sentence contanes exactly two erors’
‘This sentence contanes exactly three erors’

The first sentence has two spelling mistakes, so two errors.
The second sentence has the same two errors, plus a mistaken claim about there being three errors, which constitutes the third error.

To deal with this, rules have been drawn up governing first- and second-level expressions (languages and metalanguages), which I won’t go into.

Human consciousness is self-referential – I talk about ‘my self’ and ‘knowing my own mind’, and right now I am aware of writing this sentence, aware that I am so aware, and aware of that awareness, and so on. So Godel is sometimes cited in explanations of how life and consciousness can arise from dumb matter, notably in Hofstadter’s brilliant book ‘Godel, Escher, Bach: an Eternal Golden Braid’, which makes extensive use of self-referential theorems, art and music.
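Self-reference of the kind Hofstadter celebrates is not confined to language and minds; a program can exhibit it too. The classic example is a quine, a program whose output is its own source. Here is a standard two-line Python version:

```python
# The two lines below form a quine: running them prints those same
# two lines back, character for character. The string s serves both
# as data (it is printed) and as a description of the whole program.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

The `%r` inserts the string’s own quoted representation into itself, which is exactly the flavour of self-reference Godel engineered in arithmetic.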

A classical computer (algorithmic software analogous to a formal axiomatic system) can’t grasp the truth of a Godel-type sentence: it can’t step outside the system. Humans can. Some people (notably Penrose) think this is an essential difference between humans and any possible computer. This view may provide some spiritual comfort for us, but I see no good reason to hold it, and advances in computing look set to produce systems that wonder whether humans can grasp the truth of Godel sentences.

Finally, many new big ideas in science or maths get misapplied. A Newtonian ‘science’ of human affection was attempted with people being ‘attracted’ to each other, ‘gravitating’ to the more attractive others, affection diminishing inversely as the square of the distance between lovers, and other nonsense. Darwin was swiftly misapplied to justify ruthless laissez-faire capitalism (‘Social Darwinism’). Then Einstein (‘everything’s relative’), Heisenberg (‘nothing’s certain’), quantum mechanics (‘there is no mind-independent reality’ or ‘expand your mind by resonance with quantum field vibrations for only 69.99’), Godel, catastrophe theory, chaos theory and more.

 

Answer by Helier Robinson

No. First of all, not all antecedents of arguments are unprovable assumptions: Descartes’s cogito is an example. Secondly, Godel’s theorems apply to formal systems such as Whitehead and Russell’s Principia, and not all philosophical arguments are such. More basically, Godel’s incompleteness theorem says that a consistent formal system large enough to include arithmetic necessarily contains true statements it cannot prove; and this does not mean that all statements are equally valid.

 

Answer by Shaun Williamson

No it isn’t, and that is all I can say, because I don’t know where you got this idea from. Godel’s proof applies to our system of arithmetic; it doesn’t apply to our system of logic. You should also keep in mind that we have proofs in mathematics and in logic, but the idea of proof does not apply to everything.

Suppose you look out of the window and say ‘It is raining’. Someone else says ‘Prove it’. That can only be interpreted as a joke because we don’t have a system of proofs for remarks about the weather nor do we know what such a proof would look like.