Quine’s attack on the analytic-synthetic distinction

Gershon asked:

Do philosophers still believe in the analytic-synthetic distinction? Did Quine in his attack on the analytic-synthetic distinction go too far or did he get it about right?

Answer by Massimo Pigliucci

Here is how Willard Van Orman Quine put the challenge, in his famous paper, Two dogmas of empiricism, first published in 1951:

"It is obvious that truth in general depends on both language and extralinguistic fact. … Thus one is tempted to suppose in general that the truth of a statement is somehow analyzable into a linguistic component and a factual component. Given this supposition, it next seems reasonable that in some statements the factual component should be null; and these are the analytic statements. But, for all its a priori reasonableness, a boundary between analytic and synthetic statements simply has not been drawn. That there is such a distinction to be drawn at all is an unempirical dogma of empiricists, a metaphysical article of faith."

We first need some context. Quine was reacting to the then prevalent tradition in philosophy of science, logical positivism. The logical positivists in turn were interested in furthering David Hume’s famous dismissal of metaphysics in favor of math and empirical science (Hume was one of the most influential British empiricists of the 18th century). Here is Hume, in An Enquiry Concerning Human Understanding, first published in 1748:

"If we take in our hand any volume; of divinity or school metaphysics, for instance; let us ask, Does it contain any abstract reasoning concerning quantity or number? No. Does it contain any experimental reasoning concerning matter of fact and existence? No. Commit it then to the flames: for it can contain nothing but sophistry and illusion."

Hume, in other words, recognized only two types of meaningful statements: he called them ‘relations of ideas’ and ‘matters of fact.’ Today we would say that the first pertain to logic and mathematics, the latter to the natural and social sciences. Everything else is ‘but sophistry and illusion.’ (With all due respect to Hume – of which I have a lot! – it needs to be highlighted that the entire Enquiry contains no math and very few matters of fact, and yet it would really be a pity to commit that masterpiece to the flames.)

The logical positivists formalized Hume’s fork and emphasized the distinction between analytic statements (true in virtue of their meaning) and synthetic ones (true in virtue of observation or experiment). A classic example of the first type of statement is: ‘All bachelors are unmarried,’ since ‘unmarried [man]’ is synonymous with ‘bachelor.’ More importantly, though, all of logic and mathematics are in this same business of producing analytic truths. Examples of synthetic truths, by contrast, are statements such as ‘Saturn has rings,’ or ‘my house has two bedrooms.’

The reason the contrast between analytic and synthetic statements matters is this: all synthetic truths are a posteriori, i.e., arrived at by empirical means. But analytic statements fall into two categories: those that are true by definition (the bachelor case) and those that can yield non-trivial a priori truths, such as the Pythagorean theorem. Quine, who was a pretty radical empiricist, was bothered by the possibility of a priori truths, and consequently also did not much like the idea of analytic ones.

His famous critique of the ‘dogma’ of the analytic-synthetic distinction, however, hinges on a rather technical matter, and one that leaves a number of philosophers not entirely convinced. Basically, Quine argued that in order to claim that an analytic statement is truly such, one has to provide an account of synonymy, since it is the latter concept that does the actual philosophical work: for instance, when we say that ‘All bachelors are unmarried’ we understand this as an analytic truth precisely because, as stated above, we mentally equate the terms ‘bachelor’ and ‘unmarried [man],’ i.e., the two terms are synonymous. But whence synonymy? According to Quine, at some point even the notion of synonymy itself needs to be anchored by some sort of empirical fact, for instance about marriage, or men. If that’s the case, then the apparently impenetrable barrier separating analytic and synthetic statements is no such thing, and all truths, at bottom, are synthetic. This is Hume on steroids, in a sense.

As usual in philosophy, there have been numerous and thorough responses to Quine’s claim. For instance, Paul Grice and Peter Strawson pointed out that Quine’s skepticism about synonymy quickly leads to skepticism about meaning itself, which in turn leads to the problematic conclusion that one cannot actually determine whether a translation of a given sentence is or is not correct. This was a bullet that Quine was later apparently happy to bite rather than dodge, in his Word and Object (1960), where he presented the idea that translations are, in fact, indeterminate. Hilary Putnam argued that Quine’s critique actually confuses two different targets: analytic statements and a priori truths, which are not co-extensive. John Searle pointed out that even if Quine’s attack is granted some force, it doesn’t follow that the notion of analyticity is in fact useless.

I’m with Searle here: Quine’s analysis was an example of how philosophy makes progress by questioning previous assumptions (in this case the idea that there is a sharp distinction between analytic and synthetic truths) and providing reasons to think that things are different (and usually more complicated) than previously thought. But even if we admit that ‘All bachelors are unmarried’ eventually does connect to some empirical fact of the matter necessary to anchor the meaning of the phrase, it is somewhat daft to claim that there are therefore no interesting distinctions between that sort of sentence and more obviously synthetic ones like ‘Saturn has rings.’ Moreover, it seems that mathematics and formal logical truths still stand very much in the realm of analyticity, Quine’s stamping of his feet notwithstanding.

 

Things or processes as fundamental?

Jed asked:

Can anyone here explain process philosophy?

Answer by Craig Skinner

Here’s a simple explanation.

From the earliest days (the Presocratics), philosophers have been divided as to whether the world ultimately consists of things (‘substances’) or processes.

On the substance view, things are primary and process (change) secondary. Change is due to the interaction of unchanging things: for example, atoms in the void (Democritus); or, in Anaxagoras’ words, ‘No thing comes to be nor does it perish’ – change happens by ‘mixing together and dissociating.’

On the process view, process is primary, things secondary. Change is fundamental and unceasing, whilst things are merely temporary stabilities or patterns in the eternal flux. This was Heraclitus’ view (‘all things flow’). Famously he said that nobody can step into the same river twice: a river isn’t so much a thing as a temporary pattern in the constant process of flow, itself part of the water cycle (evaporation, clouding, raining, flowing). Other processes include growth, decay, heating/cooling, thinking.

Aristotle sided with substance metaphysics, and this became the Western paradigm. Aristotle was posthumously adopted and ‘baptized’ by the Church, and his views became dogma, with lively debates about how the divine and earthly substances were united in the body of Jesus etc. Later, Descartes pitched in, suggesting there are two basic substances res extensa and res cogitans (matter and mind). The notion of mind as a substance was controversial and has now rather faded away, but matter held its ground as substance. Atomic theory supported the notion for two hundred years.

But by the 20th century the game was up. Atoms were found to be mostly empty space, and subatomic ‘particles’ seem to have no size at all, being merely loci of high energy in quantum fields, lacking even definite positions and existing instead in states of quantum superposition. Bertrand Russell rightly said that the ‘matter’ of modern physics was no more substantial than anything we might be invited to witness at a seance. Modern physics supports the process view.

The time was thus ripe for a revival of process philosophy, which had continued, in a minor key, after the Presocratics, in the views of Leibniz, Bergson, William James and others. And so Russell’s colleague, Whitehead, championed process philosophy (Process and Reality, 1929).

His system is the most developed version of process philosophy but is hard going, including purposeful development (‘becoming’) of the world, and of God, moment by moment, and experiential events called ‘actual entities’ as the basic world elements.

A better, readable, general introduction to process philosophy is Rescher, N. (2000) Process Philosophy: A Survey of Basic Issues (Pittsburgh UP).

To date process metaphysics hasn’t had the attention that substance metaphysics has enjoyed over the centuries, and so is less well developed than materialism or idealism, but I fancy it will survive and thrive, becoming a serious rival to substance metaphysics as it was with the Presocratics.

A point worth noting is that time scale is important in deciding whether to call something a thing or a process. Thus, a rock, over a human lifetime, hardly changes, and we think of it as a thing. But over a billion years, it forms (eruption/solidification, or sedimentation), erodes, and disappears – in short, it is a temporary stability in a geological process. Similarly, on a timescale of attoseconds, a drop about to fall from a dripping tap seems to last forever as an enduring thing. Things are just parts of very slow processes, and processes include very short-lived things.

 

Is metaphysics possible?

Philip asked:

Is there a possibility of Metaphysics?

Answer by Geoffrey Klempner

Is metaphysics possible? In this day and age?

One answer would be: of course metaphysics is possible, because philosophers are doing metaphysics. In any English-speaking university you will find courses that include discussion of the so-called ‘problems of metaphysics’.

Just to give you a taste, here is a selection of essay questions for the University of London BA Philosophy Metaphysics paper taken by students on the International Programme:

* Times can be thought of as past, present and future or as earlier and later. Is one of these ways of thinking about time more fundamental than the other?

* What do all red things have in common?

* Can arguments be given to establish that your pen is not a bundle of ideas?

* Can a fully objective view of human beings account for the subjective qualities of mental states?

* ‘A cause has its effects in virtue of its properties. So causation cannot be a relation simply between particulars.’ Discuss.

* ‘If I were to divide into two people tomorrow, neither of the resulting people would be me. But this would not be as bad as death.’ Is this true? If so, why? If not, why not?

* If ‘free choice’ is to be better than something random, must determinism be true?

* Is an army truly a substance?

* Can we intelligibly claim that Sherlock Holmes does not exist?

* Can two objects be in the same place at the same time? Justify your answer.

* Is an object identical with the parts that compose it?

* ‘A stone is a particular, but a stone’s falling is not.’ Discuss.

So we have here the nature and reality of time, the nature of universals, events and substances, identity and spatio-temporal continuity, personal identity, causation and free will, the nature of existence, etc.

All of these topics can be handled by the methods of analytic philosophy – the discipline I was trained in. But I have come to be dissatisfied with this way of approaching metaphysics. Surely the possibility of answering these kinds of question is not put in doubt when one asks, ‘Is metaphysics possible?’ And yet I think it is a legitimate question.

There is another way of thinking about metaphysics. Here is a quote from the Inaugural Address to Hegel’s Lectures on the History of Philosophy which gives a sense of the kind of thing I am talking about:

“But in the first place, I can ask nothing of you but to bring with you, above all, a trust in science and a trust in yourselves. The love of truth, faith in the power of mind, is the first condition in Philosophy. Man, because he is Mind, should and must deem himself worthy of the highest; he cannot think too highly of the greatness and the power of his mind, and, with this belief, nothing will be so difficult and hard that it will not reveal itself to him. The Being of the universe, at first hidden and concealed, has no power which can offer resistance to the search for knowledge; it has to lay itself open before the seeker – to set before his eyes and give for his enjoyment, its riches and its depths.”

When Hegel talks about ‘science’ he doesn’t mean empirical science. He is talking about metaphysics. As metaphysicians we are after nothing less than the ‘Being of the universe’, the ultimate nature of things. Post Hegel, philosophers have come to doubt that such a thing is possible. And for good reason, you could say. What gives us puny beings the right to think that we can reason out the universe to its very ‘depths’?

I mentioned analytic philosophy, but there are other traditions which can also be seen as a reaction to Hegel’s bullish optimism about the powers of human cognition – such as pragmatism, phenomenology, and existentialism – each of which embraces human finitude as setting limits to what can be known concerning the ultimate nature of things.

The problem with accepting limits is that we can’t know what our limits ultimately are, any more than Hegel can be sure of the ‘power of mind’ that he talks of. I prefer to keep my options open. I don’t know what is possible, or what is impossible. For that, you need to assume a theory, and I prefer not to make any assumptions. So I will continue my quest for metaphysics and see where it takes me.

 

Demarcating science from pseudoscience

Ryan asked:

What are the various ways in which one could go about trying to demarcate science from pseudoscience?

Answer by Massimo Pigliucci

The distinction between science and pseudoscience constitutes what in philosophy of science is known as the demarcation problem, a term coined by Karl Popper in the early part of the 20th century. Popper was actually interested in solving David Hume’s famous problem of induction. Induction is the general type of reasoning by which we make inferences about things we do not know about on the basis of things we know. For instance, we have seen the sun rise many times before, therefore we reasonably infer that it will do so again tomorrow.

But Hume showed that we unfortunately do not have any solid logical foundation for such inferences. The usual justification for induction is that “it has worked so far” (call it the pragmatic response), but this amounts to saying that we believe in induction on the basis of an inductive argument (it worked in the past, ergo it will work in the future), which is a circular argument – not exactly a kosher move in logic.

The problem is made more pressing by the fact that a great deal of scientific theorizing uses induction, which is why Popper was so worried. He figured that a possible solution was to move from an inductive to a deductive justification of scientific theorizing. Instead of thinking of science as making progress by inductive generalization (which doesn’t work because, no matter how many times a given theory may have been confirmed thus far, it is always possible that new, contrary data will emerge tomorrow), we should say that science makes progress by conclusively disconfirming theories that are, in fact, wrong. This is Popper’s famous criterion of falsification, which can be formulated as an instance of standard modus tollens in deductive logic: if theory T is true, then fact F should be observed; fact F is not observed; therefore theory T is false.
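The falsification schema just described can be set out as a short derivation in standard propositional notation (a sketch only; T stands for the theory and F for the predicted fact):

```latex
% Popper's falsification criterion as modus tollens
% Premise 1: if theory T is true, then fact F should be observed
% Premise 2: fact F is not observed
% Conclusion: theory T is false
% (\therefore requires the amssymb package)
\[
\begin{array}{ll}
1. & T \rightarrow F \\
2. & \neg F \\
\hline
\therefore & \neg T
\end{array}
\]
```

Note that the inference is purely deductive: no number of observed confirmations of F would allow us to conclude T, which is exactly the asymmetry between confirmation and disconfirmation that Popper exploited.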

There are several reasons why Popper’s idea of falsification doesn’t actually solve Hume’s problem of induction, which we shall set aside for another time. Popper also thought that falsification could function as the demarcation criterion between science and pseudoscience: if a theory can, in principle, be falsified, then it is scientific; but if there is no way to ever reject it, regardless of what empirical evidence may become available, then it is pseudoscience. Popper thought that Einstein’s theory of relativity was a good example of the first, while Freudian psychoanalysis and Marxist theories of history exemplified the second.

The problem is that there are plenty of pseudoscientific notions that are eminently falsifiable (and have, in fact, been falsified), from astrology to homeopathy. Moreover, there is a history of scientific notions that were initially either unfalsifiable or appeared to be falsified, and yet led to advancements in science. For instance, the original version of the Copernican theory in astronomy (which posited the Sun, instead of the Earth, at the center of the solar system) didn’t do a good job at predicting the actual positions of the planets in the sky. And yet astronomers like Galileo and Kepler kept playing with it, until the latter figured out a relatively minor tweak that solved the problem: Copernicus had assumed that planetary orbits are circular, while they better approximate an ellipse. Once Kepler introduced the change, the theory worked like a charm, so that a Popper-style abandonment after initial falsification would have been unwise.

Because of the problems with the idea of falsification, Larry Laudan published a famous paper in 1983 declaring the demarcation problem dead in the water. He suggested that there is no small set of necessary and jointly sufficient conditions that define “science” or “pseudoscience”; he also claimed that if there were such a set, the only way philosophers could test their classifications of epistemic activities would be to check whether they agreed with the judgment of scientists (in which case, why not just ask the latter in the first place?); and that the very term “pseudo”-science is problematic on the grounds that it pretty obviously implies a negative epistemic judgment, rather than a neutral analysis.

Laudan’s paper was very influential, and did in fact manage to slow down philosophers’ work on demarcation to a trickle. This changed recently because of a collection of essays on the subject published by the University of Chicago Press (and which I co-edited with Maarten Boudry, from the University of Ghent, in Belgium). The book begins with several (belated) replies to Laudan, and then proceeds to explore a number of philosophical, historical, and sociological issues surrounding demarcation.

With respect to Laudan’s first point, several authors have pointed out that just because it is not possible to define science or pseudoscience precisely, it doesn’t follow that the two concepts aren’t meaningful and useful. Wittgenstein famously introduced the idea that a number of complex ideas share a “family resemblance,” meaning that – just like in the case of the members of a biological family – one can see that different instantiations of a concept are related to each other by various threads, even though there is no single criterion, or small set of criteria, that definitively rules individual instances in or out. Wittgenstein argued that even an apparently straightforward concept like that of “game” is actually difficult to pin down, because for any group of criteria one may propose to define it (“it has rules,” “it is done for fun,” “there are winners and losers”) one can easily find either games that do not fit all the criteria, or non-games that fit a number of them. So, a better way to think of science and pseudoscience is as two peaks on a continuous landscape of epistemic activities, some of which are more (or less) scientific (or pseudoscientific) than others.

As far as philosophers having to agree with whatever judgment scientists come up with (Laudan’s second point), the heck with that! Some of the most interesting criticisms of science itself have come from philosophers (e.g., about claims made by evolutionary psychology, or in the name of string theory), so it is far more constructive and interesting to see scientists and philosophers engage in a continuous dialogue about these matters, without either having to simply defer to the other.

Finally, yes, of course the term “pseudo”-science carries a negative connotation (Laudan’s third point). That is on purpose: to put a warning label on an epistemically deficient activity that only apes the trappings of science without actually being a science. And philosophers have always been in the prescriptive, not descriptive, business, so it is okay for us to deploy critical terminology, as long as the deployment is warranted by a detailed analysis.

 

Answer by Craig Skinner

There was lively debate about this in the 20th century. The upshot was that there is no clearcut demarcation. It’s all gone quiet now.

Suggested criteria for science were:

* falsifiability (Popper).
* puzzle-solving (Kuhn).
* progressive research programme (Lakatos).

Falsifiability: Karl Popper was impressed by how Einstein’s theory of gravity offered itself up for possible falsification by predicting something unexpected and testable (light bending around the sun: observations during the 1919 solar eclipse found for Einstein and against Newton), whereas Freudian psychoanalysts or Marxists could explain away any observations without giving up their theories. He suggested falsifiability as the mark of science. Scientists liked it – they were portrayed as heroic figures willing to let their beloved hypothesis be slain by a single awkward fact. But real scientists are different: they hang on to their hypotheses like grim death, blaming auxiliary hypotheses for the apparent falsification (the experimenter missed a confounding factor which affected the outcome; the blood-sugar machine was faulty; etc.). Also, strictly, no hypothesis can be falsified: any observation or experiment necessarily tests several hypotheses at once, and we can always say one of the auxiliary ones is wrong, not the main one we are testing.

Puzzle-solving: Thomas Kuhn pointed out the drawbacks of falsifiability and noted that scientists rarely even tested their main (paradigmatic) hypotheses, but rather solved puzzles within the paradigm. The hallmark of science was systematic, progressive puzzle-solving.

Progressive research programme: Imre Lakatos said that scientists (as opposed to, say, astrologers) do research, testing their views against the empirical world, and expect to make progress, discovering new things, reaching better understanding, refining and amending hypotheses.

In a famous 1982 US court ruling (McLean v. Arkansas) that “creation science” was religion, not science (and so didn’t merit equal classroom time with evolution), Judge Overton gave the criteria for science as:

* explanation by natural law.
* views testable against the empirical world.
* views held tentatively, not dogmatically.
* views falsifiable.

Creation science has changed its name to Intelligent Design, claims puzzle-solving and research activity, and battles on against “Darwinism”.

Ultimately demarcation depends on detailed understanding of how science works, but even here, scientists differ as to whether some views are science or not e.g. string theory, multiverse hypotheses.

Finally, if you pick up an alleged popular science book, suspect pseudoscience or a religious agenda if the blurb, review or text includes the following words, phrases or links:

* “scientific materialism”.
* “irreducibly complex”.
* “astonishingly complex molecular machines”.
* “academic freedom” (code for acceptance of creationism).
* “Darwinists/Darwinism” (real scientists usually say “biologists/evolution”).
* “blind, random, undirected” (referring to evolution).
* link between quantum physics and freewill.
* link between Darwin and the Holocaust.

 

What is holism?

Jeff asked:

What is holism?

Answer by Massimo Pigliucci

The term holism refers to a variety of concepts, from the idea (in medicine) that the body should be treated as a whole, rather than by focusing on its individual parts, to the rather vague concept (in New Age mysticism) that all things in the universe are somehow connected. In philosophy and the natural sciences (particularly biology), though, holism is best contrasted with reductionism, so perhaps it would be better to start with a brief analysis of the latter.

Reduction is a technique in philosophy – and by extension in the natural sciences – that was formalized by Descartes. In the Discourse on the Method, he suggested that we need to approach any given problem by “divid[ing] each of the difficulties under examination into as many parts as possible, and as might be necessary for its adequate solution”; and in the Rules for the Direction of the Mind, that we “reduce involved and obscure propositions step by step to those that are simpler, and then, starting with the intuitive apprehension of all those that are absolutely simple, attempt to ascend to the knowledge of all others by precisely similar steps.”

The Cartesian method – of which the above is a crucial part – was adopted by other natural philosophers, such as Descartes’ contemporary, Galileo, and became an intrinsic component of the successes of physics and other natural sciences. In philosophy, the approach eventually evolved into the method of analysis used in the appropriately termed “analytical philosophy.” It allows the translation of common language sentences into logically coherent and “cleaner” versions (if you are interested, this refers to Bertrand Russell’s theory of descriptions).

There are different kinds of reductionism, only some of which may be usefully contrasted with holism. For instance, in philosophy of science one can talk about theory reduction in cases in which a higher level scientific theory (say, Mendelian genetics) can be “reduced” (i.e., reformulated) in terms of a lower-level theory (say, molecular genetics). There is no holistic counter to this type of reduction: either a theory is successfully reduced to another, or it isn’t.

More interestingly, we can distinguish between ontological and epistemic reduction, both of which can be contrasted with their holistic counterparts. Let’s consider an example to fix our ideas: imagine you meet a physicist at a cocktail party and he tells you that everything that happens in the universe reduces, at bottom, to physics. What could he mean by that? One way to interpret the claim is that the physicist is simply saying that, ultimately, everything is made of subatomic particles (or strings, or whatever the latest physics comes up with to identify the basic constituents of reality). This is certainly a reductive explanation, and it is true, as far as it goes. It also represents a case of ontological reductionism, because it says that the only things that exist are subatomic particles (or strings, or whatever). (Remember that ontology is the branch of metaphysics concerned with claims of existence.)

But now consider another possible meaning of the utterance made by our cocktail party physicist: that physics is the only relevant science because everything in the universe can be explained in physical terms. This claim is epistemological (epistemology is the branch of philosophy that deals with how we know things), and much more debatable. Even if it is true that the ontology of everything (human beings, economies, galaxies) is reducible to fundamental physics, it doesn’t follow that fundamental physics can do away with biology, psychology, economics or cosmology as independent disciplines with their own proper explanatory levels. Unless your physicist friend is ready to provide you with, say, a quantum mechanical explanation of why you two are having that particular conversation, his epistemic reductive claim fails abysmally.

Now, a holist could take issue with both the ontological and the epistemic claim, but she would probably be more successful with the latter than with the former. Given the current state of scientific knowledge, it is hard to deny that everything is, in fact, made of subatomic particles (or strings, or whatever). At the same time, it is rather easy to make the case that there are many levels of complexity in the world (atoms, molecules, living organisms, ecosystems, celestial bodies), and that different types of explanation work best for distinct levels of reality: quantum mechanics isn’t going to replace economic theory, likely ever.

However, even in the ontological sense, reductionism isn’t necessarily a slam dunk. A number of philosophers have suggested that certain kinds of non-material “objects,” such as mathematical structures (numbers, theorems, etc.), exist in a mind-independent, non-physical fashion, and are therefore not reducible to subatomic particles (or strings, or whatever). This notion is known as mathematical Platonism, but we’ll set it aside for another time.

There is one more sense in which ontological reductionism may turn out to be problematic – and hence a holistic, or system-level, approach to be useful – though it is still somewhat speculative. I am talking about the idea of emergent properties. These are properties of complex matter that are not reducible to the properties of its simpler constituents. Take, for instance, the fact that a large enough number of molecules of water acquires the property of being wet (at certain temperatures and pressures). “Wetness” is not defined at the level of an individual molecule, or even of a small number of molecules. It only emerges when enough molecules interact together, which means that it can be studied only holistically, so to speak, not reductionistically.

Generally speaking, it is more constructive to think of the holism-reductionism contrast as a complementarity rather than an antagonism: on the one hand, we can often make progress in understanding complex systems by breaking them down into smaller chunks and see how they can be put together again; on the other hand, we still need to take seriously the idea that some things only function, or even exist, at certain levels of complexity and not below them.

 

Do we use only 10 per cent of our brain?

kate asked:

What is relevant data that supports the inferences about “Do we use 10 per cent of our brain?”

Answer by Massimo Pigliucci

The claim is simply false, amounting to no more than an urban legend. In reality, it is pretty clear that we use the entirety of our brain, though not necessarily all at the same time. There are several empirical lines of evidence that refute the 10% idea. To begin with, it would be very strange if natural selection had created an organ that consumes a whopping 20% of our metabolism (while accounting for only 2% of our body mass), and 90% of it went unused. Such waste would have had massive consequences for an individual’s viability and reproductive competitiveness, and would therefore have been quickly eliminated during the early stages of human evolution.

If you don’t find a priori arguments too convincing, then consider that when brain cells become inactive they degenerate, and even superficial scans of living brains (or autopsies of dead ones) clearly show that not to be the case. Speaking of brain scans, nowadays we can do fMRI and similar investigations while subjects are actually using their brains on a variety of tasks, and such techniques clearly show large portions of the human brain to be active at any given time. Moreover, scientists have studied a number of people with extensive brain injuries, and invariably these people suffer from reduced cognitive functionality. If 90% of their brains were normally untapped, one would expect to see no impairment following even extensive brain damage. Barry Gordon, a neurologist at the Johns Hopkins School of Medicine in Baltimore, interviewed by Scientific American, said that the very idea that we use only 10% of our brains is “so wrong it is almost laughable.”

Where, then, does this idea come from, and why does it persist? Different scenarios have been proposed for the origin of the 10% myth, and they may all be correct, since it is possible that the idea has popped up independently a number of times. One of the best documented stories traces it back to philosopher and pioneer psychologist William James, who carried out research on the unrealized potential of human IQ back in the 1890s. James concluded – on the basis of his study of a child prodigy – that ordinarily we may achieve a fraction of our mental potential, which may be a defensible claim as far as it goes. However, in 1936 the American writer Lowell Thomas referred to this idea in a foreword he wrote for How to Win Friends and Influence People, by Dale Carnegie, falsely stating that “Professor William James of Harvard used to say that the average man develops only ten per cent of his latent mental ability.”

Another plausible root for the origin – or perhaps persistence – of the myth is the well known fact (since the early part of the 20th century) that only about 10% of brain cells are actually neurons, the remainder being made of glial cells, which play crucial supportive and protective roles for the neurons themselves.

Ezequiel Morsella in Psychology Today describes yet another possible origin story for the 10% myth, this one tracing it to more recent research conducted by American neurosurgeon Wilder Penfield. Penfield developed techniques to treat severe forms of epilepsy and was interested in exploring the possible damage that a surgical mistake might cause in the patient. He therefore embarked on research that involved the localized stimulation of areas of the brain with electrodes, while the subject was awake and could report the effects. Penfield and his colleagues discovered that stimulating a small number of areas of the brain (accounting for about 10% of the total) was sufficient to generate detectable effects. Needless to say, this is not at all the same as saying that we only use that percentage of our grey matter.

The 10% myth is popular both among New Age believers and paranormalists, as well as in the entertainment industry. The suggestion has been made that paranormal phenomena, like telepathy, clairvoyance and telekinesis (for which there is no convincing evidence) become possible when one somehow “taps” into the normally (allegedly) non-functioning 90% of one’s brain (see this article by Ben Radford in Skeptical Inquirer). New Age believers explain the (again, alleged) existence of psychic powers in the same fashion.

A number of movies and television shows have been loosely based on the premise that the 10% myth is true, including the pilot episode of Heroes and the movies Defending Your Life (1991), The Lawnmower Man (1992), Limitless (2011), and Lucy (2014). The 10% claim is not the only widely accepted but incorrect notion about the human brain; this article by Laura Helmuth in Smithsonian magazine lists another nine.