Robbing Peter to pay Paul

Mary asked:

When is it ethically acceptable to rob Peter to pay Paul?

Answer by Paul Fagan

Often, this saying is used where distributions of wealth are considered to be a zero-sum game: nobody really benefits from an act of ‘robbery’ as resources are merely moved around. However, if one or more parties could benefit from an act of redistribution, this becomes an easier question to answer: under certain circumstances, some political philosophies would not hesitate to redistribute Peter’s property in order to make Paul’s life, or the lives of both parties, better. Some simple examples will demonstrate this.

For instance, imagine Peter and Paul live quite happily on a desert island. Peter is the island’s landlord and, using his own efforts, produces two bushels of corn from the land, which provides enough for both parties to subsist.

However, if all of the land were given to Paul, he would be able to produce four bushels of corn. This would enable the islanders to subsist, produce some surplus grain and allow the island to trade with nearby islands to acquire other goods. Now, if Paul could not come to an arrangement with Peter to rent or lease the land, then some utilitarians, seeing how this second scenario benefits the island materially, would wish to see Peter’s landholdings given over to Paul.

A third scenario may be favoured by egalitarians, who would wish to see equal holdings of land. They may favour a situation where Peter’s property in land is distributed equally between the two islanders. This would be likely to yield three bushels of corn and, although more productive than the initial arrangement, would not be as productive as the second. However, if you value the equal distribution of land over everything else, you would be content with this arrangement.
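For readers who like to see the trade-off laid out explicitly, here is a minimal sketch in Python (the bushel figures are taken from the scenarios above; reducing utilitarianism to ‘maximize total output’ is my own simplification for illustration):

    # Bushels of corn produced under each of the three scenarios
    # described in the text.
    scenarios = {
        'Peter holds all the land': 2,
        'Paul holds all the land': 4,
        'land divided equally': 3,
    }

    # A crude 'utilitarian' criterion: pick the arrangement with
    # the greatest total output.
    utilitarian_pick = max(scenarios, key=scenarios.get)
    print(utilitarian_pick)  # -> 'Paul holds all the land'

The egalitarian of the third scenario, of course, is not maximizing output at all, but the equality of holdings, which is why the two criteria come apart.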

So there you have it: some political philosophers would be quite ready to ‘rob’ Peter to pay Paul. That said, one should be warned that other political philosophies would vehemently oppose the enforced redistribution of any goods held by an individual, and some libertarians may even consider a redistribution of a person’s goods to be akin to an assault upon the person (the reader may like to visit my recent article on this site, entitled Nozick’s libertarianism and self-ownership). The libertarian Robert Nozick was adamant that only voluntary donations should ever be redistributed: in his Anarchy, State, and Utopia, Nozick felt that the vast majority of persons would voluntarily contribute to schemes to rid society of an ‘evil’ such as poverty, as people desire to be part of the solution to such problems (Nozick 1974: 265-7).

Although this may seem a very simple question to ask, it actually stirs up a hornet’s nest for political philosophers and yields a variety of answers (should the reader have time to spare, a visit to the Stanford Encyclopedia of Philosophy’s entry on ‘Distributive Justice’ may prove enlightening: https://plato.stanford.edu/entries/justice-distributive/). In conclusion, since most societies continue to practise some form of redistribution between the ‘haves’ and the ‘have-nots’, it may be a deeply held ethical view amongst human beings that acts of redistribution, similar to those demonstrated here, hold great value.

Visual appearance and illusion

JimJim asked:

Visual size is illusory: it shrinks in all three dimensions. Before we correct for this, not only do the railroad tracks meet in the distance, but a train travelling down them gets shorter, narrower, and smaller. So my question is: how far away must a visible object be for us to see its real size?

Answer by Geoffrey Klempner

The premise of your question is false. There are visual illusions, which require a special setup to work, but in general things appear the size that they are, no larger or smaller.

You can verify this for yourself easily. Pick an object on the far side of the room and walk towards it. Does the object (a framed picture, say) ‘get larger’ as you move towards it? Of course not. Hold out your arm and look at your hand. Now move your index finger towards your eye. At what point does your finger appear bigger than it is? At no point.

The notion, e.g., that a train travelling away from us ‘gets shorter, narrower, and smaller’ is based on an overly simplified model of perception. When you look at the train as it travels into the distance, the image projected upside down onto your retina gets smaller and smaller. But what you see, what you perceive, isn’t that image. You see the train. Moreover, you see it as a train, that is to say, a constructed object of a kind that maintains its size over time. (I’m ignoring the fact that a train gets longer or shorter if you add or subtract carriages.)

There are common objects that do get larger and smaller. A balloon, for example. Let’s say we are watching a clown walking down the road with a large balloon. The balloon has a puncture, and is visibly shrinking, getting smaller and smaller as we look on. The clown turns towards us and shakes his head, sadly. Being able to tell when things actually get bigger or smaller is a pretty important ability, don’t you think?

In order to explain this, a distinction is sometimes made between what we ‘actually see, with our eyes’, and the perceptual judgements based on what we see. So, in your example of the train, we ‘actually see’ the train get smaller, but this is then corrected by our judgement.

There are special cases where this is true. The moon in the sky doesn’t look that large. But then when you take into account the information that the moon is a quarter of a million miles away, a quick calculation shows that it must be pretty big if we can see it at all at that distance.
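To make the ‘quick calculation’ concrete (the figures are rounded, and mine rather than the questioner’s): the moon subtends roughly half a degree of visual angle, or about 0.0087 radians. At a distance of about 240,000 miles, its diameter must therefore be roughly 240,000 × 0.0087 ≈ 2,100 miles, close to the accepted figure of about 2,160 miles. A small visual angle at a great distance implies a very large object.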

Then there are artificially constructed experimental setups where a man walking across a room appears to get smaller because the ‘room’ in question is designed with a false perspective: we see the room as rectangular, but in fact the far wall is twice the size of the near wall.

In each of these cases, judgement is required to correct what we see, or seem to see. But these are necessarily exceptions to a rule: the rule being that our faculty of perception (eyes, optic nerve, brain — not forgetting our capacity to physically manipulate the objects that we see) is ‘designed’ by evolution to produce veridical appearances. We need accurate information coming through the senses on which to base our judgements. That’s how perception works.

The concept of perception applies not only to the five senses but also to things like understanding what a person is saying. We perceive meaning. Sometimes we can be wrong, and often those errors can be corrected by judgement. But judgement needs something to work on. Language isn’t a cacophony of sound, or squiggles on a screen or on paper that we then have to interpret — although in special cases it can be: e.g., if you don’t ‘know the language’ and have to work out what the person is saying from a phrase book.

A good question to ask in alleged cases of perceptual illusion is: how would things look otherwise? Discussing ancient beliefs about the cosmos, one of Wittgenstein’s students once remarked on the fact that the sun appears to go round the Earth. ‘And how would it look if the Earth appeared to go round the sun?’ was Wittgenstein’s reply. — I’ll leave you with that question to think about.

Houses in the sky

Cena asked:

If a crisis came, could humans build houses in the sky?

Answer by Geoffrey Klempner

From a technological standpoint, it is perfectly conceivable that human beings could live permanently in habitats floating in the sky, held aloft either by giant impermeable helium bags or possibly by jet thrusters (as in the TV series Altered Carbon, 2018, https://www.imdb.com/title/tt2261227/), although the latter would require a substantial permanent energy source.

Another possibility, explored in the movie Elysium (2013, https://www.imdb.com/title/tt1535108/), is a giant orbiting structure — the ‘Stanford torus’, proposed in a 1975 NASA study — which could provide an Earth-like environment for tens or hundreds of thousands of human beings.

The first option might not be available in the event of a nuclear war, as the Earth’s atmosphere would be contaminated. On the other hand, either option could be used in the event of an ecological catastrophe that rendered the surface of the Earth uninhabitable, although underwater cities have also been proposed.

From a philosophical standpoint, the main question is an ethical one. The population of the Earth is around seven and a half billion. When the crisis comes, if it comes, it could be double or treble that.

Realistically, only a small fraction of that total number would have the chance to enjoy life in the clouds, or in orbit. That’s the problem.

The scenario has been visited many times in science fiction. In a way, it exists now. A relative few enjoy a nice life, while for the many day-to-day existence is gruelling, requiring unrelenting toil. But even if the problem of poverty could be permanently solved, that would not do anything to address the challenge of deciding who gets the chance to escape after the Sun flares, or the missiles fall.

Should it be a lottery? Or should only the best and brightest be offered the chance to survive? If you’re testing ethical theories against intuition, that question is every bit as effective as the more frequently discussed Trolley problem.

If the only consideration is the future of the human race, one might opt for the ‘best and brightest’. But who is to choose, and on what basis? How do you balance IQ against musical talent, for example, or sporting prowess? Far easier, and fairer (for the many), would be a lottery, but this would bring its own negative consequences. The great and the good would have to take their chance along with the hoi polloi — a prospect that you may well find repugnant. Imagine waving goodbye to Einstein, or Mother Teresa, or the Beatles. ‘Sorry chaps, your numbers didn’t come up.’

In the absence of the political will to make that hard decision and enforce it, the default option is the one explored in ‘Elysium’. The ones who get to go are those who can afford the ticket. So, Beatles yes, Mother Teresa no.

I’m not going to end this with some specious nonsense about ‘hoping it never happens’. It probably will. So maybe it would be a good idea to start discussing the problem now.

Sartre on radical freedom

Chris asked:

True or False? — Sartre is a believer in radical free will. His ontology of free will is similar to the ontology of free will offered by the substance dualists.

Answer by Gershon Velvel

True and false.

Sartre has a view of freedom which fully merits the description ‘radical’. However, if Sartre’s ontology of free will were really ‘similar’ to the ontology of free will offered by the substance dualists, then he would be a substance dualist. And he most certainly is not.

Sartre is not a dualist or a monist. A monist believes in one basic substance: matter, or the subject matter of physics. In Sartre’s metaphysics (as developed in Being and Nothingness) there is absolutely nothing to say about what ‘is’, as such. Rather, there are two fundamentally different ways of approaching ‘what is’: under the category of the ‘For-itself’, and under the category of the ‘In-itself’.

Who is doing this ‘approaching’? The For-itself. The In-itself is inert. It doesn’t ‘do’ anything or ‘approach’ anything. We, that is to say human beings — each of us individually an exemplification of the ‘For-itself’ — make sense of reality by applying one or the other fundamental category.

But this is where things get tricky. Because each of us, although an exemplification of the ‘For-itself’, is also an exemplification of the ‘In-itself’. We possess physical bodies, and the molecules and cells that make up our physical bodies obey the laws of physics, chemistry, and biology without room for exception.

Descartes, by contrast, held that there is a loophole in the laws of physics which allows an immaterial soul to interact with the body’s ‘animal spirits’, the physical conduit for all perception and action. For obscure reasons, this interaction was supposed to take place in the pineal gland. — I don’t think that this is such a bad theory, if you explore all the options, but it is not a theory that Sartre would ever have considered. He didn’t need to.

And yet, despite this, Sartre states, unequivocally, that if human beings are free then determinism is false. (I don’t have the reference to hand, but he says this in more than one place in Being and Nothingness.)

I think he is wrong about this. In order to make the point about freedom that he is making, it isn’t necessary to say anything about the thesis of determinism. It’s completely irrelevant. I suspect that Sartre identifies determinism with the much more dubious claim that, in principle, any physical system is predictable. And that would wreck his account of freedom. It is perfectly possible to hold that determinism is true, but that some physical systems (e.g. those that have a brain) are unpredictable in principle.

Unpredictability in principle is all Sartre needs to defend ‘radical’ freedom. His fundamental point about decision and action is that every situation is necessarily unique. There is not, and could not be, any kind of ‘template’ that one could apply (e.g. ‘This is an X-type situation, and I am a Y-type person, therefore I must do Z’). Anyone can choose to do anything within his or her physical capacity, in any situation, regardless of any choices made in the past. That’s all just water under the bridge.

There are two reasons why our actions generally do not cause any surprise to those observing us. The first is the anodyne point that most of the time there is no reason for us to deviate from what we have done on previous occasions. As Aristotle noted, habit is the basic building block of character. F.H. Bradley in Ethical Studies considers the example of someone who chooses to do the ethically right thing despite strong disincentives, while a friend remarks, ‘I’m surprised that you did that.’ The angry response is, ‘You should have known me better!’

The second, connected reason is that the vast majority of the choices that confront us every day are not life changing. But when they are life changing, that is when Sartre would say, we have to be vigilantly on guard against the bad faith of believing that there is such a thing as ‘what a person like me would do’, or ‘what a person in my position would do’. You are on your own, without a rule book. It is up to you to come up with an original and creative solution to the problem that now confronts you.

Thinking too much

Howzer asked:

How do I stop thinking too much, and feel instead? I need inspiration and courage to do what I want.

Answer by Gideon Smith-Jones

What do you really want?

In the TV series Lucifer (https://www.imdb.com/title/tt4052886/) God’s son Lucifer has quit his job presiding over Hell and now owns a night club in Los Angeles. His one super-power (apart from being able to scare people by putting on his ‘devil face’) is asking that question. And when he asks, you can’t resist, no matter how hard you try. You just have to blurt out what you really, really want. And some of the answers are pretty embarrassing, to say the least.

We’re in Freud territory, although Sigmund rarely gets a mention in the episodes. Another TV series, Westworld (https://www.imdb.com/title/tt0475784/), hits the nail on the head. Human beings are not more complicated than ‘hosts’ (artificial humans, androids). On the contrary, they’re much simpler. In a key episode, we learn that a human brain has only ‘a few thousand’ lines of code. All human behaviour can be explained by reference to just a small number of unalterable basic drives. The rest is just calculation. Or calculation plus two or three millennia of culture if you want to bring in Freud.

I would say that in addition to inspiration and courage (things we all want) you need to trust yourself more. What you call ‘thinking too much’ is basically lying to yourself. For example, pretending that a situation is more complicated or challenging than it really is.

— You know this, don’t you?

Let’s get down to basics. There’s a girl that you really fancy. (I don’t want to be sexist; by all means substitute ‘boy’ if that’s more relevant to your case.) You can spend all night working out what the person in question would say if you said…, or if you said…. Or you can just walk up and start a conversation. Let the inspiration of the moment guide you.

Oh, I forgot, you don’t have inspiration. Or the courage. Well, here’s a tip. Ask yourself what a courageous or inspired person would do, and do that. Pretend it isn’t a problem. You might surprise yourself. (I’m only repeating basic advice that you could find on a hundred web sites.)

Leaving aside basic wants that we all share, in various ways, there is something unique to you, that no-one else has. No-one else has lived your life. So in a way, your wants are unique too. Think of it this way: you are an artist and your life is your art. You are free to create anything that pleases you. Free to experiment. Forget the others, this is about you and only you.

You’re right that you need to avoid thinking too much. It isn’t necessary to work out everything in advance. Try something, and if that doesn’t work, try something else. If you keep going and don’t falter, you will get there — wherever it is you ‘really’ want to be.

A hundred years from now, you’ll be dead. And then it will be too late.

Thought and language

William asked:

While written words symbolize spoken words, what do spoken words symbolize?

Answer by Geoffrey Klempner

Imagine the following scenario:

After a long, desperate fight lasting all day and into the evening, the battle has been won.

A messenger is sent out to give the news to the King. He runs all night and all the next day, then collapses and dies from exhaustion before he is able to deliver the message.

If only written language had been invented! The message would have been delivered, whether the messenger lived or died, provided that he arrived at his destination.

But suppose that spoken language had not yet been invented, what then?

The battle has been won. But the only ones who know are those who fought. And when, eventually, the weary warriors return home, how can they ever describe what they saw with their own eyes, judged with their own hearts and minds — corpses strewn over the battlefield, dismembered arms and legs, severed heads, the remaining enemy troops in full flight?

Michael Dummett remarks somewhere (it could have been in ‘What is a Theory of Meaning?’ either I or II) that ‘language increases the range of human perception’. You can look out the window to see that it is raining, or someone else can look out the window and tell you, in words, ‘Hey man, it’s raining!’

And so we are tempted to put forward the following analogy: just as written words reproduce (or ‘symbolize’) spoken words, so spoken words reproduce the language of thought.

When the warriors judge, ‘we have won the battle’, the thought they express, severally and collectively, is expressed in mental language, a language that has no ‘words’ or ‘sentences’ as such, and yet has the power, the capacity, to give meaning to spoken and written language (once it has been invented).

Dummett calls this the ‘encoding/decoding’ model of language, which he claims is refuted by Wittgenstein’s argument against a ‘private language’ in Philosophical Investigations. (Dummett goes on to make some very questionable deductions from this about the necessity for a ‘theory of meaning’, which we need not go into.)

I endorse the view that language is necessary for thought. Before language (historically, spoken language) was invented, human beings simply did not have the power to ‘think’ the kinds of thoughts that language is able to express, specifically, thoughts about the past or future, or about generality. (This point is made persuasively by Jonathan Bennett in his book Rationality, 1964.)

Then Jerry Fodor came along with his The Language of Thought (1975) and gave the idea of ‘language in the brain’ a new twist. There has to be some ‘structure’ there to begin with for language learning to be possible, something ‘mental’ — although physically embodied in the brain — that is in some way isomorphic to written or spoken words.

However, Bennett’s point still stands. In an analogous way to Darwinian evolution, an individual human brain ‘evolves’ structures over time in response to human interaction and other external circumstances (Daniel Dennett, Consciousness Explained, 1991), and it is plausible to claim that the ‘language of thought’, if there is such a thing, only came into being as spoken (and written) language developed.

What Darwinian evolution gave Homo sapiens was the extra plasticity required to build structures in the brain where none had existed before, which then enabled the development of language. As with other evolved structures (a wing, for example), we can hypothesize that some survival benefit was conferred by this extra brain plasticity apart from the capacity to develop language — but that’s just speculation.

What, then, do spoken words symbolize? Written or spoken words represent that something is the case, or that it is not the case. The words are true if they represent that something is the case and it is the case, or if they represent that something is not the case and it is not the case; they are false if they represent that something is the case and it is not the case, or if they represent that something is not the case and it is the case. — That’s how Aristotle explained the concept of truth.

The technical term that we would now use for this is: ‘truth conditions’. Instead of looking for some ‘thing’ in the brain that is the ultimate bearer of meaning, we describe what meaning does, what it is, in effect. Statements, or judgements, made in written or spoken language, have truth conditions, and that is how they get their ‘meaning’. That is how language is able to work.
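To see the shape of this definition at a glance, here is a minimal sketch in Python (my own illustration; neither Aristotle nor the truth-conditional theorists put it this way): a statement comes out true exactly when what it represents matches how things are.

    # A statement either represents that something is the case (True)
    # or represents that it is not the case (False). It is true just
    # when that representation matches the facts.
    def is_true(represents_that_p: bool, p_is_the_case: bool) -> bool:
        return represents_that_p == p_is_the_case

    assert is_true(True, True)       # says it is raining, and it is: true
    assert not is_true(True, False)  # says it is raining, and it isn't: false
    assert is_true(False, False)     # says it isn't raining, and it isn't: true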

You might object that nothing has really been explained. Isn’t there still a mystery about how meaning — or the capacity to express thoughts or statements that have truth conditions — can arise at all? There is much that we still do not know. But I am going to leave it there.

Classic texts for the beginning student

Alan asked:

Discussing which philosophers’ original work to read, GK intimated that Spinoza’s ‘Ethics’ would not be a good choice. Is this because you consider him a poor philosopher, or that his philosophy is so self-contained it allows little constructive discussion? Or something else completely?

Answer by Geoffrey Klempner

How do you know that Spinoza was a great philosopher who is eminently worth discussing? Because that’s what you were told in some lecture course or in a YouTube video? Maybe the speaker was putting you on. Maybe the whole ‘Spinoza’ thing is just a big joke played by philosophers on the non-philosophical world.

Spinoza is difficult to read without a supporting secondary text (or lecture course or YouTube video). That’s why when starting out in philosophy it is better to find a classic text that you don’t need to have explained to you, where you don’t need to be spoonfed.

Locke is one philosopher who has suffered from generations of misinterpretation. Reading texts from the 60s, you’d think he was a complete dumbass. Just read the unabridged Dover edition of his Essay Concerning Human Understanding in two volumes from start to finish and you’d have a very good and accurate view of Locke. And you only need to read it once — because he goes to such great lengths to explain himself.

Pity the poor students who relied on the ‘expert guidance’ available at the time without taking the opportunity to judge for themselves!

That’s just one example. There are plenty of classic texts that you don’t need to have explained to you. For example, you could try some of the texts in Section 3 of the Pathways introductory book list, which I reproduce here without comment:

George Berkeley Three Dialogues Between Hylas and Philonous (1713)

Rene Descartes Meditations on First Philosophy (1641)

David Hume Dialogues Concerning Natural Religion (1779)

Plato Phaedo (around 385 BC)

Ludwig Wittgenstein The Blue and Brown Books (Blackwell)

Kirk, Raven and Schofield The Presocratic Philosophers (2nd Edition CUP)

— You can approach philosophy in the spoonfeeding way or you can see this as an opportunity to learn to think for yourself. The decision you make now will have profound consequences.

[Note added: for more on this topic see my post on the Philosophy Pathways blog On reading.]