Thinking in words – or not?

Jan asked:

What are the arguments for and against the proposition that humans think in words?

Answer by Jürgen Lawrenz

All arguments ‘for’ are driven by philosophical, linguistic, and even religious notions, as well as certain intuitive ideas based on the peculiarity that humans are alone among animal species in possessing this capacity.

It is an attractive proposition to say to ourselves, as we think, “I’m thinking with words”, because that’s what we commonly seem to do. I’m doing this right now, thinking as I’m writing these words. By the same token, I am aware — as perhaps we should all be — that before I put pen to paper, there is an idea in my head I wish to express, and this idea is not a sequence of words — rather the words come as I write, as if my thinking mind triggers a process that collects the words ‘on the fly’, so to speak. Exactly the same pertains to speaking, which is precisely why I write a speech down before I deliver it. I cannot trust my mind to supply the right words if I speak without prior preparation, and writing them down is therefore a safeguard against getting stuck or confused, letting wrong words slip out or simply missing something that I wish to say.

All these familiar hiccups are an argument ‘against’. I have known people who can “speak like a book”, but they are rare. And this applies to writing as well. Just look at the most common problem that afflicts writers: They grimace at their text and wonder why the words just don’t seem to reflect what they were intended to say. Tolstoi is reputed to have written War and Peace seven times over; and I think (again with few exceptions) this is the rule. For everyone, speaker or writer, it’s a struggle to find the right words to express their ideas.

Nor should we forget that words do not generally stand alone, but must obey the grammar and syntax of the language and that, importantly, most words must be fitted to this mould specifically, i.e. must occupy a specific place in the sequence, a place which is not predetermined but can vary depending on the intended message.

There is enough in the above to show that thinking is not done with words. On the contrary, these struggles testify against it. If we thought with words, why would we make mistakes? It’s illogical to believe that I think words and then can’t speak or write them! So all this points to some faculty that is connected to, but not identical with, the “dictionary” and “grammar primer” in our memory. But we have to be careful to keep ambiguity at bay: although thinking is not done in words, nor with words, the words and grammar are ‘in reserve’, like infantry, cavalry, artillery etc. lining up for battle. In other words: We must have learnt the words as well as grammar and syntax first, before thinking is possible. And, incidentally, every infant would (if they could!) tell you the same thing.

Therefore the answer to this dilemma is the existence, in our brain, of verbal and motor cortices, all connected to the conceptual faculty and memory, which do this work for us. As I start to think with intention to speak or write, my cortices go hunting for the words, put them in sequence and activate the appropriate muscles — lips and tongue, or the hand driving a pen or tapping a keyboard. All the errors I mentioned above are reminders that it is a far from perfect performance. If we really thought in words, these things would not happen!

To sum up: I am not, generally speaking, convinced that science is in possession of appropriate tools to handle the many subject-related topics on which philosophy thrives; and this includes theories of the mind. But there are exceptions; and on your question we have one of these few, in that neurophysiology has by and large succeeded in unravelling an issue on which, as it turns out, philosophy is not well equipped to offer a plausible explanation from its own stock of concepts.

Occam’s Razor

Felicetta asked:

What philosophical “blade” encourages one to prefer simple explanations when they fit the evidence?

Answer by Jürgen Lawrenz

The “blade” you refer to is called “Occam’s Razor”. It is the name given to two arguments by the English scholastic thinker William of Ockham which stressed the “principle of greatest economy” in the search for truth and insight.

The first of these says:

It is pointless to do with more what can be done with less.

The other points out that:

A plurality should not be assumed without necessity.

Both these sentences are essentially warnings of the dangers of multiplying hypotheses to bolster a proof. An hypothesis is not a certainty; therefore five hypotheses will only render a proposition more uncertain and dissipate focus on the essence of an issue.

In addition, hypotheses are often framed with special nomenclatures requiring a definition, which is tantamount to a separate proof. But if only one of these is uncertain, then the whole ensemble is impaired.

Ockham, who was born in the 13th century, was targeting primarily the reliance of theological “proofs” on syllogistic principles. This method, he said, rests on confusion between concept and denotation — the first is a creature of the mind whereas the other points to something in the world. In metaphysical speculation, however, they are of equal value; therefore syllogisms which rely on supernatural causes run their course without contradiction and end up “proving” arguments that are plain nonsense.

There is a nice little book on this by Stephen Tornay: Ockham Studies and Sketches. It goes almost without saying that the march of science since the 18th century relies altogether on “Occam’s Razor”; it is nothing less than the First Commandment of scientific research.
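
Readers who want to see how this “First Commandment” operates in modern research practice may find a small illustration helpful. In statistics, parsimony is often formalised by the Akaike Information Criterion (AIC), which rewards goodness of fit but charges a penalty for every extra parameter a model assumes. The following Python sketch is merely illustrative: the data, models and numbers are invented for the purpose, and AIC is only one of several such criteria (it is certainly not anything Ockham himself proposed).

    import numpy as np

    # Invented data for illustration: a noisy straight line.
    rng = np.random.default_rng(0)
    x = np.linspace(0, 10, 40)
    y = 2.0 * x + 1.0 + rng.normal(scale=2.0, size=x.size)

    def aic(degree):
        # For Gaussian errors, AIC = n*ln(RSS/n) + 2k (up to a constant),
        # where k counts the fitted parameters. The 2k term is the razor:
        # it penalises every extra hypothesis (parameter) the model assumes.
        coeffs = np.polyfit(x, y, degree)
        rss = np.sum((y - np.polyval(coeffs, x)) ** 2)
        k = degree + 1
        return x.size * np.log(rss / x.size) + 2 * k

    for degree in (1, 9):
        print(f"degree {degree}: AIC = {aic(degree):.1f}")

    # The degree-9 polynomial always fits the data more closely (lower RSS),
    # yet its parameter penalty typically leaves it with the higher (worse)
    # AIC: the simpler explanation is preferred when both fit the evidence.

The degree-9 model “does with more” what the straight line does with less; the penalty term makes Ockham’s warning quantitative.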

Principlist approach to bioethics: works in medical practice

Omowunmi asks:

Describe advantages of a principlist approach in bioethics.  Describe disadvantages of a principlist approach in bioethics.

Answer by Craig Skinner

The principlist approach is what I learned when I was a medical student in the early 1960s, although this name wasn’t given to it till the 1970s.

It is a way of debating and deciding ethical difficulties and conflicts in medical practice. It can be applied to legal and political conflicts too, but I used it in medical practice.

Essentially, we appeal to the following 4 Principles:

  • beneficence (do good)
  • non-maleficence (do no harm)
  • autonomy (patient’s right to self-determination)
  • justice (fairness).

A few words about each:

Beneficence: pretty straightforward. Giving treatment that works is good. Treatment that’s useless isn’t.

Non-maleficence: we don’t want to harm. But we may need to balance good and harm, say when an effective treatment has bad side-effects.

Autonomy: the patient has the right to choose whether to accept recommended treatment, based on full information given to her (informed consent). She may decline even if failure to treat will be fatal.

Justice: fair allocation of scarce resources. In principle, all have equal claim under the UK NHS. So, if there is not enough for all right now, then join the waiting list and get treated when you reach the top of the list. Maybe some patients should get preference. Two examples:

  1. The “good innings” view. This is the idea that a young person with most of her life ahead of her gets preference for life-saving treatment over an old person who has already had his life. I agree with this. If I need dialysis for kidney failure, and it’s in short supply, I will have no complaint if a young woman with 3 young children gets preference over me.
  2. Illness not self-inflicted gets preference over self-inflicted. I was unconvinced by this. Famous examples were (a) the footballer George Best, who ruined his liver by drink, was given a transplant (thereby denying one to a patient with liver failure due to non-alcoholic cirrhosis), only to resume drinking after a while and ruin his new liver; and (b) surgeons who refused vascular operations to heavy smokers whose vessel disease was caused by their smoking.

For each Principle we must decide its scope. Thus, autonomy doesn’t extend to children or the severely mentally impaired — others must decide. But who? If a child needs blood to save her life, and the parents don’t agree with transfusion (being Jehovah’s Witnesses), do we respect the family’s wishes and let the child die? Or does “do no harm” extend to assisted dying: can a doctor ethically help a patient with an incurable and terrible illness who wants to end it all?

And of course principles may clash: I wish to do good by giving my patient a very expensive effective drug, but justice demands that the cash to fund the drug be used instead for hip replacements to make 12 old ladies pain-free and mobile.

So, in deciding ethical dilemmas in medicine, deliberate using the 4 principles, have regard to their scope, and make judgments as best you can when the principles conflict.

You ask about advantages. Here they are:

1. It is readily understood by everybody, including those with little or no training in ethics or philosophy, e.g. doctors, nurses, managers, most patients, politicians.

2. It is acceptable as a framework to people of any or no religious belief.

3. No commitment to any normative ethical theory (utilitarian, deontological, virtue ethics).

4. It works in practice.

As for disadvantages, those claimed include:

1. With the exception of non-maleficence, the principles are non-specific and just remind the decision-maker of what needs to be taken into account.

2. No distinction between moral rules and moral ideals.

3. No agreed method for resolution when principles conflict.

I don’t think any claimed disadvantage is great, which is why the method has been standard in approaching medical ethical matters for 50 years or more.

Conceptions of justice

Mary asks:

Is Justice the same as fairness? If there is a difference, what is it?

Answer by Craig Skinner

Fairness is just one conception of the concept “justice”. There are others, as I will explain.

We are talking about distributive or social justice – who gets what – not retributive justice (apt punishment for crime).

The main conceptions of distributive justice are:

  1. Justice as fairness.
  2. Justice as entitlement.
  3. Justice as desert.

Here I can give only a sketch of each.

1. The most famous advocate of justice as fairness is John Rawls. He thinks we are more likely to choose fair principles if we don’t know how these will affect us as individuals – if I don’t know which bit of the cake I will get, I am more likely to cut fairly. He imagines people choosing behind a “veil of ignorance” – I don’t know what talents or status I will have in the society we are making choices about. I might, for instance, be an old, white, married man with a good job, or I might be a young, black, unmarried woman on welfare looking after three children. So my choices won’t be biased toward either, or to any other social group. He thinks we would thereby choose equal basic liberties, equal opportunity to train for any job, and inequalities justified only if they serve to maximize the position of the worst off. This last principle, the difference principle, is his most contentious. Rawls’s view is in the hypothetical social contract tradition of Hobbes, Locke and Rousseau.

2. The best-known advocate of justice as entitlement is Rawls’s colleague, Robert Nozick. He disagrees that justice is about agreeing fair principles by imagining we don’t know how lucky or unlucky we have been in life’s lottery. Rather, it’s about protecting people’s legitimate rights to their property. If we own things by initial acquisition and legitimate transfers, then we are entitled to them and to do with them as we wish. He feels that taxation is theft by the state, objects to the state as redistributor, and favours laissez-faire. He’s not saying the outcome is always fair, or that people deserve what they get (often they don’t; it’s just their good or bad luck), but that it’s not a matter of justice as he sees it.

3. Justice as desert is the view of most non-philosophers. The conventional position is that a person can deserve to earn more than another even if this is due to factors beyond their control. So, Jane Plain and Cristiano Ronaldo work equally hard in demanding jobs, she as a social worker, he as a footballer, but Ronaldo deserves his much higher earnings because he is blessed with exceptional ball skills greatly in demand by clubs and fans. An extreme view is that people don’t deserve to earn more even if they work hard or have talent, because a person’s hard-working character or talent is something they have by luck. More popular is a mixed view: that people don’t deserve more for things beyond their control, such as being born rich, but do for things that are a matter of choice, like working hard or making sacrifices to obtain qualifications.

So, Rawls might think it unfair, and therefore unjust, that Ronaldo earns so much, but the injustice can be mitigated if his high earnings are heavily taxed to help provide funds for the public good and the needy. Nozick might agree it’s unfair, but so what: justice isn’t about fairness. He might also think Ronaldo doesn’t deserve all that money, but again so what: justice isn’t about desert. It’s about entitlement, and surely Ronaldo is entitled to accept pay offers freely made to him. Most non-philosophers think it’s a matter of whether Ronaldo deserves his high earnings: some think he deserves every penny of it, others that he does deserve high earnings but top footballers’ pay has got out of hand and they now get more than anybody deserves.

As individuals we mostly use a mixture of these three conceptions of justice as we judge the various actions and situations we encounter. Likewise a state’s laws and constitution are likely to be a mix.

Creationism vs. emanationism

Anthony asked:

I read your article about emanationism:

http://philosophos.org/philosophical_connections/profile_029.html

I have some questions.

What’s the difference between emanationism and creationism?

And what do you think about it?

Answer by Jürgen Lawrenz

The real issue here is not what creationism and emanationism are, but why people argue about them. It is a wholly religious question; therefore (strictly speaking) you should stick with religious literature. As far as philosophy qua philosophy can go, neither of these doctrines has a place in it, as philosophy cannot prove the existence of any metaphysical being; and if you can’t do that, you have no pathways towards a sufficient and compelling demonstration.

Nevertheless, to keep it radically simple: Emanation means that the earthly presence of a divine being does not imply his/her physical instantiation on earth. They are apparitions, and although (unlike ghosts) you might be able to touch them (e.g. “Supper at Emmaus”), your sense of touch involves hallucination. Creationism implies the opposite, namely the conversion of a spiritual energy into physical substance, so that this divine being actually walks on earth and is capable of physical discomforts like being wounded (cf. Aphrodite at Troy), yet is destined to ascend back to their own native sphere when they are done with their earthly intervention.

In terms of science, both these notions are nonsense. Only the authority of theology vouches for them; otherwise we would be hard put to believe in them any more than in the flying horses, talking trees and other ‘miracles’ of legendry. It leaves one other issue to be considered, of course, which is why some people invent them and others put their faith in them.

The answer is that the human faculties are divided, grosso modo, into perceptual and conceptual orders. The first concerns matters that we perceive, and must perceive in order to equip us for survival in the world. The other concerns ideas, images, thoughts etc., which represent what we have learnt from experience. The conceptual faculty is not in touch with the world, because it relies on the perceptual faculties for its store of information. Therefore when we think, imagine, day-dream and so on, we are not in ‘experience mode’, but in ‘manipulation mode’ with respect to the ideas in our mind. Hence contemplation can evolve anything at all, from the laws of science which enabled us to build flying machines, to creatures that exist only in the mind. And so anyone might also come up with ideas like ‘emanationism’ and ‘creationism’ as pure mind constructs — as ‘noumena’ in Kant’s terminology, for which no-one can be obliged to furnish a specimen. But we can talk about them; and for many people this is sufficient to make them believe. C’est la vie!

Moral ‘isms’ and relevance

Jimmy asked:

Hey I have some questions regarding ethics. How do you determine what moral properties exist and what the best moral system is? It seems like every property that people refer to is only morally significant for arbitrary reasons. Like why does sentience, autonomy, rationality, etc. matter? It seems like people assume these axioms while just appealing to intuition. How would you be able to assert that sentience is a more valid moral property than say, race? What if someone just has the natural intuition to prefer white people over others? Most people would obviously agree that that’s absurd, as racism is less common than “sentientism”, but how would you subjectively and arbitrarily determine when an intuition is common enough to matter, and to whom does this intuition apply? Should we only consider the intuition of humans or men or white people, or even living creatures for that matter?

Also, this would apply to deontology vs consequentialism and utilitarianism. A common objection for the latter two is the utility monster argument. But how would one arbitrarily decide that it is wrong to give all the resources to the utility monster? It is also the case that people seem to be more inclined to give to the utility monster if you switch the situation so that the monster begins at a baseline of massive suffering. More people would support giving resources to the monster if it relieved his suffering greatly at the expense of having slightly less pleasure for the human. This shows that people arbitrarily determine whether deontology or consequentialism is better. This is why I don’t understand how to prove that one’s moral system is better. It is for these reasons that moral nihilism seems to make more logical sense to me, although personally it obviously sounds absurd to say things like rape, murder, etc. aren’t wrong. I was wondering what your thoughts on all of this are.

Answer by Jürgen Lawrenz

You’re obviously well read and knowledgeable about this subject matter. Therefore it occurs to me that half your questions already contain the answer, inviting little more than ‘yes’ or ‘no’ in response. However, you might have considered the anthropological aspect to counteract the overly intellectual preoccupations with morals which incur your displeasure. You are perfectly right in asserting that all arguments about morals are arbitrary, as all reflect the presuppositions at work in any given society for which they are framed. Moreover there is no absolute standard, as the idea of a ‘residual observer’ or independent judge (“God”) is also a matter of mere opinion, given the number of gods that have populated our minds and passed moral legislation over historical time.

However, I have to take issue with your first two sentences, where you throw ethics and morals into the same basket. This is impermissible under your own criteria. Ethics are portable, whereas morals generally refer to a closed society — “when in Rome, do as the Romans do.” Consider that offence against morals is often severely punishable, while many approved moral practices would find a man of good ethics spewing in disgust. For instance, in some societies, the most brutal institutional murder is/was morally sanctioned (e.g. witch burning, public stoning to death), whereas a doctor has an ethical duty to heal any patient, whoever they might be. This is based on the recognition that human life is dominated by one constant, namely suffering. Ethics tends to be about these constants of humanity, rather than the particular interests of closed communities. So let us not confuse and commingle morals and ethics!

Bearing this in mind, more credible arguments accrue to anthropological than to philosophically tinged criteria, as the ‘primitive’ behaviour patterns we deprecate in ourselves have never diminished over the 4000+ years that we have talked about morals. For this, there is a short and a long answer. The short version is simple: ‘Morals’ comes from the Latin and means ‘customs’. I don’t need to spell out what they are; it is sufficient to take note of customs, cults, traditions and rituals being bedfellows with morals, whereas ethics takes a larger conspectus on desirable social behaviours.

But the longer answer is so long that I have to curtail it into a small handful of sentences and leave the ramifications for you to look into on your own initiative. A first approach would have to acknowledge the dilemma that morals are pretty much ingrained in us, as a legacy from our hominid ancestors; but their variety and arbitrariness militates against them ever becoming (as noted) a collective constant in human societies. Instead they are consistently answerable to the particular needs of closed communities. And so the emergence of conscious moral dictates is most likely a reflection of the conditions under which any tribal conglomerate strove to maintain itself against adverse circumstances, whether it be the weather and climate, the resources of the habitat or the hostility of other tribes. It stands to reason that individualism cannot flourish under such conditions, the two exceptions being the ‘champions’ (as Hobbes calls them) whose prowess lifts them above the common denominator, and the appointees of the gods, who may from time to time announce principles of obedience to the champion’s clan and the gods.

My word ‘ingrained’ is therefore a reminder that we still carry this baggage — predominantly instincts and anthropomorphisms — in our survival kit; and it is plainly in sight of every thinking person that this kit is woefully inadequate, and never more so than in the modern industrialised world.

If you accept my meaning, you might be inclined to disqualify both academic disputation and the divine commandments thesis, since both effectively defend a position of “Do as I say, not as I do!” Concerning metaphysical beings, we know nothing more about them than what the myths tell us; and the level of morality in those stories is hardly to be commended to humanity as a model for our strivings. Can we doubt, then, that we humans never felt a real compulsion to obey their strictures, but on the contrary simply kept up their inhumane practices? I am reminded here of Leibniz’s argument that God must allow some evil in the world, as otherwise ‘The Good’ cannot be identified as what it is. Does this mean our evil deeds are necessary for us to understand what morality really is? Now this is a typical intellectual position; its very presupposition cannot help leading to incoherent arguments and conclusions.

Against this ‘difficult’ position, it can easily be urged that threats to a tribe, community or state from hostile natural as well as human forces demand organisation, which evidently relies on honesty, trust and authority overriding personal self-interest and ambition. This may be called the bedrock of moral behaviour, though it cannot qualify as a constant due to the infinite variety of possible collective perspectives. But now the butcher of any such community may be used to blood and slaughter, yet murder is a different story, and likewise with theft, rape, adultery and so on. Moreover, parents in common with authority figures teach their children about gods and spirits, how they influence the weather, bring disease, or tilt a battle against another tribe. We can easily flesh out this little picture and deduce the origin of moral codes as well as explaining why there are so many. Not to forget that morals under the burning sun would have to differ from those practised in ice-bound habitats. All these and many more comprise criteria for survival, in which morals tend to be joined by the aforesaid customs.

Which only brings us back to your initial questions. There is no possible “best” moral code; all morals are to some extent restricted to time and place (which does not exclude sound reasoning behind them); meanwhile sentience, rationality etc. are prized intuitively by those who feel themselves addressed by those notions. Accordingly your observations on race (to which we must add religion, politics, warfare, trade etc.) show up the morals in question as non sequiturs. At the bottom of them we find fear and self-interest, advantage and privilege, individual as well as societal agency. I think it goes without saying that deontological, utilitarian and other trademark arguments (including nihilism) are creatures of the same ilk, though they may wear other stripes.

The word ‘suffering’, sounded earlier, reminds us of the constant on this horizon. What we all seek is a diminution of suffering; and this means not only illness and disease, but even more so servitude, slavery, injustice, inequality, lovelessness, loneliness, hunger and deprivation, plague and pestilence. Ah! Now we know what all these moral codes purport! We want these ills remedied by the gods; and we want the fiercest enemies we know, other human beings, to be restrained by a superior power. But every such code supposedly originating ‘up there’, beyond the clouds, is on any close scrutiny a hotchpotch of prejudices. Which means nothing other than that laws are made by men, and men are often forgetful of crucial elements. E.g. the commandment “thou shalt not kill” contains no sub-clause for exceptions, so does it mean that we must not kill a flea that bites us, a bison while we’re on the hunt, another human being who threatens us? Conventionally we would claim that self-defence as well as killing for food are implied exceptions, but evidently Moses forgot, or ran out of space on his tablets to make an appropriate list. In any case, “laws are made to be broken”, because circumstances change and laws can become obsolete. Meanwhile we are aware of codes in other cultures which take the injunction not to kill literally, even at their own inconvenience.

I think this is pretty much the gist of it and as far as I can go in this forum. A neat summation to end on: “The Thrakians paint their gods with fair hair and blue eyes, while the Ethiopians depict them as dark skinned and snub-nosed.” Thus spake Xenophanes. His point is all too easily transferred to the domain of morals, as I think your own stress on the arbitrariness of all moral injunctions indicates well enough.

The dilemma of Euthyphro

Samantha asked:

Both Blackburn and Arthur casually allude to Plato’s dialogue Euthyphro as the locus classicus of the decisive refutation of a religiously based “command morality.” The sheer casualness and brevity of their allusion tells you much about how decisive and final that refutation is usually taken to be. How is that supposed to work exactly?

Answer by Jürgen Lawrenz

This issue was contentious even in the ancient days, because Greek mythology, where it deals with the gods and their doings, is in large measure a chronique scandaleuse of human patterns of behaviour transferred to the heavens. The poets depicted it without compunction, which (as you know) incurred Plato’s censure in his Politeia. For any thinker to put up such a conceptual dilemma as Socrates proposed about piousness (“hosios”) would have made the average intelligent Greek wonder what he was all about. On this account there was a more or less general perception alive among the Greeks that the gods, being immortals, could not truly understand the human imperative of adding quality of life to their social structures — of which the primary consideration was what we today call ‘human rights’, in Locke’s words, life, liberty and freedom of economic activity, none of which is meaningful to an immortal being. All the same, they always sought the blessing of the gods for this impulse towards democracy, which made its first tentative appearance in the colonial city states of the 7th century BC, despite their belief that this was a signature of humanity, not of divinity. But there is frankly no democracy to be found on Olympus — any more than in the Heavens of the Jewish, Christian and Islamic religions — which is precisely the reason that Socrates insists on the consent of all the gods. But now the aforesaid exhibitions of piety among the migrating colonists might easily strike a cynic as expedients; and I suspect that many an old-time Greek would have been familiar with Pascal’s Wager long before Pascal ever thought of it.

Then c. AD 1700 Leibniz brought the same issue up again:

“Whatever God wills is good and just. But there remains the question whether it is good and just because God wills it or whether God wills it because it is good and just; in other words, whether justice and Goodness are arbitrary or whether they belong to the necessary and eternal truths.”

You will be forgiven (in both cases) for protesting that the form of the question is circular and therefore half-meaningless. Thus Plato/Socrates tended to reify ‘The Good’ and attribute its custodianship (though not its cultivation!) to the gods. Leibniz in turn might be supposed to hint at the possibility that ‘the good and just’ exist independently from God; or, if the first half of the question is considered in isolation, that Voltaire’s rebuttal says all that needs to be said. But does this mean the issue has suffered terminal refutation, as in your question?

By no means: it is alive and kicking as we speak, because there are innumerable people (including academics) who find that morals are insecure and parochial at best, unless we can have recourse to divine command. Equally, of course, innumerable people reject this notion and applaud the multiplicity of moral codes, mindful of the dictum “when in Rome, do as the Romans do”. In other words, the whole subject matter is impaled on the horns of a dilemma that is located somewhere between Dostoyevsky’s despairing cry “if there is no God, then everything is permitted” and Kant’s categorical imperative.

Returning to Socrates: His final word of reconciliation was that the question of piousness, goodness, justice etc. is not answered by reference to God’s will, nor by God’s love of it, because the way the question is posed you can only go around in circles with your arguments. Yet Blackburn/Arthur evidently speak for themselves, not for the intellectual community as a whole, since a massive literature exists which extends all the way from the Scholastics to modern deists, theists, agnostics and atheists, and it should not go unmentioned that their contentions have spawned a huge bevy of new terms and nomenclatures in moral and ethical philosophy. But this is a domain “where angels fear to tread”, hence I shall refrain. Although before I close I must mention the small matter of punishment, which gets nowhere near the same mileage of prose as love and divine will. I hope at any rate that you now have something to mull over, beyond the apparent shrugging of shoulders by Blackburn/Arthur!