Politicians’ expert-led fallacies

Natalie asked:

What fallacies do you find most interesting and why?

Answer by Paul Fagan

In responding to this question, I follow its spirit and give a rather indulgent answer by demonstrating a particular type of fallacy that I find interesting: other panel members may have their own favourites. Fallacies are often grouped into two types: 'formal fallacies', errors in an argument's logical structure that invalidate the whole argument; and 'informal fallacies', which arise from an argument's content and are common in everyday reasoning and discussion. Here, I demonstrate a type of informal fallacy which often originates from a group of persons who are experts in their field, whose opinion carries much credence and immediately convinces many of its validity. The problem starts when the experts draw an incorrect conclusion from the information available; this is then compounded by persons in power adopting the conclusion, without much questioning of its validity, because it supports their cause. Often such fallacies are the first argument opening a debate and invite repudiation. Furthermore, being quite original, they generally do not involve the deliberate weakening of another's argument or the deflection of attention from it.

Here, I will provide a recent example of such a fallacy, one which eventually collapsed. It originated in the world of economics and was fervently endorsed by some politicians. It occurred during 'the Brexit debate', which concerned whether the United Kingdom (UK) should relinquish its membership of the European Union (EU), in the period before a referendum was held to decide the matter. (I hasten to add that this does not indicate a stance for or against Brexit: the example attempts to be impartial and allows us to learn by recounting events which actually took place.)

Two events preceding the referendum held on 23 June 2016 are quite telling. Firstly, in April 2016 the British government distributed a leaflet to all households in the UK describing why it felt it would be better for the UK to remain within the EU; it believed that a 'leave' vote would rapidly bring forth disruption in society, resulting in an 'economic shock' (https://www.gov.uk/government/publications/why-the-government-believes-that-voting-to-remain-in-the-european-union-is-the-best-decision-for-the-uk/why-the-government-believes-that-voting-to-remain-in-the-european-union-is-the-best-decision-for-the-uk#fn:15). The document was replete with references to notable economic experts, including banks, reputable universities and the International Monetary Fund.

Secondly, on 15 June 2016, in anticipation of the referendum, Alistair Darling, a former Chancellor of the Exchequer and a member of the government's opposition, joined the then Chancellor of the Exchequer, George Osborne, to warn of the need for an emergency budget if the UK voted to leave the EU; the budget would require the populace to suffer tax hikes in order to fill a predicted government deficit (https://www.politicshome.com/news/europe/eu-policy-agenda/brexit/news/76168/george-osborne-and-alistair-darling-warn-emergency). This fallacy may be said to fall into the category known as ad baculum, where the arguer attempts to sway the undecided by disproportionately emphasising the consequences of not supporting the arguer's stance. (The reader may like to review the entry 'argumentum ad' in The Oxford Dictionary of Philosophy for definitions of many common fallacies; other dictionaries are available.)

However, after the UK voted to leave the EU, the predicted immediate economic hardship failed to materialise, and to this day the UK's economy remains stable. Many experts have been forced to backtrack and reassess their contribution to the debate (see https://www.theguardian.com/business/2016/oct/04/imf-peak-pessimism-brexit-eu-referendum-european-union-international-monetary-fund as one example). The episode nevertheless provides an example of politicians seemingly throwing a deliberate fallacy into a debate in a manner at once Machiavellian and clumsy.

The questioner also asks 'why' some fallacies may be interesting. For me, this example reminds us that we must have the confidence to form our own opinions and maintain a healthy scepticism towards both experts and those in power. Such fallacies may grow unchecked until they face the acid test which proves their undoing. They may be compared to the fable of The Emperor's New Clothes: they convince many of their validity, but once disproved they are rejected rapidly.

Virtue ethics and social context

Alex asked:

In Nicomachean Ethics, book 2, chapter 6, paragraph 10-11, does Aristotle suggest a notion of ethics that is fixed among human societies or does it depend on social context?

Answer by Paul Fagan

On my reading of the extract, I cannot detect any suggestion that ethics is either fixed or adjusted for social context. I would consider the passage to be only part of a greater definition of what constitutes moral virtue, and in particular an argument supporting the doctrine of the mean. Assessing whether Aristotle favoured an ethics that is firmly fixed or one dependent upon social context would, in my view, require one to read widely and refrain from focusing upon small extracts; otherwise one runs the risk of taking a passage out of context and ascribing the wrong meaning to it.

Bearing this in mind, we may ask ourselves: what was Aristotle's mission in introducing his version of morality? From my readings of both Aristotle's Ethics and Politics, I would offer the following brief summary for present purposes. Attaining the desirable good of a healthy society requires the related good of citizens living well. To achieve this underlying aim, Aristotle wished to instil a morality in individuals so as to produce people of upstanding character: at the very least, they would exercise the virtues of self-reliance, courage and generosity. (James Rachels, in his book The Elements of Moral Philosophy, lists 24 virtues which he describes as a 'reasonable start' (Rachels 1993: 163).) Such virtues would be intrinsically valuable in their own right; however, Aristotle realised that exercising some virtues required individuals to make sacrifices, and so he recommended a habituation process which included a common education for all. In turn, this would encourage solidarity amongst citizens, leading to a cohesive society with shared values (although a few exceptions might be made, such as those individuals who could never become habituated and who would be cast out from society). Overall, this is a very prescriptive view of virtue ethics.

However, from a practical point of view, one may expect some differences in the way virtue ethics manifests itself in differing societies. For instance, physical factors such as climate, altitude and geography may affect peoples' lifestyles and therefore the values that they hold. So too would the diet available in any particular location: vegetarian societies may find the idea of slaughtering animals taboo. The customs and religion inherited by any particular society may also inform its values. Therefore, societies would privilege different virtues over others, resulting in differing versions of virtue ethics manifesting in different societies. Hence, social context would be an important factor when virtue ethics is realised in any particular society (and The Stanford Encyclopedia of Philosophy describes this type of 'cultural relativity' as a particular criticism of virtue ethics: see https://plato.stanford.edu/entries/ethics-virtue/#ObjeVirtEthi).

From this I would quite simply conclude that, although differing societies wishing to live virtuously may attempt to abide by a standard concept of virtue ethics, it will manifest itself differently due to differing social contexts.

The consequences of cultural relativism

Ana asked:

Explain the consequences of adopting cultural relativism?

Answer by Paul Fagan

Generally, it is fair to say that persons do not 'adopt' cultural relativism: instead they have it thrust upon them. To explain: the culture that a person inhabits sets norms and standards that become inculcated in that person. This may harden into a 'mindset' that a person is either unwilling or unable to reject. It affects many obvious aspects of life, such as the clothes persons feel comfortable wearing or the food they prefer; however, it should be appreciated that the process sinks deep into a person's psyche, reaching areas that one may not even be aware are affected.

From a philosopher's viewpoint, this has one major consequence, which will now be explained. For those philosophers who find cultural relativism to be a hindrance to good judgement, it causes problems when assessing whether persons from other cultures have behaved rightly or wrongly. Generally, one's own inculcated variant of cultural relativism would be expected to encourage criticism of other cultures, with more criticism generated the further a culture is distanced from one's own. For example, cows are sacred to Hindus but (most) Westerners enjoy eating beef; hence the Hindu would be expected to find Western culinary practices reprehensible. Ideally, the good philosopher should be able to dispense with their own cultural relativism when judging others. This process, and its pros and cons, is described in more detail by James Rachels in his book The Elements of Moral Philosophy, in the chapter entitled 'The Challenge of Cultural Relativism' (New York: McGraw-Hill, 1993, pp. 15-29).

That said, rather than persons 'adopting' cultural relativism, the degree of it present in a society could be increased if that were considered beneficial for the society. For instance, Aristotle wished for persons to behave virtuously, where a virtue may be defined as 'a trait of character, manifested in habitual action, that is good for a person to have' (Rachels 1993: 163); furthermore, it may be argued that society at large would benefit from encouraging such individual traits, as their combined action would contribute to maintaining cohesive communities (Rachels 1993: 169-170). To achieve this, Aristotle recommended a common education shared by all, which would be 'the business of the state' and would encourage solidarity amongst citizens (Aristotle 1999, Politics, pp. 180-1: http://socserv.mcmaster.ca/econ/ugcm/3ll3/aristotle/Politics.pdf).

For many there would be a trade-off: one may reinforce one's own community's values but gain less understanding of the values of other societies. With this knowledge, some may be tempted deliberately to promote the interests of their own societies at the expense of others. It may be argued that this had already happened in the Western world before the Second World War, when unscrupulous governments achieved a greater measure of cultural relativism in their societies by appropriating the education systems and media; in turn this was used to vilify other peoples. For the moment, Western societies have opted instead to foster more understanding of other societies.

In conclusion, the consequences of encouraging cultural relativism can be summarised quite succinctly: reinforcing cultural relativism may forge cohesive cultures, which may initially seem benign, but it may also discourage the understanding of other cultures.


What do we owe to future generations?

Jeremy asked:

What, if anything, do we owe to future generations? And, far more importantly, WHY?

Answer by Paul Fagan 

In 1987, the United Nations' Brundtland Commission published Our Common Future, becoming one of the first international organisations to popularise the term 'future generations'. This occurred within its explanation of the notion of sustainable development, pithily defined as 'development that meets the needs of the present without compromising the ability of future generations to meet their own needs' (my italics) (United Nations WCED 1987, p. 43). The United Nations' guidance indicates that humanity should adopt a path that preserves the goods people enjoy today for the enjoyment of future persons. The underlying rationale is that poverty, whether suffered by persons today or tomorrow, is immoral and should be prevented. Warning that we are using our resources too rapidly, the United Nations advised, in language couched in commerce, that we 'may show profits on the balance sheet of our generation, but our children will inherit the losses' (United Nations WCED 1987, p. 8).

However, the prospect of considering future generations is fraught with difficulties. One commentator, Ernest Partridge, in his article 'Future Generations', has provided a slew of arguments which may lead one to reject considering the needs of our descendants (see Dale Jamieson (ed.), 2001, A Companion to Environmental Philosophy (Oxford: Blackwell), pp. 377-389). For example, future persons do not have rights, and cannot have them, as they do not yet exist. Furthermore, we cannot anticipate who future people will be, since any actions we take now will cause a different set of persons to be born; this logically confounds any planning we may make on their behalf. Additionally, considering future persons forces us to deal with an abstract, unnumbered and undifferentiated concept. The question also arises as to who matters more, ourselves or future generations? There seems to be no justification for favouring one generation over another: for instance, if an attempt is made to spread resources evenly over generations, current persons might not be left with the resources they need to prosper. Finally, we cannot know where future people will place value, and therefore cannot plan for it: future persons may prefer desert to rainforest, and would implore us to act to bequeath that situation. Hence, some may conclude that catering for future generations is an impossibility, and that we should therefore not attempt it.

Nevertheless, two strands of reasoning are now provided to demonstrate why, in the minds of some, future persons should be considered. Firstly, a communitarian may argue that if we are part of an ongoing community, in which persons alive today link the persons of the past and the future through concepts such as identity or morality, then it is a natural consequence that the needs of future persons should be anticipated. This argument may be deemed strengthened if past generations anticipated our own current needs and acted to ensure our wellbeing (Partridge, pp. 380-1).

Secondly, a more individualistic argument may be provided by asking what comprises righteous conduct in the present. Most people wish to have children and grandchildren, and wish for them to live in decent conditions. Therefore, from a personal viewpoint, concern for future generations should include our own immediate descendants; and since those offspring will in turn wish to procreate and leave decent conditions to their direct descendants, it may be concluded that an ongoing consideration of future generations will perpetuate itself. Hence, it should not be an unnatural or impossible intuition to consider the needs of future generations for as far ahead as one can anticipate.

Even if one agrees that we should attempt to cater for the needs of future persons, the question of which goods should be bequeathed elicits differing opinions. Debates often focus upon whether a 'weak' or a 'strong' form of sustainable development is preferred. The 'weak' variant may be characterised by policies that allow natural goods, such as raw materials, to be converted into man-made goods, such as infrastructure, with the latter providing an inheritance for future generations. The 'strong' variant denies the interchangeability of the two types of goods and places natural goods above any concept of substitutability: for some, natural goods such as climate-regulating oceans and rainforests may be crucial to humanity's survival and are therefore precluded from conversion into man-made goods (see Connelly et al., 2012, Politics and the Environment (Abingdon: Routledge), pp. 238-241, for a more detailed discussion of the differences between the strong and weak variants).
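This distinction is sometimes formalised in the sustainability literature as a test on stocks of capital: the weak variant requires only that the total stock not decline, while the strong variant requires that the natural stock itself not decline. The sketch below follows that reading; collapsing each kind of good into a single aggregate number is my illustrative assumption, not a formula given by Connelly et al.

```python
# A minimal sketch of the 'weak' vs 'strong' sustainability tests described
# above. Treating natural and man-made goods as single aggregate numbers is
# an illustrative assumption, not a model from the cited source.

def weak_sustainability(delta_natural: float, delta_manmade: float) -> bool:
    """Weak variant: natural goods may be converted into man-made goods,
    so long as the total stock of capital does not decline."""
    return delta_natural + delta_manmade >= 0

def strong_sustainability(delta_natural: float, delta_manmade: float) -> bool:
    """Strong variant: natural goods are not substitutable, so the
    natural stock itself must not decline."""
    return delta_natural >= 0

# A plan converting some rainforest (natural) into infrastructure (man-made):
delta_natural, delta_manmade = -3.0, 5.0
print(weak_sustainability(delta_natural, delta_manmade))    # True
print(strong_sustainability(delta_natural, delta_manmade))  # False
```

On these tests, the same development plan can count as sustainable in the weak sense while failing the strong test, which is exactly the disagreement the two variants embody.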

To conclude, the relatively new realisation that living persons have the capacity to impinge upon the lives of future individuals has brought forth a notion, promoted by the UN, that we should refrain from impoverishing future generations because doing so is morally unacceptable. If the debate were more widely aired, a consensus might emerge that answers the questions posed more formally and more completely.


Justifying a war of aggression

Andrew asked:

Are there any circumstances in which a war of aggression is justified?

Answer by Paul Fagan

For some persons, such as pacifists, any form of warfare may be considered immoral. However, for many others a war of aggression may be justified, and here three types of justification are noted.

The questioner may find it surprising that for some political philosophies, war is considered an essential part of life: it keeps populations healthy and alert, and allows the evolutionary process of the 'survival of the fittest' to continue. For such political philosophies, a war of aggression requires very little justification. For instance, the Futurists, who were influential upon Italian fascism, proclaimed in article 9 of their manifesto of 1909: 'We will glorify war—the world's only hygiene' (http://viola.informatik.uni-bremen.de/typo/fileadmin/media/lernen/Futurist_Manifesto.pdf).

But what reasons would other philosophies need to justify a war of aggression? Christian philosophy has had a role to play here, traditionally supplying justification where a nation state could provide a 'just reason to go to war', or jus ad bellum. The Oxford Companion to Philosophy (OUP 1995) provides an insightful definition whereby, for a war to be just, it should be:

‘undertaken only by a legitimate authority, it may be waged only for a just cause, it must be a last resort, there must be a formal declaration of war, and there must be a reasonable hope of success’ (p. 905).

Hence, using this reasoning, nation states acting aggressively may argue that they had a just reason to go to war. A substantial body of work has been written in this area, including definitions of what actually constitutes just conduct during warfare; for further information the reader may like to visit the Stanford Encyclopedia of Philosophy, which provides a very good summary of the area (http://plato.stanford.edu/entries/war/).

The reader may also like to look back into history and analyse any particular war of aggression of their choice. If the war cannot be explained by a pugnacious political philosophy causing the aggression, then one should ask whether the aggressor felt it had a just cause. On the face of it, wars often fail to be explained adequately by either of these. This may lead one to ponder whether other processes of justification are at work, and it is suggested here that a school of thought such as utilitarianism may be in use. It is possible that the higher echelons of power, whether monarchs, governments or the establishment, carry out a utilitarian calculus, consciously or subconsciously, when deciding to go to war. More bluntly, embarking upon a war may be explained by the tools of modern business studies: a form of cost-benefit analysis may be enacted, whereby the costs and benefits are weighed against each other before war is started. The costs of waging a war may include lives lost, taxation revenue required and a diminished world standing; the benefits may include gained resources, economic or military security, and even the diversion of citizens' attention from problems at home. Hence, if after carrying out the exercise the benefits are deemed to outweigh the costs, this may provide a justification for war in the eyes of the powers that be.
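To illustrate how blunt this calculus is, here is a minimal sketch of such a cost-benefit test. Every category, figure and threshold below is a hypothetical illustration of the reasoning just described, not a model taken from any cited source.

```python
# A minimal sketch of the cost-benefit calculus described above.
# All categories and valuations are hypothetical illustrations.

def decide_on_war(costs: dict[str, float], benefits: dict[str, float]) -> bool:
    """Return True if perceived benefits outweigh perceived costs."""
    return sum(benefits.values()) > sum(costs.values())

# Hypothetical valuations on a common scale of perceived utility.
costs = {
    "lives_lost": 50.0,
    "taxation_required": 30.0,
    "diminished_world_standing": 20.0,
}
benefits = {
    "gained_resources": 45.0,
    "economic_or_military_security": 40.0,
    "distraction_from_domestic_problems": 25.0,
}

print(decide_on_war(costs, benefits))  # True: benefits (110) > costs (100)
```

Of course, the moral bluntness lies precisely in reducing lives lost and world standing to numbers commensurable with gained resources.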

Varying philosophies may, in their own ways, define whether a war of aggression is justified. Here, three have been noted: some will go to war quite readily; some will require the fulfilment of conditions; and some may calculate whether it is worthwhile. However, expounders of any of these philosophies would do well to bear in mind Machiavelli's warning that 'Wars begin when you will, but they do not end when you please'.


Rawls’ principles of justice

David asked:

I have a question about the implications of John Rawls’ two ‘principles of justice’ namely:

  1. Each person has an equal right to a fully adequate scheme of equal basic liberties which is compatible with a similar scheme of liberties for all.
  2. Social and economic inequalities are to satisfy two conditions. First, they must be attached to offices and positions open to all under conditions of fair equality of opportunity, and second, they must be to the greatest benefit of the least advantaged members of society.

My question concerns the second part of principle 2. Suppose there is a social or economic inequality which, if allowed, would reduce the wellbeing of the worst-off 1 per cent of society by (say) 1 per cent, but would increase the wellbeing of everyone else by (say) 10 per cent. Suppose this inequality has not much to do with principle 1. Would it be disallowed under Rawls' theory?

Answer by Paul Fagan

The questioner here is focussing upon Rawls' 'difference principle', which may be understood quite simply as a mechanism whereby any move to benefit society must principally benefit its least advantaged sectors. In A Theory of Justice (Belknap Press, 1999) Rawls states that:

'An inequality of opportunity must enhance the opportunities of those with the lesser opportunity' (p. 266).

Taken at face value, we should consider the least advantaged to hold a veto over society: for any planned move, if the least advantaged's position does not improve, then the move should not go ahead.

Hence, the suggestion here would almost certainly be disallowed under any form of Rawlsian governance. The suggestion effectively grafts a form of utilitarianism onto the second principle, whereby the whole stock of goods in a society increases and the average holding rises; but Rawlsian philosophising deliberately moved away from such utilitarian distributions of goods.
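To make the contrast concrete, here is a minimal sketch of the two decision rules applied to the questioner's figures. Encoding 'wellbeing' as a single percentage change per group, and the population weights, are my illustrative assumptions, not anything found in Rawls.

```python
# A minimal sketch contrasting the difference principle with a utilitarian
# test, applied to the questioner's scenario. The single-number encoding of
# each group's wellbeing change is an illustrative assumption.

def difference_principle_allows(changes: dict[str, float]) -> bool:
    """Allow an inequality only if the worst-off group does not lose."""
    return changes["worst_off_1_percent"] >= 0

def utilitarian_allows(changes: dict[str, float],
                       weights: dict[str, float]) -> bool:
    """Allow the inequality if aggregate, population-weighted wellbeing rises."""
    return sum(changes[g] * weights[g] for g in changes) > 0

# The questioner's scenario: worst-off 1% lose 1%, everyone else gains 10%.
changes = {"worst_off_1_percent": -1.0, "everyone_else": 10.0}
weights = {"worst_off_1_percent": 0.01, "everyone_else": 0.99}

print(difference_principle_allows(changes))  # False: the move is vetoed
print(utilitarian_allows(changes, weights))  # True: aggregate wellbeing rises
```

On these rules the questioner's inequality passes the utilitarian test but fails the Rawlsian one, which is precisely the divergence described above.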

That said, actually measuring whether the least advantaged's position has improved may present some practical problems, and Rawls himself suggested at least two measures. In illustrating the difference principle via the 'distribution of income' (pp. 67-68), should improvement be gauged in monetary terms? Or should it be measured over a longer timescale, where the 'appropriate expectation in applying the difference principle is that of the long-term prospects of the least favored extending over future generations' (p. 252)?

An interesting essay by Christopher Heath Wellman entitled 'Justice', which differentiates between differing types of distribution in society, including the Rawlsian and the utilitarian, may be found in The Blackwell Guide to Social and Political Philosophy.