Let’s say that there’s a disease with a 2% mortality rate (2 out of 100 people who contract it die). Imagine that someone contracts the disease and dies (picture it as an evil dictator if that makes the example easier to bear). Why does the D-N model of explanation not offer an explanation for the dictator’s death? (think about the law that is involved, think about deductive arguments, think about the link between prediction and explanation).
Answer by Gideon Smith-Jones
This is what I would call a ‘chestnut’.
Someone, years ago, thought up your example as an objection to Carl Hempel’s ‘Deductive-Nomological’ model of explanation, and ever since philosophy instructors have routinely trotted it out.
Firstly, to get the historical perspective: it was the great Scottish philosopher David Hume who proposed ‘regularity’ as the basis for all causal explanation, in his book A Treatise of Human Nature (1739-40). As part of his analysis, he gave a list of ‘rules for judging causes and effects’. Cause-effect relationships aren’t something that you just see. They are something you have to judge, on the basis of all you see and know.
Unsurprisingly, it turns out that not every perceived regularity is an ‘explanation’ or tracks ‘causes’ and ‘effects’, and not every cause-effect relation can be defined precisely in terms of regularity, or ‘law’.
There are two points here: one concerns what is, or is not, an adequate empirical explanation. The other concerns our notion of a cause. I don’t see any meaningful distinction here, and neither did Hume.
Elizabeth Anscombe, one of the leading 20th-century British philosophers, gives the case of contracting a disease as a purported counterexample to the Humean regularity analysis of causation, in her 1971 lecture ‘Causality and Determination’. What she was really objecting to, I surmise, is the over-optimistic use of Hempel’s model. There are so many times when we think we have offered a ‘full explanation’, when in reality the facts remain forever beyond our grasp.
Consider the following exchange:
‘Why did Bill die?’
‘He contracted Blank’s Disease.’
‘But Jill contracted Blank’s Disease and she didn’t die.’
‘Well, Jill was lucky, Bill wasn’t.’
There you have a complete and adequate explanation of why Bill died, given what we know. You can elaborate on it if you like: say that Bill smoked and drank heavily, which increased his chances of dying from the disease; but there are heavy drinkers and smokers who survive.
In terms of Hempel’s D-N model:
1. If x contracts Blank’s Disease, x’s chance of dying is one in fifty.
2. Bill contracts Blank’s Disease.
3. Therefore, Bill’s chance of dying is one in fifty.
How’s that an explanation? There is much we don’t know and never will. The precise configuration of Bill’s immune system when the bacterium first entered his body, what he had for lunch that day, and so on. To track the actual causes and effects, you’d need a total scan of Bill’s body, down to microscopic detail, and then a supercomputer to analyse the results. And even after all that you could miss the crucial ‘regularity’ or causal link.
The fact is, we accept, in so many cases, that explanations are not just ‘relative to interest’ as Hilary Putnam famously claimed but also relative to what we can know.
Consider another example:
‘Why did Bill die?’
‘Jill aimed at him with her high-velocity rifle and accidentally pulled the trigger.’
‘But Jill misses forty-nine shots out of fifty at that range.’
‘Well, Bill was unlucky.’
In this case, we can do better. Bill didn’t die just because Jill pulled the trigger; Bill died because a high-velocity rifle bullet hit him straight on in the middle of his forehead as a result of Jill’s pulling the trigger, and everyone who is hit by a high-velocity rifle bullet straight on in the middle of the forehead dies.
See the difference? Duh!
Your instructor’s example only looks like an objection to Hempel’s D-N model because it invites you to pick the wrong explanandum (‘thing to be explained’). If you substitute ‘Therefore, Bill dies’ for 3. above, the conclusion no longer follows logically from the premises. Obviously. So what? Because human knowledge is necessarily limited, we can’t explain everything down to the finest detail. The rest is down to chance, or luck. We give, and accept, the kind of explanation that is available in any given case.
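The logical point can be made concrete with a small simulation (a sketch only; the names and the seed are my own, the ‘one in fifty’ figure is the 2% mortality rate from the example). A statistical law plus the fact of infection fixes only a probability: among people satisfying exactly the same premises, some die and some survive, so ‘x dies’ is not a deductive consequence of the premises.

```python
import random

def blanks_disease_outcome(rng):
    """Statistical 'law' (hypothetical): anyone who contracts
    Blank's Disease dies with probability 1/50 (2% mortality)."""
    return "dies" if rng.random() < 1 / 50 else "survives"

# Simulate 100,000 people who all satisfy the same premise:
# 'x contracted Blank's Disease'.
rng = random.Random(42)
outcomes = [blanks_disease_outcome(rng) for _ in range(100_000)]

# Both outcomes occur under identical premises, so neither
# 'x dies' nor 'x survives' follows deductively from the 'law'
# plus the fact of infection.
print(outcomes.count("dies"))   # roughly 2,000 out of 100,000
print("dies" in outcomes and "survives" in outcomes)
```

This is exactly the gap between 1.–3. above and ‘Therefore Bill dies’: the schema validly yields a chance, never the outcome.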