Original Articles

Analysing Causality: The Opposite of Counterfactual is Factual

Pages 3-26 | Published online: 14 Oct 2010
 

Abstract

Using Jim Woodward's Counterfactual Dependency account as an example, I argue that causal claims about indeterministic systems cannot be satisfactorily analysed as including counterfactual conditionals among their truth conditions because the counterfactuals such accounts must appeal to need not have truth values. Where this happens, counterfactual analyses transform true causal claims into expressions which are not true.

Notes

Correspondence: Department of History and Philosophy of Science, University of Pittsburgh, Pittsburgh, PA 15260, USA. E‐mail: [email protected]

The most widely accepted counterfactual account of causality is David Lewis's, according to which, if c and e are actual events, c causes e only if (for deterministic causes) e would not have occurred had c not occurred, or (for non‐deterministic causes) the probability of e's occurrence would have been substantially lower had c not occurred (Lewis 1986, 167ff., 175). The counterfactuals that CDep includes in the truth conditions of causal claims are different from the ones Lewis uses. But what I have to say about the former can easily be applied to the latter.

As is customary in the literature, I use the term “value” broadly. Values include a thing's qualitative as well as quantitative states, its features, conditions, etc. Existence and non‐existence at a given time or place are values. Being present in large quantities in the lining of your throat is one value for diphtheria bacteria; being absent is another. Inflammation is one value for your throat; lack of inflammation is another. Red, green, and puce are values of coloured objects. And so on.

What I am calling modularity is not exactly the same as what Woodward (2002, S374) calls modularity. Hausman and Woodward (1999, 542ff., 549ff.) use “modularity” for a condition on functional equations which describe relations between variables corresponding to the components of a system, instead of a condition on the system the equations describe. I suspect this turns out to be significantly different from the usage of Woodward (1999), and I'm sure it differs even further from my notion of modularity. However, these differences in usage will not affect my arguments with regard to CDep's appeal to counterfactuals to analyse causality.

This example violates (3), the condition on immaculate intervention, as well as my modularity condition, (4). But the functional equations the Hausman–Woodward modularity condition applies to contain error terms whose values could perhaps be adjusted in such a way as to allow the example of the tides to violate my condition (3) without violating what they call modularity. I don't understand the constraints CDep places on the use of error terms well enough to know whether such adjustments would be permissible. If they are not, the example of the tides violates Hausman–Woodward modularity as well as mine. In either case, it is not an immaculate manipulation.

An arrow leads back to X if it has X at its tail, or if its tail is at the head of another arrow that leads back to X, and so on.
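As a minimal sketch of this recursive "leads back to" relation (my own illustration, not part of the paper), assuming a directed graph stored as a dictionary mapping each node to the nodes at the heads of its outgoing arrows; the names here are hypothetical:

    def leads_back_to(graph, node, x, seen=None):
        # True if some chain of arrows runs from x to node,
        # i.e. x is an ancestor of node in the directed graph.
        if seen is None:
            seen = set()
        for parent, children in graph.items():
            if node in children and parent not in seen:
                seen.add(parent)  # guard against revisiting nodes
                if parent == x or leads_back_to(graph, parent, x, seen):
                    return True
        return False

    # Example: X -> A -> B, so an arrow into B leads back to X.
    print(leads_back_to({"X": ["A"], "A": ["B"]}, "B", "X"))  # True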

I am indebted to Wendy King for telling me about some considerably more interesting and sophisticated research of hers on which this example is loosely based. Notice that while B is represented as screened off by a single factor, C is represented as screened off by two together.

I say “may have to adjust” instead of “must adjust” because, for example, the influence of B may cancel out that of a new value assigned to F so that C retains its original value.

For example, it may be, but is not always, assumed that the values of the items represented by nodes in a graph do not have any common causes which are not represented in the graph (Glymour 1997, 217). For more assumptions, see Scheines (1997, 193ff.) and Glymour (1997, 206ff., 209ff., 319ff.).

It is worth noting that Glymour, Spirtes, and Scheines emphatically do not believe that every real system meets all of the conditions (including the one which corresponds to modularity) that they elaborate for DAG modelling. Indeed, they conceive their project not so much as determining what can be learned from statistical data about systems which meet all of them, as figuring out what, if anything, can be learned when one or more of their conditions are not met, and what, if anything, can be done to decide whether a given condition is satisfied by the system which produced the data used to construct the graph (Glymour 1997, 210, 217).

The following detail, which I've ignored, is extremely important in other contexts. To construct a DAG, one applies the algorithms (supplemented by whatever knowledge is available concerning the system to be represented) to statistical distributions among data produced by observing or performing experiments on the system of interest. But in most cases the algorithms deliver, not a single DAG, but a class of different graphs, each of which represents a somewhat different causal set‐up capable of producing data which are the same in all relevant respects (Scheines 1997, 166).

Alternatively—depending on your logical preferences—it will have a third truth value, will be truth valueless, or will be false.

This is a terminological stipulation. In requiring the antecedents of counterfactuals to be false descriptions of the actual world, it fits the standard literature. But unlike Goodman (1965, 36) and others, I allow counterfactuals to have consequents which are actually true. Some philosophers include subjunctives like “this property would bring $200,000 if it were sold to a willing buyer by a willing seller” under the heading of counterfactuals. This leaves room for saying that what happens later on (a willing buyer purchases the property from a willing seller) can realize the conditional.

I don't claim that none of the relevant counterfactuals has non‐vacuous truth values. It's plausible that some causes operate deterministically. It's conceivable that there are non‐vacuously true unrealized counterfactual claims to be made in connection with some indeterministic causal interactions. It will serve my purposes if the counterfactuals that CDep and other such theories include among the truth conditions of just some causal claims lack non‐vacuous truth values.

Lewis (1986, 175–84, 121ff.) has made some suggestions about how to treat non‐deterministic causality counterfactually. I won't discuss this because the details he mentions don't matter with regard to what I have to say in this paper.

Lewis (1986, 45–46) couches this condition in terms of how large a miracle is required for the non‐occurrence of C, but he goes on to explain that a miracle in one world, w1, relative to another world, w2, is an occurrence which accords with the laws of w1 but not with those of w2. The greater the disparity between the laws, the greater the miracle.

See, for example, Scriven (1961, 91), Cartwright (1983, chs 2 and 3), and Suppes (1984, chs 2 and 3).

For Belnap, Perloff, and Xu (2001, 139), times are point instants. Mine are longer because many manipulations and changes take at least a little while to occur, and it takes a while for an entity to have some of the values in the examples we'll be looking at. Perhaps this could be avoided by decomposing my times into instants, and decomposing manipulations and values into components which can exist at an instant. I have no theory about how this might go.

If everything stops happening so that nothing is going on at some time, t, then the moment which is actual at t (and from then on, I suppose) is barren; it includes everything that's actually going on at t, but what's actually going on then is nothing at all.

The same goes, of course, for non‐conditional predictions like (Aristotle's own example) “there will be a sea battle tomorrow” (Aristotle 1995, 30, 19a/30ff.). If the world is indeterministic, the truth value of this prediction is undecided until tomorrow comes. I suppose the same goes for the non‐conditional probabilistic prediction “tomorrow morning, the probability of a sea battle occurring tomorrow afternoon will be 0.8”. I say “is true” where Belnap, Perloff, and Xu (2001, 133–76, passim) say “is settled true”.

David Papineau suggested the probabilities might propagate along the actual history of the world in such a way as to rule out all but one of the possible segments in which the intervention occurs. If 1/2 is the probability required by (6) for RY at t–1 conditional on IX at t–2, then (6) is non‐vacuously true if the probabilities propagate in such a way as to rule out every alternative to HA except S1, and false if only S2 remains possible relative to what was actually the case just before t–2. I suppose the probabilities could propagate that way, but I don't know why they should propagate as required by CDep for every true causal claim. Papineau raised this possibility during the London workshop's first session on 11 September 2001. Our discussion of it ended when we looked up at the TV in the coffee shop we'd gone to for a break just in time to see the first of the Twin Towers disintegrate.

Clark Glymour objected by e‐mail to an early draft of this paper that in assuming that the probability of RY at t–1 depended on how many segments included it, I was also assuming that there are countably many possible histories which realize (6). If the truth value of the probabilistic counterfactual depended on how many segments include RY at t–1, (6) would be neither true nor false (except trivially so) if there were uncountably many. But why (the objection continues) shouldn't the probability of RY at t–1 depend upon some kind of non‐counting probability measure? Why couldn't that measure be such that (6) turns out to have a non‐vacuous truth value? I can't answer this question because (a) I don't know what the possibility space would have to be like in order to prevent the uncountably many possible IX at t–2 segments from gathering into countably many groups, some of which satisfy (6) and some of which do not, and (b) I don't know what kind of non‐counting probability measure would apply to a possibility space which didn't allow such groupings. Accordingly, I don't know whether the principled application of a non‐counting probability measure would make counterfactuals like (6) true where CDep needs them to be.

The issue that Glymour's objection raises is crucial to prospects for counterfactual analyses of causality. A clear attraction of non‐counterfactual accounts of causality is that they are not hostage to the development of plausible, non‐counting measures of probabilities which deliver all the results counterfactual accounts might require for their plausibility.

A bit less sloppily: where p is any such claim and t is any time during or after which e belongs or belonged to the actual history of the world, p is either true or false if evaluated relative to what is actually the case at t.

In saying this I do not mean to be endorsing CDep in a qualified way by suggesting that it delivers a satisfactory analysis of some though not all causal claims. Whether it does is a question this paper has no space to consider.

Genetic expression is another place to look for such counterexamples (Chu et al. 2000), as are the complicated and tangled calcium cascades involved in neuronal signalling, and synaptic adaptation (see, e.g. Hille 2001, ch. 4, 95–130).

With regard to this, I am indebted to Stathis Psillos for drawing my attention to a congenial remark from Judea Pearl (2000b, 428): “the word ‘counterfactual’ is a misnomer” as used in connection with inferences to and from causal claims. So‐called counterfactual claims, for example, about the probability that my headache would have continued if I had not taken aspirin, given the fact that it went away when I did, “are merely conversational shorthand for scientific predictions. Hence QII stands for the probability that a person will benefit from taking aspirin in the next headache episode, given that aspirin proved effective for that person in the past.… Therefore, QII is testable in sequential experiments where subjects’ reactions to aspirin are monitored repeatedly over time” (Pearl 2000b, 429).

I don't know whether or on what grounds Glymour and his colleagues might disagree. As I read Glymour (1997, 317), he leaves open the possibility of an analysis of the notion of causality which somehow involves counterfactual dependencies. But he denies that his use of Bayes nets to study causal structures assumes or implies that there actually is or can be any such analysis. Counterfactuals are used to construct and manipulate DAGs in order to discover causal structures—not to set out the contents of causal claims.

After retraining, and before visual cortex lesioning, the blind rats’ performance was not notably worse than it had been after they had been trained and before they were blinded (Lashley 1960, 437, 438).

The five structures were the hippocampal lobes, the auditory cortex, the lateral nucleus, pars principalis, the retrosplenial areas (anterior nuclei), and the subicular areas (Lashley 1960, 446).

The version of Lashley's paper that I consulted is slightly abridged, but all that's missing is one or a few tables (Lashley 1960, viii).
