SYSTEM EFFECTS AND THE PROBLEM OF PREDICTION

Pages 291-312 | Published online: 26 Mar 2013
Abstract

Robert Jervis's System Effects (1997) shares a great deal with game theory, complex-systems theory, and systems theory in international relations, yet it transcends them all by taking account of the role of ideas in human behavior. The ideational element inserts unpredictability into Jervis's understanding of system effects. Each member of a “system” of interrelated actors interprets her situation to require certain actions based on the effects these will cause among other members of the system, but these other actors' responses to one's action will be based on their own perceptions of their situation and their interpretations of what it requires. These ideas are fallible, but we cannot predict the mistakes people will make if the errors are based on information we do not have or do not interpret in the same way they do. Not only members of a system but social-scientific observers and policy makers are ignorant of others' information and interpretations, and therefore are as likely to err in their behavioral predictions as are members of the system. Thus, Jervis's book raises serious questions about how to evaluate policies directed toward producing positive system effects. The questions are unanswerable at this point, but they might be susceptible to analysis by an ambitious form of political theory.

Acknowledgments

The author thanks Samuel DeCanio, Stephen DeCanio, and Nuno Monteiro for comments on previous drafts.

Notes

1. On Converse, see Critical Review 18, nos. 1–3 (2006), republished as Friedman and Friedman 2012a; on Tulis, Critical Review 19, nos. 2–3 (2007), republished as Friedman and Friedman 2012b; on Tetlock, Critical Review 22, no. 4 (2010).

2. Mitchell 2009 is an excellent guidebook to complex-systems theory, not least in that the author frequently asks whether the theory is applicable to human realities, and does not hesitate to answer in the negative.

3. Sometimes, to be sure, he imports law-like generalizations from other disciplines, such as psychology, writing, for example, that “people tend to think that good things (and bad things) go together and thereby minimize the perceived trade-offs among desired values” (Jervis 1997, 230) and then citing a slew of psychology-journal articles. (Jervis 1976 is an extended engagement with the cognitive-psychology literature aimed at producing generalizations about the basis of misperceptions.) However, if there is one field that, in principle, might impose regularities on what would otherwise be the kaleidoscopic flux of ideational possibilities, it would be psychology, since the members of a species might well share cognitive traits that constrain their ideas. This is not to say, however, that the reading I am giving would be endorsed by Jervis. I am trying to ferret out the logical prerequisites and implications of system effects as he presents them, but sometimes readers of the book will find statements, particularly in chapter 1, that run counter to my interpretation of this logic, particularly on pages 22 and 144, where Jervis casually refers to the “laws” of economics and, on page 22, of politics. (However, Jervis does not identify these laws or claim to be adding to them.) I construe this as a case of his not having fully appreciated the radically antipositivist implications of the book.

4. It does not take a leap of the imagination, however, to “predict” that political scientists might jump on the Nate Silver bandwagon and use statistics to try to predict things other than elections. See Ward and Metternich 2012. For a pre-emptive, measured critique of the applicability of statistics to future events, see Blyth 2006; and for discussions of the related work of Nassim Taleb, see Blyth 2009, Jervis 2009, and Runde 2009.

5. Usually the models “cheat” by using survey data on presidential approval, survey data on the two candidates before and after their conventions, primary-election results, and other measures of voters’ opinions about the presidential candidates, which are variants of the very thing expressed by their votes on Election Day. We already have plenty of polls, however; if the forecasting exercises have a scholarly purpose, it is to show that “real” factors, such as changes in unemployment rates and economic growth in a given quarter prior to the election, are (somehow) at work, even though the real factors alone cannot make accurate predictions, and so must be tweaked by using polls and other direct measures of opinion. The forecasters then fit various measures of public opinion and real factors against the small N of past presidential elections to produce a model that will forecast the next one, which presupposes that there is a temporally uniform underlying mélange of causes expressible in a formula weighting the various factors. The notion seems to be that as the N grows over time, the formulae will grow more precise overall, and that if they fail in a given case, that is only to be expected, as these are mere probability predictions. In short, only the inapplicability of the ceteris paribus clause, not the inapplicability of the Homo economicus model itself, is considered as a cause of outliers. See Campbell 2012 and 2013.

6. An interesting exception is explained in Lewis-Beck and Tien 2013. The authors' best-performing model in 2012 used only a measure of subjective perceptions of objective economic conditions: namely, the net proportion of survey respondents six months before the election who said that “business conditions are worse” than they had been previously. The authors explain that on the basis of “voter behavior theory,” they would have preferred their old “Jobs Model” to the subjective model, but the predictions of the two models diverged, with the subjective model performing better (ibid., 39). That is, people's perceptions of economic conditions were more predictive of how they would vote than were the real conditions themselves. If so, however, then perhaps the dependent variable that should be of interest is not the electoral outcome (which we all find out, soon enough), but people's perceptions of economic conditions; and perhaps the hypothesized independent variables should be factors (such as media coverage of economic conditions) that could produce these perceptions regardless of whether they accurately reflect real conditions. The election-forecasting scholarship is pointless curve fitting unless the models are supposed to identify what is really behind people's votes. But if this “real” factor is people's beliefs, then the economic factors used in the likes of the Jobs Model should be seen as, at best, proxies for what is actually causal: voter perceptions (whether of unemployment, business conditions, or anything else).

7. E.g., Fiorina 1981.

8. See n6 above.

9. Or they get washed out by the use of measures of opinion to diminish the impact of real factors alone; see n5 above.

10. Moreover, even if one votes out of a felt civic obligation rather than as an attempt to affect the outcome, this obligation is not fulfilled merely by voting per se. Few would claim that they have a civic obligation to show up at the polls but that, having done so, they may proceed to choose whom to vote for by flipping a coin. (How would such an obligation make sense?) Nor would an instrumentally rational voter who thought her vote was likely to be decisive have reason to try to affect the outcome (by voting) if she could not motivate a nonrandom vote. An obligation to vote, or a desire to affect the outcome, must entail voting for the “right” candidate, i.e., the one who, the voter predicts, is likely to advance what she takes to be good ends. Yet if people knew, as rational-ignorance theory holds, that they were too poorly informed to make such predictions with any reliability, because they had deliberately underinformed themselves, then they would have no reason, whether moral or instrumental, to vote.

11. A rejection of the pretense of predictive knowledge does not entail a rejection of determinism. I am suggesting that ideas are causes of behavior. In a Laplacean sense, one could, in principle, predict ideas, hence behavior, if one had omniscient command of all the antecedent conditions that lead one person to invent or endorse or transmit an idea, while another rejects it or never hears of it in the first place. However, we do not have such knowledge, and it is a safe bet that we never will. A predictive epistemology is logically possible but pragmatically impossible. The “laws” of epistemology may be knowable in principle, but not in practice.

12. Another path toward minimizing the role of subjective beliefs in human behavior is to treat emotions as overriding (rather than being triggered by) subjective beliefs, since emotions can be assumed to have some roughly general similarity across individuals, and these similarities can plausibly be seen as objective facts that directly control individual behavior. Long ago, Jervis (1976, 4–5) dispatched political psychologists’ overemphasis on emotions by pointing out the performative contradiction it involves, at least if one is using emotion to explain the behavior of policy makers in non-crisis situations. The scholars attributing agents’ behavior to emotion surely would not attribute their own attribution of emotion to the agents as the result of emotion. Cf. Friedman 2012 and Ross 2012.

13. This is not to say that there are no differences over values, any more than by bracketing the role of emotion, one is saying that emotion never overrides, rather than amplifying, rational judgment. But if one does not set values and emotions to the side, one cannot even consider the possibility of subjective misperceptions of objective facts.

14. On complex-systems theories as theories about epistemology, not ontology, see McIntyre 1998.

15. Clearly, in this respect, I am departing from the letter of Jervis's book, but I hope not from the spirit.

16. I am referring to Hayek's quantitative notion of what makes “spontaneous” orders “complex phenomena.” See Hayek 1967.

17. See Neumark and Wascher 2009 for summaries of many dozens of studies of the effects of minimum-wage increases.

18. In response to White House claims that “a range of economic studies show that modestly raising the minimum wage increases earnings and reduces poverty without measurably reducing employment,” a Wall Street Journal editorial quoted David Neumark of the University of California, Irvine, who said that the White House claim of de minimis job losses “grossly misstates the weight of the evidence,” since about 85 percent of the studies “find a negative employment effect on low-skilled workers.” “The Minority Youth Unemployment Act,” Wall Street Journal, 19 February 2013.

19. Or, at best, putative known unknowns whose applicability Earle had reason to doubt.
