Pages 1147-1170 | Published online: 19 Nov 2019
 

Abstract

We revisit the assertion that entrepreneurs are uniquely characterized in their ways of thinking: specifically, that they are relatively more prone to the overconfidence bias and the representativeness heuristic in their decision‐making. We replicate an earlier seminal study in entrepreneurial cognition, with a wider and more current survey. We then extend that analysis by investigating whether such “different thinking” leads to different (i.e., less rational) choices and different (i.e., worse) firm performance. Given the expected differences, we also investigate whether there exist other factors that affect the use of such biases and heuristics, to control their effects on focal outcomes.

Notes

1 This paper contributes in several ways beyond what was presented in the original BB97 study. As described, there are both upstream and downstream extensions, as well as consideration of other outcomes and controls, to address calls for such research in related works. We also present the replication results in an updated fashion that clarifies changes in “hit rates” and reports what the survey participants actually chose, not just how they justified those choices (the latter providing new results about the relative rationality of entrepreneurs).

2 Although a bias like overconfidence is considered a measure of error (Kahneman Citation2003), its presence can have positive compensating effects when other errors exist (e.g., an error due to risk aversion: the overconfidence error could make a choice more risky, which would help counteract the effect of a risk aversion error that would make the choice less risky). Note that overconfidence is correlated with the use of specific decision modes (heuristics) and with increases in specific abilities (e.g., salesmanship). As with any “intermediate” measure, the overconfidence bias has both causes and effects, and, as with most applied psychology concerns, its causal nature is a question for future work.

3 We noted that in the original BB97 study the “reasoning” was measured in the decision‐making scenarios used to calculate the representativeness heuristic, but the “actual choices” made in those scenarios were not reported. We were curious whether the two differed; in our study they did.

4 When a short‐cut (a heuristic) is used, the decision must be “less rational,” because, by definition, any short‐cut is a deviation from “fully rational” (i.e., optimal) decision‐making. That said, a short‐cut could provide the same decision choice (e.g., choosing product A over B in a scenario) with less computation, making it “economically better” in that sense, as it gives the same output for a cheaper input in that specific scenario. The problem is that the set of “appropriate” settings is not known ex ante, and users of short‐cuts end up applying them outside that set, producing what is normally observed as non‐optimal, or less rational, decision‐making. As such, discussions of the performance implications of biases and heuristics are difficult.

5 Note that entrepreneurs do not always suffer more from biases than their manager peers; Burmeister and Schade (Citation2007) test for differences in the “status quo” bias and find that bankers are more likely than entrepreneurs to exhibit it in consumer scenarios.

6 Unfortunately, the original data as well as some of the specific questions were unavailable from the original authors due to nondisclosure issues.

7 MarketTools’ sample respondents are geographically scattered and mobile, and consist of members who regularly complete online surveys. This technique is also less prone to non‐sampling errors such as data collection and data processing errors. None of the common problems in electronic survey applications applied to our application: lack of universal coverage was not an issue given the validated representative population of MarketTools; bias in sampling frames due to users versus non‐users of the Internet (and e‐mail) was also not a concern given MarketTools’ database; and compatibility and technical problems simply did not exist with our application.

8 Specifically, we used Model 3A [p. 899], where the “researcher cannot obtain the predictor and criterion variables from different sources, cannot separate the measurement context, and cannot identify the source of the method bias. In this situation, it is best to use a single‐common‐method‐factor approach [SCMF] (Cell 3A in Table 5).” We followed Elangovan and Xie (1999): using the LISREL statistical package for the analysis, our SCMF model produced χ2(229) = 327.70, p = .00, GFI = 0.890, while the BASE model produced χ2(260) = 412.49, p = .00, GFI = 0.861. We interpreted these statistics per Elangovan and Xie (1999, p. 365): even though the overall chi‐square statistics for the two models were significant, the incremental fit index difference was a rho of only 0.012, suggesting insignificant improvement, which means that the method effects were not significant (Bentler and Bonett 1980). In other words, the SCMF approach suggested by Podsakoff et al. (Citation2003) indicated no common method bias issue for our study.
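The nested-model comparison above can be retraced from the reported statistics alone; the following is a minimal sketch of that arithmetic (the analysis itself was run in LISREL, so the Python below is purely illustrative):

```python
# Minimal sketch of the nested-model comparison using the fit statistics
# reported above; illustrative only (the actual analysis was run in LISREL).
from scipy.stats import chi2

chi2_scmf, df_scmf = 327.70, 229   # single-common-method-factor (SCMF) model
chi2_base, df_base = 412.49, 260   # BASE model (no method factor)

# Chi-square difference test between the nested models
delta_chi2 = chi2_base - chi2_scmf        # 84.79
delta_df = df_base - df_scmf              # 31
p_value = chi2.sf(delta_chi2, delta_df)

# The substantive judgment, however, rests on the incremental fit index:
# a change (rho) of only 0.012 is treated as a trivial improvement, so the
# method factor is judged not to capture meaningful common variance.
print(f"delta chi2 = {delta_chi2:.2f}, delta df = {delta_df}, p = {p_value:.4f}")
```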

Specifically, the procedural methods we used included:

•Protecting respondent anonymity to decrease the respondents’ tendency to give socially desirable responses. We accomplished this through the online survey method chosen, where anonymity was guaranteed through the third‐party intermediary.

•Reducing survey item ambiguity. We accomplished this through careful attention to wording in our questions, assessed through our pretesting stage.

•Separating scale items to reduce the likelihood of respondents guessing the relationship between variables and then consciously matching their responses to those relationships. We accomplished this by placing predictor and criterion variables far apart; that is, we separated dependent and independent variables in the survey to diminish the effects of consistency artifacts (Podsakoff et al. Citation2003; Salancik and Pfeffer Citation1977).

•Targeting the top managers as respondents. Single‐respondent bias is less of a problem when the focal organizations are small (Gerhart, Wright, and McMahan Citation2000). By surveying the top managers of the new ventures and the C‐suite executives of the medium and large firms, we obtained the most information about the enterprise from that single response (Clark Citation2000).

The further statistical methods we used included:

•Conducting Harman's (Citation1967) one‐factor test on the data to ascertain whether one factor accounts for most of the variance when all variables are entered together (an illustrative sketch of this test follows this list). Our results gave seven factors with eigenvalues over 1.0, where the largest factor explained only 21 percent of the variance.

•Assessing the significance of interaction terms in the analysis to determine whether a pattern of significant interaction terms exists (a sketch of such an interaction test also follows this list). For example, the results from analyzing the interaction of cognition and environmental terms are unlikely to have resulted from single‐respondent bias (Aiken and West Citation1991; Kotabe, Martin, and Domoto Citation2003), because it is unlikely that respondents would consciously theorize these complex relationships among variables when responding to the survey.
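As referenced in the first bullet above, the following is an illustrative sketch of Harman's one‐factor test; the item matrix is a random placeholder rather than our survey data, so only the scoring logic carries over:

```python
# Illustrative sketch of Harman's one-factor test on placeholder data
# (random numbers stand in for the survey items; only the logic is relevant).
import numpy as np

rng = np.random.default_rng(0)
items = rng.normal(size=(200, 25))   # placeholder: 200 respondents x 25 items

# Eigenvalues of the item correlation matrix (unrotated factor solution)
corr = np.corrcoef(items, rowvar=False)
eigenvalues = np.linalg.eigvalsh(corr)[::-1]   # descending order

n_factors = int((eigenvalues > 1.0).sum())               # eigenvalue > 1 rule
first_factor_share = eigenvalues[0] / eigenvalues.sum()  # variance of factor 1

# Common method variance is a concern only if a single factor emerges or one
# factor explains the majority of the variance; in the actual survey data
# reported above, seven factors emerged and the largest explained about 21 percent.
print(n_factors, round(first_factor_share, 2))
```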
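Likewise, for the second bullet, here is a minimal sketch of a moderated‐regression interaction test; the variables named cognition, environment, and outcome are simulated stand‐ins for our measures, not our data:

```python
# Minimal sketch of testing an interaction (moderation) term; the simulated
# variables below are placeholders for the cognition and environment measures.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 300
cognition = rng.normal(size=n)     # placeholder predictor (e.g., a bias score)
environment = rng.normal(size=n)   # placeholder moderator (e.g., dynamism)
outcome = (0.3 * cognition + 0.2 * environment
           + 0.25 * cognition * environment + rng.normal(size=n))

# Mean-center the predictors before forming the product term (Aiken and West 1991)
cogn_c = cognition - cognition.mean()
env_c = environment - environment.mean()
X = sm.add_constant(np.column_stack([cogn_c, env_c, cogn_c * env_c]))

# A significant coefficient on the product (last) term indicates an interaction;
# such a pattern is unlikely to be produced by single-respondent bias alone.
model = sm.OLS(outcome, X).fit()
print(model.pvalues)
```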

9. Note that we were not restricted to the SIC codes of the original BB97 study (SICs 2800–3800) or to its single geographic area (one U.S. state). We did obtain a similar response rate.

10. Due to restrictions on the population of respondents available through MarketTools, we had to adjust our criteria relative to the BB97 study in terms of the executive managers' firm characteristics. Respondents needed to work in firms of 100 FTEs or more (versus 10,000 employees in the BB97 study), and we did not restrict the firms to being publicly owned. However, we were not restricted to only two corporations and only a few SIC codes, as the original study was.

11. Individual overconfidence was measured based on responses to five questions about the death rates from various diseases and accidents in the United States. The questions were based on the most recent vital statistics report prepared by the National Center for Health Statistics. Respondents had to choose which of two causes had the higher attributed death rate, and then indicate how confident they were in that selection.
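One common way to score such paired‐choice calibration items is to compare average stated confidence with the proportion of correct selections; the sketch below illustrates that logic with hypothetical responses (it is not necessarily the exact computation used in our analysis):

```python
# Illustrative calibration-style scoring of the five paired-cause items;
# the responses are hypothetical and the scoring is one common approach,
# not necessarily the exact computation used in the analysis.
correct = [1, 0, 1, 1, 0]               # 1 if the higher-rate cause was chosen
confidence = [0.9, 0.8, 0.7, 1.0, 0.6]  # stated confidence in each selection

accuracy = sum(correct) / len(correct)
mean_confidence = sum(confidence) / len(confidence)

# Positive values indicate overconfidence (confidence exceeds accuracy)
overconfidence = mean_confidence - accuracy
print(round(overconfidence, 2))
```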

12. We chose this measure primarily because it was a natural outcome of the original study that was curiously not reported; it is a natural outcome because it would have been recorded in that data as the closure to the representativeness scenarios. While it is of interest to identify the rationale expressed by the decision‐makers in their thought processes (for cognitive research), it is also of interest (to the strategist) to identify what the actual choice was, regardless of the process taken to get there. Thus, we conduct analyses here involving the measure based on the choices made. It is a directly related extension of the original study, addressing what we considered an addressable flaw.

13. For robustness, we also report t‐test statistics for differences in the means of the focal variables between entrepreneurs and managers here: for overconfidence and for representativeness, the entrepreneurs have significantly greater levels than the managers (p < .01); for ratl‐rep, the entrepreneurs have greater levels than the managers, but the difference is not significant (p > .10).
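These group comparisons are standard two‐sample t‐tests; the following is a minimal sketch with hypothetical scores (the arrays are placeholders, not our data):

```python
# Minimal sketch of the group-difference t-tests; the arrays are hypothetical
# placeholders for entrepreneur and manager scores on a focal variable.
from scipy.stats import ttest_ind

entrepreneur_scores = [0.42, 0.35, 0.51, 0.48, 0.30, 0.44]  # hypothetical
manager_scores = [0.22, 0.31, 0.28, 0.19, 0.25, 0.27]       # hypothetical

# Two-sample t-test (unequal variances) for a difference in means
result = ttest_ind(entrepreneur_scores, manager_scores, equal_var=False)
print(round(result.statistic, 2), round(result.pvalue, 3))
```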

14. The unexpected non‐focal result was a significant negative correlation of alertness with ratl‐rep; our argument was that greater outside knowledge would make for better decisions. However, given the result for H3 (that alertness increases overconfidence), perhaps it is not surprising that this other measured deviation from rational decisions is also increased by alertness; greater knowledge appears to make the decision‐maker more comfortable ignoring base rates (Forbes Citation2005, p. 637; Zacharakis and Shepherd Citation2001).

15. Note that Evans (Citation1985) and Siemsen, Roth, and Oliveira (Citation2010) both agree that interaction terms cannot be artifacts of common method variance; if interactions are found, they are likely to exist (although their practical effects may be attenuated).

16. We expect that the original BB97 results would have provided similar outcomes, and that they were perhaps somewhat “oversold” in terms of playing up the value of the cognitive factors and playing down that of the controls (their controls appeared to add about as much to the “hit ratio” as their focal variables, 9 percent versus 10 percent, even after those focal variables were present; recall that their base hit ratio was about 60 percent whereas ours was about 51 percent).

17. This paper contributes by replicating the BB97 study with a more diverse sample, an updated presentation of results, and a full reporting of those results. The main contribution, however, lies in the extensions of the study both upstream (to identify possible causes of the bias) and downstream (to identify possible effects of the heuristic and the bias across several measures); we also address several calls for alternative measures and controls made in related, more recent studies.

Additional information

Notes on contributors

Richard J. Arend

Richard J. Arend is Professor of Strategy and Entrepreneurship and Bloch Research Fellow at the Bloch School of Management at University of Missouri–Kansas City.

Xian Cao

Xian Cao is a doctoral student at the School of Business, University of Connecticut.

Anne Grego‐Nagel

Anne Grego‐Nagel is a doctoral candidate at the School of Engineering, Kansas State University.

Junyon Im

Junyon Im is a doctoral candidate at the Henry W. Bloch School of Management, University of Missouri–Kansas City.

Xiaoming Yang

Xiaoming Yang is a doctoral candidate at the Henry W. Bloch School of Management, University of Missouri–Kansas City.

Sergio Canavati

Sergio A. Canavati is an Assistant Professor at the Department of Management, California State University, Los Angeles.
