ABSTRACT
The experimental method is designed to secure reliable attribution of causal relationships through controlled comparison across conditions. Doing so, however, depends on reducing uncertainties and inconsistencies in the process of comparison, and this poses particularly significant challenges for the behavioral and social sciences: they work with human subjects, whose malleability and complexity often interact in unexpected ways with experimental manipulations, resulting in unpredictable behavior. Drawing on a Science and Technology Studies perspective and one author’s experience of experimental work, this paper examines how experimental social scientists manage to establish objectivity and standardization in the face of the vagaries of working with human subjects. To identify experimental researchers’ solutions to this challenge, we draw on methodological discussions among applied social scientists as naturally occurring data, through which we show how some seemingly mundane practices play essential roles in extracting patterns from otherwise unpredictable behavior in the lab. Closely examining these strategies, we reveal the inherent instabilities of the experimental method as adopted in the social sciences and discuss their methodological implications. In conclusion, we offer tentative suggestions for escaping the kinds of methodological impasses we have identified.
Disclosure Statement
No potential conflict of interest was reported by the author(s).
Notes
1. By ‘experimental behavioral and social sciences’ we mainly refer to disciplines that study human behavior with the experimental method. According to conventional disciplinary division, most experimental social sciences fall into this category, but some areas not conventionally considered as social sciences, such as neuroscience, also fit this description. In the rest of the paper we use the general label ‘social sciences’ for succinctness.
2. We acknowledge the heterogeneity across scientific sub-disciplines and that not all sub-fields of the natural sciences can reach the same level of mathematical sharpness and predictive precision as physics (Nelson 2016). In the rest of the paper we sometimes use the wording ‘natural sciences’ or ‘the hard sciences’ to refer to the older and more deterministic sciences, including Newtonian physics and chemistry, but our comparison is a very targeted one that aims to focus attention on the challenges specific to experimental social sciences.
3. Statisticians debate how best to define confounds (Pearl and Mackenzie 2018). Some emphasize a spurious variable connected to both the input and output variables (e.g. Carr et al. 2018), but the word confound is also often used loosely in cases where a variable that should have been controlled for was not, in the sense that its influence on the output is mixed with that of the input variable and therefore contaminates estimates of the relationship between input and output (Pearl and Mackenzie 2018).
4. Here the contrast is drawn against cognitive psychology, which seeks to understand the hidden cognitive processes that are assumed to be independent of context.
5. The results are understandably more variable as individual factors play a bigger role in the evaluative process, even when some personal characteristics are statistically controlled for.
6. Cognitive psychology and neuro-psychology do use practice sessions and rehearsals extensively, as their focus tends to be on (ostensibly) mechanical processes isolated in human brains (Cohn 2008; Martin 2022).
7. This is short for Online Recruitment System for Economic Experiments (http://www.orsee.org/web/).
8. See https://www.sona-systems.com.
9. This is the idea that those who are told that they do better/worse than the average person in one domain, such as carbon footprint, will consequently behave in the opposite way in another domain such as buying green energy.
10. This undermines the presumed efficiency of trade and has important economic and legal implications.
11. Longino (2013) surveys research approaches to studying human behavior and demonstrates ways in which the inseparability of effects and the incommensurability of approaches threaten research validity and make policy research problematic.
12. Though note the hostile reaction of some populist politicians to the role of ‘experts’ in the determination of policy.
13. Compare the sociolinguist Labov: ‘Formalisation is a useful procedure even when it is wrong: it sharpens our questions and promotes the search for answers’ (Labov 1972, 61).
Additional information
Funding
Notes on contributors
Carol Ting
Carol Ting is Assistant Professor in the Department of Communication, University of Macau. Her research focuses on social science experiments, and she has recently published in the Social Science Journal and the International Journal of Social Research Methodology.
Martin Montgomery
Martin Montgomery is Visiting Professor in the School of Humanities, University of Strathclyde (and Emeritus Professor in the Faculty of Arts and Humanities of the University of Macau where he served as Dean). He has published widely on language and media.