Expanding the Methodological Toolbox: Factorial Surveys in Journalism Research

ABSTRACT

Experimental designs for examining attitudes and behavior are crucial for making causal inferences. However, studies that assess the attitudes and behavior of journalists are still dominated by correlational designs, such as surveys of journalists. Elaborating on the historical and practical reasons for this, we argue in this paper why journalism scholars may benefit from adding a particular experimental approach to their toolbox: the factorial survey experiment. Using data from a factorial survey with German newspaper journalists, we illustrate the application of factorial surveys from conceptualization to data analysis. Suggestions for further fields of application are made.

Disclosure Statement

No potential conflict of interest was reported by the author(s).

Notes

1 It is beyond the scope of this paper to review the research designs used in journalism studies; rather, we would like to show that experimental designs are not as prevalent as in neighboring sub-disciplines. Following this argument, we aim to introduce factorial surveys as a further option to consider when researching journalists.

2 A Boolean search in abstracts of the database “Communication & Mass Media Complete” on January 25, 2019, yielded 148 hits for “journalism” AND (“experiment” OR “experimental study”), 438 for “journalism” AND (“content analysis” OR “content analyses”), 901 for “journalism” AND “survey”, and 1072 for “journalism” AND “interview”.

3 When entering similar search strings for political communication, the survey method was most prevalent (more than 1,500 hits); experiment, interview, and content analysis were almost equal with 500–600 hits each (abstract search).

4 We are, of course, aware that journalism research is not only concerned with studying journalists. However, investigating the work, behavior, attitudes, and decisions of journalists is at the core of the discipline and could benefit greatly from using FS designs, as we show in this paper.

5 One has to be aware, though, that in the literature the terms are sometimes used interchangeably. To prevent confusion, we follow the terminology and threefold differentiation by Auspurg and Hinz (Citation2015b). Introducing conjoint analysis and multifactorial experiments would go beyond the scope of this paper, as they differ substantially in research tradition, data structure, and, most importantly, data analysis. For introductions to (conjoint) choice experiments, see, e.g., Knudsen and Johannesson (Citation2018); to conjoint analyses, e.g., Green, Krieger, and Wind (Citation2001).

6 Of course, low response rates in factorial surveys remain a problem with respect to nonresponse bias (e.g., Fowler Citation2013).

7 However, using only a fraction of the vignettes comes with statistical disadvantages. While in a complete vignette universe "all of the dimensions and interactions between the dimensions are uncorrelated with each other" (Auspurg and Hinz Citation2015a, 16), fractions lack this desirable feature. Furthermore, some dimensions might be oversampled in a fraction. Consequently, the statistical strength of the FS is reduced.
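To make the distinction between a complete vignette universe and a fraction concrete, the following is a minimal sketch (not the authors' design; dimension names and levels are hypothetical) of how a universe can be built as the Cartesian product of all dimension levels and how a simple random fraction might be drawn. Published FS studies typically use purpose-built fractions (e.g., D-efficient designs) rather than simple random draws.

```python
# Minimal sketch of a vignette universe and a random fraction (hypothetical dimensions).
import itertools
import random

dimensions = {
    "topic": ["hard news", "soft news"],
    "source": ["agency", "own research", "press release"],
    "length": ["short", "long"],
}

# Full vignette universe: every combination of dimension levels (2 x 3 x 2 = 12 vignettes).
universe = [dict(zip(dimensions, combo))
            for combo in itertools.product(*dimensions.values())]

# Simple random fraction of the universe. Unlike the full universe, such a fraction
# is generally not orthogonal, so some dimension levels may be oversampled.
random.seed(42)
fraction = random.sample(universe, k=6)

print(len(universe), len(fraction))
```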

8 The English terms hard and soft news are familiar to German journalists since they are used in textbooks in journalism schools (e.g., Hooffacker and Meier Citation2017). Nevertheless, we made sure that our participants were acquainted with the terms by asking them, before participating in the study, whether they knew them. Only n=3 indicated they did not and were excluded from further participation.

9 We group-mean-centered the independent level-2 variable. It is beyond the scope of the paper to discuss the advantages and disadvantages of group- and grand-mean centering in multilevel modeling (e.g., Enders and Tofighi Citation2007).
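As a generic illustration of the two centering strategies mentioned here, the following is a minimal pandas sketch (hypothetical variable names, not the authors' data): "journalist" identifies the level-2 cluster and "x" is the predictor to be centered.

```python
# Group- vs. grand-mean centering of a predictor (hypothetical data).
import pandas as pd

df = pd.DataFrame({
    "journalist": [1, 1, 1, 2, 2, 2],
    "x": [2.0, 3.0, 4.0, 6.0, 7.0, 8.0],
})

# Grand-mean centering: subtract the overall mean of x.
df["x_grand"] = df["x"] - df["x"].mean()

# Group-mean centering: subtract each journalist's own mean of x.
df["x_group"] = df["x"] - df.groupby("journalist")["x"].transform("mean")

print(df)
```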

10 As described above, journalists can also be clustered in news organizations or even countries. We refrained from additionally clustering journalists within newspapers since the n of these clusters would have been too small for analysis (min=1, max=14; m=2) (Hox Citation2010).

11 We focus on statistical methods suited for a metric answer scale.

12 We report random intercept models with fixed effects for the predictors (Level 1) and random effects for the residual variance of the levels (Levels 1 and 2). It is beyond our scope to discuss the possibilities of multilevel modeling, but we refer readers to the appropriate literature, e.g., Bickel (Citation2007) and Nezlek (Citation2011). Because the residuals of the full model were not normally distributed, we also decided to use Huber/White sandwich estimators via the xtmixed, vce(robust) command in Stata 14.0 (Hoechle Citation2007).
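For readers who do not work in Stata, the following is a minimal sketch of an analogous random intercept model in Python/statsmodels (hypothetical variable and file names, not the authors' data or code): vignette ratings (Level 1) are nested in journalists (Level 2), vignette dimensions and a journalist-level predictor enter as fixed effects, and each journalist receives a random intercept. Note that statsmodels' MixedLM does not provide the Huber/White cluster-robust standard errors used in the original analysis.

```python
# Random intercept model for factorial survey data (hypothetical names).
import pandas as pd
import statsmodels.formula.api as smf

# Long format: one row per rated vignette, with a journalist identifier.
df = pd.read_csv("vignette_ratings.csv")

model = smf.mixedlm(
    "rating ~ C(topic) + C(source) + experience",  # fixed effects: vignette dimensions + level-2 predictor
    data=df,
    groups=df["journalist_id"],                    # random intercept per journalist
)
result = model.fit()
print(result.summary())
```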

13 It is beyond the scope of this paper to give a detailed description of how to conduct multilevel analyses. Readers new to this approach may want to follow the rule of thumb, though, that multilevel modeling "[is] just regression" (Bickel Citation2007, 1) when interpreting the results, and to pay less attention to the random effects, which account for the clustering of the data. For a comprehensive description of multilevel analyses, see, for example, Raudenbush and Bryk (Citation2002).

14 Within the sets, the vignettes were presented in random order. We furthermore checked for potential influences of the sets, i.e., the so-called deck or set effect (Auspurg and Hinz Citation2015a), by including the set numbers as independent variables in the model. Since the set numbers neither yielded significant results nor notably changed the results, we report the model without the set numbers.
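The deck-effect check described here can be illustrated with a small extension of the sketch above (again with hypothetical names, not the authors' code): the model is re-estimated with the set number added as a categorical predictor, and its coefficients and the stability of the substantive results are inspected.

```python
# Deck/set-effect check: add the vignette set number as a categorical predictor (hypothetical names).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("vignette_ratings.csv")

with_set = smf.mixedlm(
    "rating ~ C(topic) + C(source) + C(set_number)",  # set number as additional fixed effect
    data=df,
    groups=df["journalist_id"],
).fit()

# Non-significant set dummies and substantively unchanged coefficients
# would indicate the absence of a deck/set effect.
print(with_set.summary())
```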