Original Articles

What was I thinking? A theoretical framework for analysing panel conditioning in attitudes and (response) behaviour

Pages 333-345 | Received 17 Feb 2017, Accepted 18 Oct 2017, Published online: 08 Nov 2017

Abstract

Though panel data are increasingly used in the social sciences, the question whether repeatedly participating in a panel survey affects respondents’ attitudes and (response) behaviour is still largely unresolved. Drawing on a model of associative networks that is extended by assumptions on survey satisficing, we present a theoretical framework that emphasizes the role of strength-related attributes of attitudes (accessibility, internal consistency, extremity) and motivation in respondents’ information processing. In particular, we argue that – depending on respondents’ predispositions – occupation with survey questions enhances attitude strength, which results in increasing attitude stability and influence on thoughts and behaviours. Against this background, we bring together hitherto unconnected results from previous research and thus contribute to a more thorough understanding of both the mechanisms and the multifaceted outcomes of panel conditioning.

Introduction

One of the major advantages of panel surveys is that repeated observations of the same respondents over time permit the analysis of intra-individual changes. This enables researchers to approach causal relationships that cannot be modelled in an analogous manner on the basis of cross-sectional studies or time-series analyses. Consequently, data from panel surveys have constituted a major source of scientific insights for many years. At the same time, panel data bear two problems that complicate the generalization of empirical results: non-random attrition (see, e.g. Schifeling, Cheng, Reiter, & Hillygus, 2015; Waterton & Lievesley, 1987) and panel conditioning. Although numerous studies explore the conditions and the extent of panel conditioning effects, the question ‘whether repeated interviews are likely, in themselves, to influence a respondent’s opinions’ (Lazarsfeld, 1940, p. 128) and her (response) behaviour has not yet been sufficiently answered. In recent years, evidence for the existence of panel conditioning has accumulated (e.g. Bergmann, 2015; Halpern-Manners, Warren, & Torche, 2014; Kroh, Winter, & Schupp, 2016; Warren & Halpern-Manners, 2012). At the same time, a substantial number of studies have found no or only very small effects (e.g. Axinn, Jennings, & Couper, 2015; Barber, Gatny, Kusunoki, & Schulz, 2016; Mann, 2005; Smith, Gerber, & Orlich, 2003). The contradictory nature of these results is commonly ascribed to large differences in research designs (e.g. Warren & Halpern-Manners, 2012) and the difficulty of empirically separating panel conditioning from attrition and true change (e.g. Yan & Eckman, 2012).

In this paper, we argue that a thorough analysis of the occurrence, magnitude and direction of panel conditioning effects requires not only empirical but also theoretical rigour. While a substantial number of studies have suggested sophisticated measurement approaches for separating panel conditioning effects from attrition bias and true change (Crossley, de Bresser, Delaney, & Winter, 2017; Das, Toepoel, & van Soest, 2011; Struminskaya, 2016; Yan & Eckman, 2012), the phenomenon’s underlying mechanisms and driving factors are still far from clear. Hence, this paper aims to integrate proposed mechanisms behind panel conditioning into a comprehensive theoretical framework by drawing on insights from research on cognitive information processing, attitude strength and respondents’ survey response strategies. The framework is intended to serve as a basis for formulating hypotheses on the causes and consequences of panel conditioning for attitudes and (response) behaviour, thereby providing a broad theoretical foundation for future analyses.

The literature shows that most existing assumptions on the mechanisms of panel conditioning (at least implicitly) refer to cognitive processes, either focusing on the role of memory in the sense that survey responses depend on experiences in previous panel waves (e.g. Crespi, 1948; Struminskaya, 2016; Waterton & Lievesley, 1989) or postulating some kind of stimulation by answering survey questions (e.g. Jagodzinski, Kühnel, & Schmidt, 1987; Sturgis, Allum, & Brunton-Smith, 2009). This observation provides the rationale for connecting and expanding current explanations in a model of associative networks (Anderson, 1983). In this context, we make use of the concept of attitude strength (Krosnick & Petty, 1995) in a two-step procedure: first, we illustrate that strength-related attributes of attitudes, namely accessibility, internal consistency and extremity, are likely to be affected by repeated participation in a panel study. This is what we call the mechanisms of panel conditioning. Generally speaking, repeated processing of question-related information is supposed to simplify and accelerate the cognitive steps involved in answering a survey question: understanding the meaning of the question, retrieving relevant information from memory, constructing an answer from the available information, and reporting it (Tourangeau, Rips, & Rasinski, 2000). Second, we argue that substantial changes in attitudes and behaviour can be regarded as consequences of the distinctive features of strong attitudes.

Additionally, we refer to Krosnick’s (1991) approach to survey satisficing to highlight the role of respondent characteristics, particularly motivation, in explaining observed changes in response behaviour. Against this background, we demonstrate that our theoretical framework facilitates the integration of different, sometimes contradictory results from previous studies, derive a set of hypotheses, and give an empirical example of measuring panel conditioning with regard to political indecision while controlling for individual attitude strength.

Previous research on panel conditioning

In the 75 years that have passed since Lazarsfeld’s (1940) first concerns about the potential influence of repeated interviews on respondents’ opinions, numerous studies in various disciplines have investigated this topic. Given the diversity of methodological approaches, data sources and study designs, we do not aim at a comprehensive meta-analysis of detected effects. Instead, we give an overview of popular streams of proposed theoretical explanations for panel conditioning effects in the realm of attitudes and (response) behaviour. It has repeatedly been remarked that the theoretical basis of research on panel conditioning effects is rather thin (Struminskaya, 2016; Sturgis et al., 2009; Warren & Halpern-Manners, 2012). While a number of sound assumptions have been formulated, most propositions are not linked to a broader theoretical framework. In trying to narrow this research gap, we hope to shed some light on the reasons why some studies detect considerable effects of repeated interviewing while others do not.

Memory effects

When respondents are asked questions several times, they are likely to remember the structure of the questionnaire as well as their previous answers. Essentially, this is the rationale behind proposing recollection or learning as causal mechanisms of panel conditioning, which can have different consequences: first, respondents may remember that answering a certain question yields a number of follow-up questions, which leads them to choose less time-consuming alternatives in subsequent waves (e.g. Toepoel, Das, & van Soest, 2008). This mechanism has been suspected with regard to substantively inexplicable changes in later waves of a panel survey, such as fewer reported jobs (Warren & Halpern-Manners, 2012), a decline in the size of personal networks (Eagle & Proeschold-Bell, 2015), and even reduced use of toothpaste (Nancarrow & Cartwright, 2007).

In the absence of external factors, the avoidance of lengthy questions is clearly attributable to a desire to reduce burden and can thus be regarded as a problem of response behaviour (Struminskaya, 2016). However, more exact reporting has been proposed – sometimes even simultaneously – as a second memory effect (Silberstein & Jacobs, 1989; Waterton & Lievesley, 1989). In this respect, learning the rules that govern a survey interview might also produce better informed respondents who give more exact answers. Additionally, familiarization with the survey procedure through repeated participation is supposed to increase respondents’ trust, which is why more truthful answers are expected in later waves (Struminskaya, 2016; Waterton & Lievesley, 1989). Finally, respondents who remember their answers from earlier waves may be disinclined to change their once-stated opinion because they are reluctant to appear fickle, resulting in higher response consistency (Crespi, 1948). In sum, hypotheses based on memory effects address modifications in response behaviour that are brought about by participation in previous waves. However, because opposite effects are predicted – often simultaneously – their value in advancing theory on the mechanisms of panel conditioning is limited.

Stimulation effects

Next to remembering one’s answers as well as the general survey procedure, many studies assume that being interviewed about certain topics sensitizes the respondent, making her think about the survey topics more thoroughly and, as a consequence, increasing topical interest. For example, Waterton and Lievesley (1989) found evidence for an increase in partisanship and a decrease in the likelihood of ‘don’t know’ responses, which they explain by increased cognitive engagement with the survey topic. Sturgis and colleagues (2009) argue along the same lines when they propose a ‘cognitive stimulus’ that serves to crystallize previously weak and inconsistent attitudes. Jagodzinski and colleagues (1987), who label the phenomenon of higher attitude consistency in later waves the ‘Socratic effect’, and Crespi (1948), who terms it the ‘clarification effect’, likewise refer to increased cognitive processing of survey topics. Particularly in the domain of attitudes, the idea of a cognitive stimulation that yields stronger opinions due to repeated interviewing is the most convincing theoretical approach to date. However, analyses have largely been restricted to the outcomes of the assumed stimulation, while the exact mechanisms of individual cognitive processing and the role of respondents’ characteristics have received little attention.

Mere measurement effects

The stream of research that operates under the heading ‘mere measurement’ expands the idea of stimulation from surveys on political and social attitudes to consumer research and other fields. Frequently, evidence from external validation data shows that being surveyed affects not only attitudes but also behaviour. In this area, increased attitude accessibility is often proposed as the main mechanism accounting for panel conditioning (see Dholakia, 2010). Morwitz, Johnson, and Schmittlein (1993) revealed that being surveyed on the intention to buy a car or a computer leads to an increase in purchase behaviour, especially when participants had no prior experience with these products. The same effect was shown for candy bars (Morwitz & Fitzsimons, 2004) as well as for online grocery purchases and automotive services (Chandon, Morwitz, & Reinartz, 2004; Morwitz et al., 1993). The authors hypothesize that repeated activation of attitudes leads to higher attitude accessibility, which in turn increases attitude-behaviour consistency. Despite the robustness of the mere measurement effect and its replication in different fields, however, its drivers as well as its effect size are still not fully understood (Dholakia, 2010).

Self-prophecy effects

Originally referred to by Sherman (1980) as ‘self-erasing errors of prediction’, the term self-prophecy (coined by Greenwald, Carnot, Beach, & Young, 1987) describes the phenomenon that merely asking people to predict whether they are willing to perform a certain behaviour increases the probability of their acting in line with their prediction. In a series of experiments, it was shown that previous self-assessments increase socially desirable behaviours, such as charity work (Sherman, 1980), voter registration and turnout (Greenwald et al., 1987) or health club visits (Spangenberg, 1997), whereas socially undesirable actions decrease (Spangenberg & Obermiller, 1996). The authors assume that, due to the socially normative nature of the behaviours, respondents overestimate their adherence to perceived societal rules. In order to preserve a positive self-perception, however, they subsequently feel compelled to act according to their own prediction, which makes the normative bias a self-erasing one. This explanation fits in with the theory of cognitive dissonance (Festinger, 1957) and is currently considered the leading one, though other mechanisms such as heightened self-awareness or script evocation have been proposed as well (see Dholakia, 2010).

In recent years, there have been efforts to unite the streams of mere measurement and self-prophecy under the heading of a ‘question-behaviour effect’. Several meta-analyses confirm a small but significant positive effect of asking questions on the subsequent performance of related behaviour (Rodrigues, O’Brien, French, Glidewell, & Sniehotta, 2015; Spangenberg, Kareklas, Devezer, & Sprott, 2016; Wilding et al., 2016). However, whether the effect is mainly driven by attitude accessibility, processing fluency, behavioural simulation, motivation or the desire to reduce cognitive dissonance remains a matter of debate.

To sum up, most of the listed studies share the (implicit) assumption that the survey interview itself causes some kind of change in an individual’s information processing. Though all four explanations provide important clues to the causal mechanisms behind panel conditioning, a unified discussion of the foundations of panel conditioning and its consequences has not yet emerged. In the following, we address this gap by elaborating a comprehensive theoretical framework that specifies mechanisms of cognitive information processing on the one hand and accentuates respondents’ individual preconditions as a main differentiating factor on the other. Against the background of associative networks, the assumptions of mere measurement and self-prophecy can be generalized, whereas the stimulation of respondents and the conditions for differential outcomes of memory effects can be stated more precisely. Additionally, by extending the framework with Krosnick’s (1991) approach to survey satisficing, we are able to predict changes in respondents’ reporting behaviour that can manifest themselves in different directions.

A theoretical framework for analysing panel conditioning

Associative network models, originally based on work on semantic connections (Anderson, 1983), are currently among the most popular models for explaining information processing in cognitive psychology, but they have been adopted by other disciplines as well. They assume that information is stored in long-term memory in the form of objects representing individuals, issues and specific events, but also values or ideological principles. These objects can be imagined as nodes that are connected with specific attributes as well as with other objects in a network structure (see, e.g. Lodge & McGraw, 1991). In such a mental network, the links between particular objects contain information on the nature of their relationship. Moreover, objects are associated with positive or negative evaluations that vary in their intensity. Thus, initially knowledge-based cognitive models of information processing are extended by an affective component, taking into account that socio-political concepts are affectively charged (e.g. Kim, Taber, & Lodge, 2010). Within this model, an attitude can be defined as a summarized or balanced reaction towards an object that can be positive or negative and differs in intensity.

Regarding the investigation of panel conditioning effects, this outline of the model architecture allows for several assumptions concerning changes in respondents’ associative networks due to repeatedly responding to the same questions. We posit that repeated interviewing affects respondents’ individual information processing. More precisely, we expect the automatic activation of question-related objects to increase attitude accessibility, internal consistency and extremity (see Figure 1). These underlying mechanisms of panel conditioning take place largely automatically due to the mere confrontation of respondents with survey questions; they depend only on the individual arrangement of a respondent’s associative network (for example, their initial experience with a certain topic). On this basis, further substantial consequences for respondents’ attitudes and (response) behaviour can be derived. These consequences, which are mediated by respondents’ predispositions, such as perceived social norms, topical interest or motivation, include effects on the stability of attitudes, on attitude formation and related actual behaviours, as well as on the reporting of survey answers.

Figure 1. Framework for analysing panel conditioning.


Mechanisms of panel conditioning

Firstly, we assume that repeated survey participation enhances the accessibility of object nodes. Accessibility can be defined as ‘the speed and ease with which the attitude can be accessed from memory’ (Fazio, Chen, McDonel, & Sherman, 1982, p. 340) and is a function of prior activation. This process happens automatically within a few hundred milliseconds after perception of the object and is not consciously controlled by respondents (Fazio, Sanbonmatsu, Powell, & Kardes, 1986). The more often one has come into contact with an object in the recent past, for example because one has heard or thought about it in a survey, the more mentally accessible this object, its associated evaluation and closely connected objects become. Moreover, frequent contact with an object accelerates the process of attitude formation and the utterance of an evaluative judgment. Therefore, quick answers in surveys are regarded as an indicator of the accessibility of the object in question (e.g. Fazio et al., 1986). These considerations are in line with findings on heightened attitude accessibility in mere measurement research (e.g. Morwitz & Fitzsimons, 2004).

Additionally, it is very likely that panelists directly resort to a previously formed attitude in subsequent waves, as repeated survey participation increases the chance that a summary evaluation of a repeatedly activated object is stored in long-term memory alongside singular object evaluations. This contrasts with the situation in cross-sectional surveys, where respondents frequently do not have a pre-formed attitude; hence, they have to construct an attitude anew from memory by retrieving, evaluating and combining individual pieces of information when confronted with a new survey question (Bizer, Tormala, Rucker, & Petty, 2006). When a summary evaluation is available, attitude formation can be shortened or even skipped, which increases response speed. We thus assume that the time needed to answer a question decreases over the course of a panel survey and that the effect is most pronounced at the beginning, attenuating – also because of cognitive limits – with the number of repetitions.

Secondly, next to an increase in accessibility, we expect that panel surveys enhance attitude consistency. The theory of spreading activation in associative networks posits that querying a specific object will also activate adjacent, evaluatively consistent objects and their associated evaluations (Kim et al., 2010). A number of studies have demonstrated that objects exhibiting evaluative consistency are more easily activated and transferred from long-term memory to working memory than objects that are associated with both positive and negative evaluations (e.g. Fazio et al., 1986; Judd & Brauer, 1995). Since Festinger’s (1957) early work on dissonance theory, it has been known that individuals usually prefer consistent to inconsistent attitudes. In a panel survey, the chance is high that respondents subliminally perceive inconsistencies in their evaluations and try to eliminate them by reducing the relevance of inconsistent information, adding concordant elements, or reinterpreting dissonant elements. Therefore, repeated administration of the same questions enhances the selective processing of evaluatively consistent objects and their associated evaluations, while inconsistent attributes become increasingly meaningless in attitude formation. Researchers adhering to variants of the stimulation hypothesis have presented empirical evidence for increased attitude consistency (Jagodzinski et al., 1987; Sturgis et al., 2009).

Thirdly, the repeated activation of consistent evaluations results in increased attitude extremity. This hypothesis is based on the notion that each activation of objects, for example when respondents are confronted with questions on certain topics, strengthens the connections between objects that are evaluatively consistent. This results in a selective processing of information and, in turn, a greater influence of consistent evaluations on attitude formation (Tesser, 1978). Consequently, the summary evaluation of an object is incrementally steered in a certain direction, and the extremity of attitudes is expected to increase (Judd & Brauer, 1995; Kim et al., 2010). In a panel study this would mean, for instance, that the evaluation of a politician becomes more and more extreme from wave to wave.
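The three mechanisms outlined above can be illustrated with a deliberately minimal simulation. The sketch below is our own toy model, not part of the framework itself: the update rules, the learning rate and the example beliefs are invented assumptions, chosen only to reproduce the qualitative predictions (rising accessibility, selective strengthening of consistent links, growing extremity).

```python
def simulate_waves(links, waves=6, lr=0.2):
    """Toy associative network around a single attitude object.

    `links` maps associated beliefs to fixed evaluations in [-1, 1];
    link weights stand in for accessibility and start out weak.
    """
    w = {b: 0.1 for b in links}
    summaries = []
    for _ in range(waves):
        # (1) Accessibility: answering the question activates every link.
        for b in w:
            w[b] = min(1.0, w[b] + lr)
        # Summary evaluation = accessibility-weighted mean of evaluations.
        s = sum(w[b] * links[b] for b in links) / sum(w.values())
        # (2) Consistency/extremity: links that agree with the summary are
        # selectively strengthened; dissonant links lose relevance again.
        for b, ev in links.items():
            if ev * s > 0:
                w[b] = min(1.0, w[b] + lr)
            else:
                w[b] = max(0.1, w[b] - lr)
        summaries.append(round(s, 3))
    return summaries

# Mixed but mostly positive beliefs about, say, a politician:
print(simulate_waves({"competent": 0.8, "likeable": 0.5, "scandal": -0.6}))
```

In line with the argument above, the largest shift in the summary evaluation occurs between the first simulated waves and then attenuates, while a respondent whose links start out strong and consistent would show almost no movement at all.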

So far, we have argued that repeated interviews have the potential to influence the strength-related attributes of attitudes. While the automatic activation of objects due to answering questions in a survey can be considered fairly universal, it is important to state that panel conditioning effects will be more pronounced the less respondents have engaged in thinking about the objects in question before. This is a logical consequence of the mechanisms described above: for respondents who already have accessible, consistent and extreme attitudes towards a certain object, being asked a question about it brings about only a marginal change, if any. In contrast, respondents whose attitudes are less elaborated and partly inconsistent are likely to experience significant changes in their associative network structure due to survey participation (Judd & Brauer, 1995; Tesser, 1978). The latter situation should frequently be the case at the beginning of a panel study. Changes in associative networks therefore provide a clue as to why panel conditioning effects are most pronounced between the first and the second measurement occasion. In addition, these considerations help to explain the finding that shorter intervals between waves increase the probability of panel conditioning (Warren & Halpern-Manners, 2012): the less time has passed between the last measurement and a renewed activation of an attitude object in the next wave, the more mentally accessible this object and its associated evaluations are, and the more internally consistent and extreme attitudes will become over time. In such a situation, panel conditioning can be expected to produce particularly strong effects.

While we consider the associative network model very useful for explaining actual attitude changes, we extend our framework with Krosnick’s (1991) approach to survey satisficing in order to account more precisely for changes in reporting. Krosnick states that a respondent’s propensity to reduce the cognitive effort necessary to answer survey questions accurately is moderated by three factors: the more difficult it is for a respondent to provide the correct answer, and the less able and motivated she is to do so, the more likely it is that a satisficing strategy is employed. This can, for instance, result in selecting the first answer that seems reasonable instead of carefully reflecting on all alternatives, indiscriminately agreeing with assertions, or saying ‘don’t know’ despite having a substantive opinion.

Summing up, we assume that the cognitive mechanisms operating during a panel survey change the way information is processed, resulting in stronger attitudes towards repeatedly assessed objects. This central claim facilitates the connection of hitherto freestanding explanations of panel conditioning. It thus helps to understand in detail what exactly happens in respondents’ heads when they are repeatedly confronted with a survey question, instead of largely concentrating on manifestations of the underlying cognitive processes. Moreover, it enables systematic analyses of the underlying individual mechanisms of panel conditioning that take into account respondents’ previous experience with the survey objects via operative indicators of attitude strength (Bassili, 1996). An increase in attitude strength implies, in turn, that the distinctive features of strong attitudes will become more pronounced. We therefore expect attitudes to become more stable and more influential on thoughts and actual behaviour due to repeatedly answering the same questions (Krosnick & Petty, 1995). Further, being repeatedly confronted with identical questions in a panel survey may also affect response behaviour. These consequences are set out in more detail in the following.

Consequences of panel conditioning

Stability is one of the most important characteristics associated with strong attitudes (Krosnick & Petty, 1995) and is consequently expected to increase with repeated survey participation. Furthermore, the probability that respondents resort to a previously stored summary evaluation increases over the course of a panel survey, while the influence of situational factors in the construction of an attitude decreases (Bizer et al., 2006). This, too, should contribute to less inter-wave fluctuation of individual responses.

Stability can be defined as rank-order stability, which is testable with test–retest correlations, or as absolute stability, which is quantified by changes in intra-individual response sequences (e.g. Prior, 2010). While the former variant has been assessed in the context of stimulation effects (Jagodzinski et al., 1987; Sturgis et al., 2009), the latter is often associated with the proposition that respondents remember their answers from previous interviews (e.g. Bridge et al., 1977). Applying the model of associative networks, these two streams can be integrated, as attitude strength and the recourse to a summary evaluation suggest an increase in both forms of stability over the course of a panel study. Empirical findings regarding stability are mixed: while Bridge et al. (1977), Jagodzinski et al. (1987) and Sturgis et al. (2009) report a significant increase in stability, Waterton and Lievesley (1989) as well as Silberstein and Jacobs (1989) find only a negligible increase. Next to differences in operationalization and measurement, such ambiguous results can be attributed to the neglect of differences in respondent characteristics, because a significant increase in stability is expected only for those respondents whose initial attitudes on the surveyed topics were rather weak and inconsistent.

In addition to more stable attitudes, we assume that repeated confrontation with identical survey questions leads to systematic and substantial changes in the formation of attitudes. Due to selective information processing, in which certain (consistent) beliefs are activated and processed more frequently than others, attitudes are likely to be systematically altered depending on respondents’ existing preferences and intentions (e.g. Morwitz et al., 1993). Such attitude changes may have consequences for closely connected objects within respondents’ knowledge structures as well. In this context, empirical studies have shown that persons with strong attitudes exhibit an increase in evaluative differences between objects that are perceived as contrary to each other (attitude polarization) when they process information that is in line with their previous attitudes towards these objects (Taber & Lodge, 2006). In the view of Tesser (1978, p. 298), there is no doubt that even mere thinking about an object, for example during a survey, is sufficient to cause such effects: ‘thought about some particular object in the absence of any new information will tend to produce attitude polarization’.

This mechanism also provides an explanation of why respondents’ trust in the survey, and hence their willingness to reveal sensitive information, tends to increase over the course of a panel study (Struminskaya, 2016; Waterton & Lievesley, 1989): as one can reasonably assume that respondents need a minimum amount of positive feelings towards both the survey organization and the topics asked about as a prerequisite for initially participating, their evaluations of the survey are likely to evolve in a positive direction, while inconsistent beliefs, such as feeling uncomfortable during the interview, become less and less important. This also means that respondents who initially agree to participate despite having negative feelings towards the survey, for example because the requested information is seen as too sensitive, have a higher probability of dropping out of the study.

Besides influencing thoughts, stronger attitudes can have consequences for the actual behaviour of respondents as well. It is well established that more consistent and accessible attitudes allow more accurate predictions of subsequent behaviour (e.g. Fazio et al., 1986). With respect to panel conditioning effects, it has been demonstrated that respondents’ perceived social norms play a crucial role (e.g. Spangenberg & Greenwald, 1999). The perception of existing inconsistencies between attitudes and behaviour becomes more likely the more often one participates in a panel survey and thus intellectually engages with the survey topics. The desire to resolve, or at least reduce, unpleasant inconsistencies should be especially strong when the respective behaviour is perceived as socially (un)desirable. The repeated confrontation with discrepancies is therefore hypothesized to trigger cognitive dissonance, which should lead to an adaptation of one’s own behaviour to the social norm and subsequently more accordance between attitudes and behaviour.

Taking into account the social norms that respondents perceive as important, and hence considering individual differences in respondents’ cognitive structures and normative beliefs, gives researchers the opportunity to predict the magnitude as well as the direction of changes, thus moving beyond the assumption of mere behavioural stimulation caused by panel conditioning. In this sense, observed panel conditioning effects such as higher turnout in elections (e.g. Clausen, 1968; Granberg & Holmberg, 1992) or a higher percentage of correct answers to knowledge questions (e.g. Toepoel, Das, & van Soest, 2009) can be interpreted as normatively charged behaviour changes.

Finally, participating in a panel may influence not only attitudes and actual behaviour, but also response behaviour. Applying Krosnick’s (Citation1991) survey satisficing model against the background of the mechanisms presented above, one can predict how the conditions that foster satisficing (or its counterpart, optimizing) evolve over the course of a panel study. On the one hand, an increase in attitude strength makes it easier to retrieve information from an existing knowledge structure, thus reducing task difficulty, potentially improving ability, and therefore increasing the chance of optimizing. On the other hand, decreasing motivation might outweigh these gains in ability and reductions in difficulty. It has been shown that fatigue – due to the lengthy or repetitive character of a survey – decreases motivation and can create a strong desire to reduce the burden of answering cognitively demanding survey questions, resulting in systematic underreporting of events or symptoms (e.g. Duan, Alegria, Canino, McGuire, & Takeuchi, Citation2007), avoidance of follow-up questions (Mathiowetz & Lair, Citation1994) or speeding through the questionnaire (Roßmann, Citation2017).

Such effects are most likely for questions that respondents have already experienced as burdensome in a preceding interview (Das et al., Citation2011; Toepoel et al., Citation2008). Positioning individual motivation as a key concept in analyses of panel conditioning thus substantially reduces the arbitrariness that characterizes most previous theoretical approaches to changes in reporting, in which ‘better’ and ‘worse’ response behaviour is predicted simultaneously (Warren & Halpern-Manners, Citation2012; Waterton & Lievesley, Citation1989; Yan & Eckman, Citation2012), and can help to distinguish different directions of changes in reporting.

An empirical example

To illustrate the analytical potential of our framework, we give an example of panel conditioning effects on respondents’ political indecision, using data from the German Longitudinal Election Study. In particular, we use a campaign panel with six interviews conducted before the 2009 federal elections in Germany as well as several independent cross-sectional studies that serve as control groups (see Bergmann, Citation2015; Steinbrecher, Roßmann, & Bergmann, Citation2013 for detailed information on the data). To separate panel conditioning from confounding effects such as panel attrition, we applied propensity score weighting that takes into account respondents’ socio-demographic characteristics as well as their political interest, which is often correlated with panel attrition (e.g. Lazarsfeld, Citation1940). This procedure ensures that panel (treatment group) and cross-sectional (control group) respondents can reasonably be compared and that observed effects can be correctly attributed to repeated interviewing. In addition, we separated panel conditioning effects from real changes over time by including, as a control variable, the difference between the observed changes from one panel interview to the next and the aggregate change in two parallel cross-sections (see Waterton & Lievesley, Citation1989).
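The weighting logic can be sketched in a few lines. This is a minimal illustration of one common propensity-score weighting scheme (inverse-probability weights targeting the effect on the treated); the article does not report the exact estimand or software used, so the function name and the ATT scheme are our assumptions:

```python
def att_weights(propensity, is_panel):
    """Illustrative ATT-style weights (an assumed scheme, not
    necessarily the authors' exact procedure): panel respondents
    (the 'treated') keep weight 1, while cross-sectional controls
    are re-weighted by the propensity odds p / (1 - p) so that
    their covariate distribution resembles that of the panelists.

    `propensity` holds each respondent's estimated probability of
    being a panel respondent, e.g. from a logistic regression on
    socio-demographics and political interest.
    """
    return [1.0 if panel else p / (1.0 - p)
            for p, panel in zip(propensity, is_panel)]
```

Intuitively, a control respondent who ‘looks like’ a panelist (high propensity) is weighted up, while one who does not is weighted down, which balances the two groups on the observed covariates.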

Since our main focus is on intra-individual changes, we ran a fixed-effects model that accounts for person-specific heterogeneity (see, e.g. Allison, Citation2009). As dependent variable, we use the change in respondents’ political indecision regarding party vote intention (answering ‘don’t know’ when asked which party one is going to vote for in the upcoming election). As explanatory variables, we use the frequency of being interviewed as well as the indicators of attitude strength suggested above (accessibility, internal consistency and extremity; see Note 1). Following our theoretical model, we hypothesize that panel respondents experience an increase in attitude strength, which leads to a decrease in political indecision. The effect is expected to be stronger for respondents whose attitudes were weak at the beginning of the study.
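The identification logic of the fixed-effects (within) estimator can be sketched as follows; this is the generic textbook version, not the authors’ actual estimation code, and all variable names are illustrative:

```python
import numpy as np

def within_estimator(y, X, person_ids):
    """Fixed-effects (within) estimator: subtract each person's own
    mean from y and from every column of X, which removes all
    time-constant person-specific heterogeneity, then run pooled
    OLS on the demeaned data."""
    y = np.asarray(y, dtype=float)
    X = np.asarray(X, dtype=float)
    person_ids = np.asarray(person_ids)
    y_dm, X_dm = y.copy(), X.copy()
    for pid in np.unique(person_ids):
        rows = person_ids == pid
        y_dm[rows] -= y[rows].mean()
        X_dm[rows] -= X[rows].mean(axis=0)
    beta, *_ = np.linalg.lstsq(X_dm, y_dm, rcond=None)
    return beta
```

With, say, interview frequency as a column of `X`, the coefficient captures only intra-individual change, because stable between-person differences (e.g. general political interest) drop out of the demeaned equation.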

Table 1 shows the influence of repeatedly participating in a panel study on the change of political indecision between the first and the sixth panel wave during the election campaign when individual attitude strength is accounted for. The consistently negative effect of interview frequency indicates a strong decrease in the political indecisiveness of panel respondents that is significantly larger than the observed decrease in the general population (the time trend), which we control for. In addition, model 2 shows that political indecisiveness is reduced significantly less for panel respondents whose party vote intention was highly accessible at the outset. The same is true for attitude extremity towards the parties running for election (model 3). Only consistency in evaluating the personalities of the candidates for chancellorship seems to have no independent influence on the change in political indecision (model 4).

Table 1. Panel conditioning effect on the change of respondents’ political indecision.

Overall, this brief example largely confirms the model’s assumptions: the repeated confrontation of panel respondents with their party vote intentions led to substantial changes, surpassing those in the general population. Because we accounted for confounding effects, panel attrition cannot explain these changes. Moreover, the strength with which an attitude is held at the beginning of a panel survey plays an important moderating role: panel respondents with initially weak attitudes towards the issue in question show considerable effects, while respondents with already crystallized, strong attitudes show significantly smaller ones.

Discussion

Although the notion that participating in a panel can change respondents’ attitudes and (reporting) behaviour has almost become a truism of social science research, the questions of when, why, how, and to what extent such changes occur are far from solved. In this paper, we argued that while recent work has greatly advanced the methods for adequately measuring effects of panel conditioning (Crossley et al., Citation2017; Das et al., Citation2011; Halpern-Manners et al., Citation2014; Kroh et al., Citation2016; Yan & Eckman, Citation2012), thoroughly understanding the mechanisms behind panel conditioning and producing testable hypotheses on its occurrence, magnitude and direction requires a theoretical framework that is more comprehensive than existing singular assumptions.

Based on research in cognitive information processing, we proposed that repeatedly answering the same questions can be understood as a trigger for structural changes in associative networks, thus increasing (1) accessibility, (2) internal consistency and (3) extremity of attitudes. These processes happen largely automatically and are only moderated by respondents’ previous experience with objects in the sense that (4) the effects of repeated questioning are stronger the less respondents have come into contact with the survey topics before. We further argued that the increase in attitude strength due to repeated interviewing effectuates (5) higher stability of attitudes, (6) systematic attitude formation and (7) higher attitude-behaviour correspondence conditional on existing preferences and social desirability of topics. Finally, we claimed that (8) respondents’ motivation is a key element in predicting whether repeated surveying enhances satisficing or optimizing with respect to response behaviour.

At this point, it is important to note that the framework is not restricted to attitude questions, but applies to any repeated question with an evaluative component, such as behavioural frequencies or knowledge questions. Changes in this respect can manifest themselves, for instance, in an active search for information in order to answer questions that previously could not be answered. The proposed framework connects existing streams of explanation and thus serves to improve the understanding of the cognitive foundations of panel conditioning. We also provided a short example demonstrating how some of the model’s most important assumptions can be tested. However, only further empirical applications will allow firm conclusions about the hypotheses’ validity. Thus, we hope that the proposed framework can serve as a starting point for more comprehensive, theoretically informed future research on panel conditioning.

In this regard, several aspects are important in our view: first, empirical analyses should adopt a differentiated perspective on the consequences of panel conditioning by taking respondents’ differences more strongly into account. This applies in particular to respondents’ experience with the survey topic, which determines attitude strength, but also to predispositions in terms of social norms, topical interest and motivation that affect attitude stability and formation as well as (response) behaviour. In our view, accounting for these differences might help to understand some of the contradictory or null findings in previous research.

Second, panel conditioning needs to be carefully distinguished from other effects, such as panel attrition, but also real changes over time in the population of interest. The problem of correctly identifying panel conditioning in the presence of attrition has been extensively addressed in the literature (e.g. Struminskaya, Citation2016; Warren & Halpern-Manners, Citation2012). In order to separate panel conditioning from real change over time, researchers can exploit the potential of cross-sectional control groups by contrasting the observed changes from one interview to the next in a panel study with the aggregate change in two parallel cross-sections as shown in our example.
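The contrast between panel and cross-sectional change boils down to a simple difference-in-differences. A tiny illustrative helper (function and argument names are ours, not the authors’):

```python
def net_conditioning_effect(panel_change, cross_section_change):
    """Net panel conditioning between two time points: the observed
    wave-to-wave change among panel respondents minus the aggregate
    change between two parallel cross-sections, which estimates the
    real trend in the population."""
    return panel_change - cross_section_change
```

For example, if political indecision drops by 8 percentage points among panelists but only by 3 points between the two cross-sections, roughly 5 points are attributable to repeated interviewing (assuming attrition has been dealt with separately, e.g. by weighting).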

Third, we recommend complementing comparisons between repeatedly surveyed panelists on the one hand and cross-sectional respondents on the other with intra-individual analyses of change. Only the latter allow identifying causal effects of repeated interviewing by controlling for unit-specific heterogeneity. Such an analysis strategy constitutes an important extension of previous research, which is frequently limited to univariate comparisons at the aggregate level and therefore risks overlooking contrary effects that cancel each other out.

Finally, we fully agree with recently expressed demands for methodological experiments (Halpern-Manners et al., Citation2014) that vary key conditions of panel conditioning, such as the mode of data collection, the time between waves, the number of repetitions and the question wording. Such an experimental setting (which can also be integrated into classical surveys; see, e.g. Barber et al., Citation2016) has the advantage that researchers can carefully manipulate and simultaneously test how respondents process specific information and in what way this influences their attitudes and (response) behaviour.

We are aware that while some of our suggestions are fairly easy to implement, setting up experimental designs and integrating new variables into panel surveys is costly and time consuming. Further, identifying mechanisms and conditions that foster panel conditioning and correcting for bias are quite different matters. Nevertheless, we are convinced that the latter is not possible without the former. A theoretically informed understanding of panel conditioning would thus be a helpful starting point for measuring and correcting for panel conditioning effects in an accurate way.

Disclosure statement

No potential conflict of interest was reported by the authors.

Notes on contributors

Michael Bergmann is a senior researcher at the Technical University of Munich (Chair for the Economics of Aging) and the Munich Center for the Economics of Aging (MEA). He received his PhD at the University of Mannheim. His main research interests are changing attitudes and behaviour in longitudinal studies as well as methodological issues, such as interviewer effects, panel attrition and panel conditioning.

Alice Barth is currently working as a research assistant at the department of Sociology in Bonn and completing her PhD in the field of survey methodology and applied statistics. Her research interests are geometric data analysis, survey data quality, panel conditioning, and the interplay between attitudes and social structure.

Notes

1. Accessibility was measured by individual response latencies regarding party vote intention. Internal attitude consistency was operationalized by comparing respondents’ positive and negative evaluations of the two candidates for chancellorship. Using the evaluations of all parties, attitude extremity was calculated as the average of absolute deviations from the neutral midpoint of the scale (see Bergmann, Citation2015, pp. 164–176 for a detailed description).
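The extremity measure described in this note can be sketched as follows; the scale endpoints are illustrative defaults (the note does not report the exact rating scale used):

```python
def attitude_extremity(party_ratings, scale_min=-5, scale_max=5):
    """Average absolute deviation of all party evaluations from the
    neutral midpoint of the rating scale (cf. Bergmann, 2015).
    The default -5..+5 range is an assumption for illustration."""
    midpoint = (scale_min + scale_max) / 2
    return sum(abs(r - midpoint) for r in party_ratings) / len(party_ratings)
```

A respondent rating every party at the midpoint scores 0 (no extremity), while ratings at the scale endpoints yield the maximum value.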

References

  • Allison, P. D. (2009). Fixed effects regression models. Thousand Oaks, CA: Sage. doi:10.4135/9781412993869
  • Anderson, J. R. (1983). The architecture of cognition. Cambridge, MA: Harvard University Press.
  • Axinn, W. G., Jennings, E. A., & Couper, M. P. (2015). Response of sensitive behaviors to frequent measurement. Social Science Research, 49, 1–15. doi:10.1016/j.ssresearch.2014.07.002
  • Barber, J. S., Gatny, H. H., Kusunoki, Y., & Schulz, P. (2016). Effects of intensive longitudinal data collection on pregnancy and contraceptive use. International Journal of Social Research Methodology, 19(2), 205–222. doi:10.1080/13645579.2014.979717
  • Bassili, J. N. (1996). Meta-judgmental versus operative indexes of psychological attributes: The case of measures of attitude strength. Journal of Personality and Social Psychology, 71(4), 637–653. doi:10.1037/0022-3514.71.4.637
  • Bergmann, M. (2015). Panel conditioning. Wirkungsmechanismen und Konsequenzen wiederholter Befragungen [Panel conditioning. Underlying mechanisms and consequences of repeated interviewing]. Baden-Baden: Nomos.
  • Bizer, G. Y., Tormala, Z. L., Rucker, D. D., & Petty, R. E. (2006). Memory-based versus on-line processing: Implications for attitude strength. Journal of Experimental Social Psychology, 42(5), 646–653. doi:10.1016/j.jesp.2005.09.002
  • Bridge, R. G., Reeder, L. G., Kanouse, D., Kinder, D. R., Nagy, V. T., & Judd, C. M. (1977). Interviewing changes attitudes – Sometimes. Public Opinion Quarterly, 41(1), 56–64. doi:10.1086/268352
  • Chandon, P., Morwitz, V. G., & Reinartz, W. J. (2004). The short- and long-term effects of measuring intent to repurchase. Journal of Consumer Research, 31(3), 566–572. doi:10.1086/425091
  • Clausen, A. R. (1968). Response validity: Vote report. Public Opinion Quarterly, 32(4), 588–606. doi:10.1086/267648
  • Crespi, L. P. (1948). The interview effect in polling. Public Opinion Quarterly, 12(1), 99–111. doi:10.1086/265924
  • Crossley, T. F., de Bresser, J., Delaney, L., & Winter, J. (2017). Can survey participation alter household saving behaviour? Economic Journal. doi:10.1111/ecoj.12398
  • Das, M., Toepoel, V., & van Soest, A. (2011). Nonparametric tests of panel conditioning and attrition bias in panel surveys. Sociological Methods & Research, 40(1), 32–56. doi:10.1177/0049124110390765
  • Dholakia, U. M. (2010). A critical review of question-behavior effect research. Review of Marketing Research, 7(7), 145–197. doi:10.1108/S1548-6435(2010)0000007009
  • Duan, N., Alegria, M., Canino, G., McGuire, T. G., & Takeuchi, D. (2007). Survey conditioning in self-reported mental health service use: Randomized comparison of alternative instrument formats. Health Services Research, 42(2), 890–907. doi:10.1111/j.1475-6773.2006.00618.x
  • Eagle, D. E., & Proeschold-Bell, R. J. (2015). Methodological considerations in the use of name generators and interpreters. Social Networks, 40, 75–83. doi:10.1016/j.socnet.2014.07.005
  • Fazio, R. H., Chen, J.-M., McDonel, E. C., & Sherman, S. J. (1982). Attitude accessibility, attitude-behavior consistency, and the strength of the object-evaluation association. Journal of Experimental Social Psychology, 18(4), 339–357. doi:10.1016/0022-1031(82)90058-0
  • Fazio, R. H., Sanbonmatsu, D. M., Powell, M. C., & Kardes, F. R. (1986). On the automatic activation of attitudes. Journal of Personality and Social Psychology, 50(2), 229–238. doi:10.1037/0022-3514.50.2.229
  • Festinger, L. (1957). A theory of cognitive dissonance. Evanston: Row, Peterson and Company.
  • Granberg, D., & Holmberg, S. (1992). The Hawthorne effect in election studies: The impact of survey participation on voting. British Journal of Political Science, 22(2), 240–247. doi:10.1017/S0007123400006359
  • Greenwald, A. G., Carnot, C. G., Beach, R., & Young, B. (1987). Increasing voting behavior by asking people if they expect to vote. Journal of Applied Psychology, 72(2), 315–318. doi:10.1037/0021-9010.72.2.315
  • Halpern-Manners, A., Warren, J. R., & Torche, F. (2014). Panel conditioning in a longitudinal study of illicit behaviors. Public Opinion Quarterly, 78(3), 565–590. doi:10.1093/poq/nfu029
  • Jagodzinski, W., Kühnel, S. M., & Schmidt, P. (1987). Is there a “socratic effect” in nonexperimental panel studies? Consistency of an attitude toward guestworkers. Sociological Methods & Research, 15(3), 259–302. doi:10.1177/0049124187015003004
  • Judd, C. M., & Brauer, M. (1995). Repetition and evaluative extremity. In R. E. Petty & J. A. Krosnick (Eds.), Attitude strength: Antecedents and consequences (pp. 43–72). Mahwah, NJ: Lawrence Erlbaum Associates.
  • Kim, S.-Y., Taber, C. S., & Lodge, M. (2010). A computational model of the citizen as motivated reasoner: Modeling the dynamics of the 2000 presidential election. Political Behavior, 32(1), 1–28. doi:10.1007/s11109-009-9099-8
  • Kroh, M., Winter, F., & Schupp, J. (2016). Using person-fit measures to assess the impact of panel conditioning on reliability. Public Opinion Quarterly, 80(4), 914–942. doi:10.1093/poq/nfw025
  • Krosnick, J. A. (1991). Response strategies for coping with the cognitive demands of attitude measures in surveys. Applied Cognitive Psychology, 5(3), 213–236.
  • Krosnick, J. A., & Petty, R. E. (1995). Attitude strength: An overview. In R. E. Petty & J. A. Krosnick (Eds.), Attitude strength: Antecedents and consequences (pp. 1–24). Mahwah, NJ: Lawrence Erlbaum Associates.
  • Lazarsfeld, P. F. (1940). “Panel” studies. Public Opinion Quarterly, 4(1), 122–128. doi:10.1086/265373
  • Lodge, M., & McGraw, K. M. (1991). Where is the schema? Critiques. American Political Science Review, 85(4), 1357–1380. doi:10.2307/1963950
  • Mann, C. B. (2005). Unintentional voter mobilization: Does participation in preelection surveys increase voter turnout? ANNALS of the American Academy of Political and Social Science, 601(1), 155–168. doi:10.1177/0002716205278151
  • Mathiowetz, N. A., & Lair, T. J. (1994). Getting better? Change or error in the measurement of functional limitations. Journal of Economic and Social Measurement, 20(3), 237–262. doi:10.3233/JEM-1994-20305
  • Morwitz, V. G., & Fitzsimons, G. J. (2004). The mere-measurement effect: Why does measuring intentions change actual behavior? Journal of Consumer Psychology, 14(1), 64–74. doi:10.1207/s15327663jcp1401&2_8
  • Morwitz, V. G., Johnson, E., & Schmittlein, D. (1993). Does measuring intent change behavior? Journal of Consumer Research, 20(1), 46–61. doi:10.1086/209332
  • Nancarrow, C., & Cartwright, T. (2007). Online access panels and tracking research: The conditioning issue. International Journal of Market Research, 49(5), 573–594.
  • Prior, M. (2010). You’ve either got it or you don’t? The stability of political interest over the life cycle. Journal of Politics, 72(3), 747–766. doi:10.1017/S0022381610000149
  • Roßmann, J. (2017). Satisficing in Befragungen. Theorie, Messung und Erklärung [Satisficing in surveys. Theory, measurement and explanation]. Wiesbaden: Springer VS. doi:10.1007/978-3-658-16668-7
  • Rodrigues, A. M., O’Brien, N., French, D. P., Glidewell, L., & Sniehotta, F. F. (2015). The question-behavior effect: Genuine effect or spurious phenomenon? A systematic review of randomized controlled trials with meta-analyses. Health Psychology, 34(1), 61–78. doi:10.1037/hea0000104
  • Schifeling, T. A., Cheng, C., Reiter, J. P., & Hillygus, D. S. (2015). Accounting for nonignorable unit nonresponse and attrition in panel studies with refreshment samples. Journal of Survey Statistics and Methodology, 3(3), 265–295. doi:10.1093/jssam/smv007
  • Sherman, S. J. (1980). On the self-erasing nature of errors of prediction. Journal of Personality and Social Psychology, 39(2), 211–221. doi:10.1037/0022-3514.39.2.211
  • Silberstein, A. R., & Jacobs, C. A. (1989). Symptoms of repeated interview effects in the consumer expenditure interview survey. In D. Kasprzyk, G. J. Duncan, G. Kalton, & M. P. Singh (Eds.), Panel surveys (pp. 289–303). New York, NY: Wiley.
  • Smith, J. K., Gerber, A. S., & Orlich, A. (2003). Self-prophecy effects and voter turnout: An experimental replication. Political Psychology, 24(3), 593–604. doi:10.1111/0162-895X.00342
  • Spangenberg, E. (1997). Increasing health club attendance through self-prophecy. Marketing Letters, 8(1), 23–31. doi:10.1023/A:1007977025902
  • Spangenberg, E., & Greenwald, A. G. (1999). Social influence by requesting self-prophecy. Journal of Consumer Psychology, 8(1), 61–89. doi:10.1207/s15327663jcp0801_03
  • Spangenberg, E., Kareklas, I., Devezer, B., & Sprott, D. E. (2016). A meta-analytic synthesis of the question-behavior effect. Journal of Consumer Psychology, 26(3), 441–458. doi:10.1016/j.jcps.2015.12.004
  • Spangenberg, E., & Obermiller, C. (1996). To cheat or not to cheat: Reducing cheating by requesting self-prophecy. Marketing Education Review, 6(3), 95–103. doi:10.1080/10528008.1996.11488565
  • Steinbrecher, M., Roßmann, J., & Bergmann, M. (2013). The short-term campaign panel of the German longitudinal election study 2009: Design, implementation, data preparation, and archiving. GESIS – Technical Reports 2013/20. Köln: GESIS.
  • Struminskaya, B. (2016). Respondent conditioning in online panel surveys: Results of two field experiments. Social Science Computer Review, 34(1), 95–115. doi:10.1177/0894439315574022
  • Sturgis, P., Allum, N., & Brunton-Smith, I. (2009). Attitudes over time: The psychology of panel conditioning. In P. Lynn (Ed.), Methodology of longitudinal surveys (pp. 113–126). Chichester: Wiley. doi:10.1002/9780470743874
  • Taber, C. S., & Lodge, M. (2006). Motivated skepticism in the evaluation of political beliefs. American Journal of Political Science, 50(3), 755–769. doi:10.1111/j.1540-5907.2006.00214.x
  • Tesser, A. (1978). Self-generated attitude change. Advances in Experimental Social Psychology, 11, 289–338. doi:10.1016/S0065-2601(08)60010-6
  • Toepoel, V., Das, M., & van Soest, A. (2008). Effects of design in web surveys comparing trained and fresh respondents. Public Opinion Quarterly, 72(5), 985–1007. doi:10.1093/poq/nfn060
  • Toepoel, V., Das, M., & van Soest, A. (2009). Relating question type to panel conditioning: Comparing trained and fresh respondents. Survey Research Methods, 3(2), 73–80.
  • Tourangeau, R., Rips, L. J., & Rasinski, K. (2000). The psychology of survey response. Cambridge: Cambridge University Press. doi:10.1017/CBO9780511819322
  • Warren, J. R., & Halpern-Manners, A. (2012). Panel conditioning in longitudinal social science surveys. Sociological Methods & Research, 41(4), 491–534. doi:10.1177/0049124112460374
  • Waterton, J., & Lievesley, D. (1987). Attrition in a panel study of attitudes. Journal of Official Statistics, 3(3), 267–282.
  • Waterton, J., & Lievesley, D. (1989). Evidence of conditioning effects in the British social attitudes panel. In D. Kasprzyk, G. J. Duncan, G. Kalton, & M. P. Singh (Eds.), Panel surveys (pp. 319–339). New York, NY: Wiley.
  • Wilding, S., Conner, M., Sandberg, T., Prestwich, A., Lawton, R., Wood, C., … Sheeran, P. (2016). The question-behaviour effect: A theoretical and methodological review and meta-analysis. European Review of Social Psychology, 27(1), 196–230. doi:10.1080/10463283.2016.1245940
  • Yan, T., & Eckman, S. (2012). Panel conditioning: Change in true value versus change in self-report. In Proceedings of the Survey Methods Research Section (pp. 4726–4736). Alexandria, VA: ASA.