Research Notes

Changes in Definitions and Operationalizations in Advertising Research—Justified or Not?

Pages 468-476 | Received 12 Jan 2022, Accepted 10 May 2022, Published online: 10 Jun 2022

Abstract

Many advertising constructs vary considerably in how they are defined and operationalized—variations which result from changes in definitions and operationalizations over time and between studies. These changes undermine the comparability of empirical insights, challenge the integration of insights into an existing body of knowledge, and thereby impede knowledge development in advertising research. This negative effect is why it is crucial to know whether and when changes are needed and justified. In this article, we address definition and operationalization change from a theoretical perspective, identify factors driving changes in definitions and operationalizations, and outline when changes are justified. We then offer recommendations for how to avoid unjustified changes.

This article is part of the following collections:
Journal of Advertising Best Article of the Year Award

Recent research has demonstrated that many important advertising constructs, such as ad attitude or brand attitude, vary considerably in whether and how they are defined and operationalized in the literature (e.g., Bergkvist and Langner 2017, 2019; Bruner 1998). This variation results from an evolutionary process in which constructs evolve through different stages (Bergkvist and Eisend 2021). The dynamic model of construct evolution suggests that consensus is the ideal end state for constructs, which requires a clear construct definition and agreement on its operationalization and most valid measures. Very few constructs seem to have reached a state of consensus or dominance (but see, e.g., need for cognition), and research practice reveals that most constructs are not (yet) in an ideal state but are characterized by changing definitions and variable, customized operationalizations (Bergkvist and Eisend 2021). While the dynamic model acknowledges the evolving nature of definitions and operationalizations and treats changes as part of long-term consensus building, not all changes are necessary to approach a construct’s ideal state; some changes do not contribute to construct improvement and thus are not justified.

Although there is a growing body of research on various aspects of advertising and marketing constructs (e.g., Bergkvist and Langner 2017, 2019; Bergkvist and Eisend 2021; Katsikeas et al. 2016; Ptok, Jindal, and Reinartz 2018), there appears to be no research on when changes in construct definitions and variations in operationalizations are justified. This dearth of research is surprising given the negative consequences that unjustified changes have for marketing and advertising knowledge. Advertising meta-analyses show, for instance, that changing the definition and operationalization of advertising creativity from a multidimensional concept (e.g., as original and relevant) to a single dimension of originality reduces the overall correlation-based effect of creativity on brand attitude by more than 50% (from 0.33 to 0.16; see Rosengren et al. 2020). Similar results occur for ethnic identity effects in advertising after operationalization changes are introduced: The effects are twice as strong when ethnic identity is measured with a single-item rather than a multi-item scale (0.33 versus 0.15; Sierra, Hyman, and Heiser 2012).
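The magnitudes claimed here follow directly from the cited effect sizes; a quick arithmetic check (a minimal sketch — the helper function is ours, for illustration only, while the numbers come from the meta-analyses cited above):

```python
# Arithmetic check of the reductions implied by the effect sizes cited
# above (values from Rosengren et al. 2020 and Sierra, Hyman, and
# Heiser 2012; the helper function is illustrative, not from any source).
def reduction(full_effect: float, changed_effect: float) -> float:
    """Relative drop in effect size after a definition/operationalization change."""
    return (full_effect - changed_effect) / full_effect

# Advertising creativity: multidimensional (0.33) vs. originality only (0.16).
creativity_drop = reduction(0.33, 0.16)  # ~0.52, i.e., more than 50%

# Ethnic identity: single-item (0.33) vs. multi-item (0.15) measurement.
ratio = 0.33 / 0.15  # ~2.2, i.e., roughly twice as strong
```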

In this article, we address definition and operationalization change from a theoretical perspective. Drawing on recent literature on definitions and research practice, we identify factors driving changes in definitions and operationalizations and outline when changes are justified, and we show examples of justified definition and operationalization changes from current advertising research. We offer recommendations for dealing with changes in definitions and operationalizations in advertising research.

The Importance and Role of Definitions and Operationalizations

Definitions serve several essential functions in research: They distinguish different constructs from one another (e.g., attitude toward the ad versus attitude toward advertising in general); they provide the basis for categorizing objects (e.g., liked versus disliked ads); they identify antecedents, outcomes, and correlates in a nomological network (e.g., the dual-mediation hypothesis as an attitude-toward-the-ad model); and they guide the development of research manipulations and measurement operationalizations (e.g., the items to assess attitudes or the manipulation of involvement) (Podsakoff, MacKenzie, and Podsakoff 2016). A lack of consensus on definitions reduces the likelihood of consensus about measurement operationalizations, which is critical for knowledge accumulation, as divergent operationalizations may render research results incommensurable (Landy et al. 2020). Operationalizing the same construct with different measures in different studies makes it impossible to know whether factors in the study context or differences in the operationalizations caused differences in research results (Katsikeas et al. 2016), even if the different measures have high levels of convergent validity (Carlson and Herdman 2012; Rossiter 2016).

Two recent studies relying on crowdsourcing of researchers have demonstrated the importance of definitions and operationalizations for research results. Schweinsberg et al. (2021) enlisted 19 researchers to define and operationalize the constructs in two hypotheses concerning online scientific discussions, without restrictions on how the constructs could be defined or operationalized, and to test the hypotheses on the same data set. Results across 29 analyses showed considerable dispersion, with the hypotheses rejected, supported, or yielding inconclusive results, and the authors concluded that “subjective researcher choices make a critical contribution to the results obtained” (Schweinsberg et al. 2021, p. 243). In a similar study, Landy et al. (2020) crowdsourced 15 research teams who independently created stimuli for five research questions relating to moral judgment, negotiation, and implicit cognition. Experiments by the research teams and subsequent replication studies obtained directionally inconsistent results for three of the five hypotheses, and analyses found considerable heterogeneity in the effect sizes. The authors conclude that “idiosyncratic choices in stimulus design have a very large effect on observed results, over and above the overall support (or lack thereof) for the hypothesis in question” (Landy et al. 2020, p. 469). The conclusions from these studies align with the meta-analytic findings in the introduction showing, for instance, that altering the definition and operationalization of advertising creativity changes the effects for the same hypothesis (i.e., advertising creativity increases brand attitudes) by more than 50% (for further discussion of the importance of definitions, see Harford 2021).

Thus, there are compelling theoretical and empirical grounds to state that consensus regarding definitions and operationalizations is fundamental for comparing, integrating, and interpreting research findings and, consequently, for knowledge accumulation in marketing and advertising research. However, studies of research practice show that changes in definitions and operationalizations occur frequently, and it is often unclear whether these changes are justified (Bergkvist and Langner 2017; Bergkvist and Eisend 2021).

Change in Construct Definitions

Defining a construct involves specifying its necessary and/or sufficient characteristics (Podsakoff, MacKenzie, and Podsakoff 2016). Specifically, for a measurement construct, this involves specifying the attribute (the “judgment” or rating), the object of measurement, and the rater (Rossiter 2011). It also involves specifying the domain of the construct (Nunnally and Bernstein 1994). Drawing on Nunnally and Bernstein (1994) and Rossiter (2011), we define the domain as (1) the application area (e.g., social media advertising) and (2) the theoretical context (e.g., attitude toward the influencer influences brand attitude). Together, the elements of the definition (attribute, object, rater) and the two domain elements include those entities that directly bear on the construct’s operationalization, making it possible to outline factors that motivate changes in construct definitions.

There are two categories of reasons for changing construct definitions. First, if there are limitations in existing definitions, they should be changed. Limitations in definitions have multiple negative consequences, such as invalid measurement operationalizations, construct proliferation, and a lack of discriminant validity (Podsakoff, MacKenzie, and Podsakoff 2016; Teas and Palan 1997), and it is imperative that advertising scholars, as a collective, work toward construct improvement and consensus over time. Second, changes in the domain of the construct (application area, theoretical context) have consequences for the definition, and definitions should be changed accordingly (Nunnally and Bernstein 1994).

Limitations in the Existing Definition that Motivate Change

Drawing on the literature on definitions in marketing and related fields, we identified seven ways in which definitions can have limitations that motivate change, and we found examples of each limitation in top advertising journals (International Journal of Advertising, Journal of Advertising, and Journal of Advertising Research; refer to Table 1). These limitations fall into three categories. First, some definitions lack specification of the construct’s characteristics. Definitions in this category do not completely specify the construct’s necessary characteristics, include ambiguous or vague terms, or are tautological (Podsakoff, MacKenzie, and Podsakoff 2016; Teas and Palan 1997; Wacker 2004). These definitions do not clarify the meaning of the construct, which can lead to poor measurement operationalizations. When improving definitions in this category, scholars should ensure that definitions include all of the construct’s necessary characteristics and use precise terms that specify the construct’s meaning. Second, some definitions have unclear boundaries toward other constructs (Podsakoff, MacKenzie, and Podsakoff 2016; Teas and Palan 1997; Wacker 2004). These definitions may not include all construct instances or may overlap with other constructs, in the latter case leading to construct proliferation (Podsakoff, MacKenzie, and Podsakoff 2016) or construct confusion (Bergkvist and Langner 2019). Scholars should consolidate overlapping constructs and clarify the boundaries of constructs. Third, some definitions are overly specific (Summers 2001; Wacker 2004). These definitions are not parsimonious or include relationships that are true by definition. Scholars should remove redundant terms and avoid including relationships between the construct and its antecedents or consequences in the definition.

Table 1. Limitations in definitions with commented examples, descriptions of the problems, and recommendations.

Thus, existing definitions can have several limitations that motivate change. Scholars should scrutinize definitions early in their research and carefully consider whether definitions need change. If change is needed, scholars should report how they changed the definition and justify it by outlining the limitations in the existing definition, as suggested in Table 1.

Changes in the Construct’s Domain That Prompt a Definition Change

Changes in the external world or the theoretical context frequently impact a construct’s domain and can prompt a change of the definition, as the object of measurement, rater, or attribute parts of the definition may become inadequate due to the change of domain. We identified three types of external changes that may prompt changes in construct definitions (Table 2). First, societal changes in gender roles, acceptance of sexual identities, and other socially constructed phenomena influence the characteristics defining related constructs (e.g., whom or what to include in the definition). Changes in the defining characteristics, in turn, have implications for the rater and object-of-measurement parts of the operational definition and for the operationalization of the construct (Nunnally and Bernstein 1994; Rossiter 2011). Accordingly, definitions should be changed to reflect changes in society. Second, different types of technological development (e.g., artificial intelligence, augmented reality) could extend or reduce a construct’s application area, which, in turn, could influence the object of measurement or the attribute in the operationalization. Consequently, definitions should be updated to reflect relevant technological developments. Third, advances in advertising practice, such as personalization, influencer marketing, and social messaging apps, could extend the application area of many advertising constructs (Thorson and Rodgers 2006; Tsai and Men 2013). Definitions of advertising constructs should be updated to reflect relevant advances in advertising practice.

Table 2. Changes in the construct’s domain, their effects on definitions, examples, and recommendations.

Thus, societal changes, technological development, and changes in advertising practice frequently motivate changes in definitions. Scholars should be observant of relevant changes in the external world and update definitions when needed.

Unjustified Definition Changes

Some construct definition changes are not justifiable: these are changes undertaken for reasons other than limitations in existing definitions or changes in the domain. There are two primary types of unjustified definition changes. First, definitions tend to drift over time, taking on additional or different meanings than the original definition (Suddaby 2010). This drift is unintended and could be the result of drifting paraphrases (i.e., a definition is restated rather than cited, and critical terms are subtly changed; e.g., “brand” is changed to “product”), of scholars broadening the meaning of a definition when attempting to clarify its terms (Wacker 2004), or of chance factors such as typos or misinterpretations of existing definitions. Second, scholars sometimes change definitions deliberately without justifying doing so, for example, when they make ad hoc changes to definitions to suit specific research needs (e.g., changing “customer” to “consumer” in the definition of “customer-based brand equity” because the data set includes consumers rather than customers).

Unjustified definition changes adversely affect knowledge accumulation within a field, and scholars should avoid them (Bergkvist and Eisend 2021). If definition changes are justified, they should be explicitly motivated; if they are not, they should be avoided.

Change in Construct Operationalizations

Justified Operationalization Changes

Marketing scholars have long argued that marketing measures should be developed and validated in scale development studies and then used unchanged in subsequent studies (e.g., Churchill 1979). However, scholars cannot expect operationalizations to remain fixed over time, and several reasons may make changes justifiable or, in some cases, justify replacing existing operationalizations. These changes may be motivated by construct definition changes or by limitations in the properties of the existing operationalization. We identified four situations that motivate operationalization change (Table 3). First, if the construct definition is justifiably changed (as outlined previously), the operationalization should change accordingly, as operationalizations should follow from definitions (Podsakoff, MacKenzie, and Podsakoff 2016). Leaving the operationalization of a construct unchanged when the definition is changed entails a considerable risk of a mismatch between the two (i.e., the operationalization is not valid). Accordingly, operationalizations should be updated to reflect changes in definitions. Second, if repeated usage of an accepted operationalization suggests that it has limitations, the operationalization should be updated or replaced (Summers 2001). Scholars should strive to use optimal operationalizations of their constructs (Rossiter 2016) and thus improve or replace them when results in previous studies identify weaknesses. Third, if a comparative study shows that a new operationalization is more valid than the previously accepted operationalization, then scholars should use the new operationalization. Developing improved operationalizations is an important methodological contribution (Summers 2001), and scholars should use the most valid operationalization. Fourth, if a more parsimonious operationalization of equal or better validity could replace an existing operationalization, scholars should use the former. Operationalizations with fewer items save respondents’ time and increase answer quality (Bergkvist 2015) and should be preferred over operationalizations with more items.

Table 3. Situations that justify changes in operationalizations with examples and recommendations.

Consistency in construct operationalizations is a precondition for knowledge accumulation within a field (Bergkvist and Langner 2017; Bergkvist and Eisend 2021). Thus, operationalizations should change only in response to changes in definitions, demonstrated weaknesses in existing operationalizations, or the availability of better or more parsimonious measures, and subsequent studies should use the new or improved operationalizations.

Unjustified Operationalization Changes

There are also situations in which construct operationalization changes occur that are not justified (i.e., when there are no grounds for changing the operationalization). Studies of research practice (Bergkvist and Langner 2017; Bergkvist and Eisend 2021) have identified two situations in which unjustified operationalization changes take place. First, operationalizations are changed ad hoc (e.g., measurement items are dropped or added) without corresponding definition changes or the justifications explained previously. Validated operationalizations should be left unchanged, as even minor changes may have substantive effects on research results (see examples in Rossiter 2011, p. 46; Weijters, Geuens, and Baumgartner 2013, p. 377). For example, multiple studies refer to Zaichkowsky’s (1985) definition of involvement, but many use only some items in her Personal Involvement Inventory (PII) to measure the construct. Second, scholars drop items from a validated operationalization based on a single study’s measurement model evaluation, which is inherently perilous, as a single study can produce random results that cannot be generalized (Summers 2001). Changes in operationalizations should result from repeated evaluations showing a consistent and generalizable pattern (as noted previously). If scholars detect a problem with a measurement operationalization, they could report results for both the complete and the reduced operationalization (perhaps in a Web appendix) to ensure the availability of results comparable to other studies.
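The recommendation to report both the complete and the reduced operationalization can be illustrated with a small simulation (a hypothetical sketch with fabricated data; the item counts, loadings, and the arbitrary 4-item subset are our assumptions, not values from any cited study):

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate 200 respondents answering 10 hypothetical involvement-style
# items on a 1-7 scale; items share a common trait plus random noise.
n_respondents, n_items = 200, 10
trait = rng.normal(0.0, 1.0, size=(n_respondents, 1))
noise = rng.normal(0.0, 1.0, size=(n_respondents, n_items))
items = np.clip(np.round(4.0 + 1.2 * trait + noise), 1, 7)

# Complete operationalization: mean of all validated items.
full_score = items.mean(axis=1)

# Ad hoc reduced operationalization: an arbitrary 4-item subset.
reduced_score = items[:, :4].mean(axis=1)

# Reporting both scores (and their correlation) keeps the results
# comparable with studies that used the complete scale.
r = np.corrcoef(full_score, reduced_score)[0, 1]
print(f"full vs. reduced correlation: {r:.2f}")
```

Even when the two composites correlate highly, they are not interchangeable across studies; reporting both, as suggested above, preserves comparability.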

Unjustified operationalization changes risk causing inconsistency among studies, which imperils knowledge accumulation within the field (Bergkvist and Langner 2020; Bergkvist and Eisend 2021). Scholars should refrain from adapting, changing, or replacing existing measures unless there are solid grounds for doing so, and they should report these grounds.

Discussion

Our article discusses when changes in definitions and operationalizations are justified and when to avoid them. Advertising scholars should change definitions and operationalizations only when justified to ensure that knowledge can accumulate and our field can progress. Scholars should explicitly state why they have made changes, reporting previous definitions and their limitations or the domain changes that motivate change. Scholars should prevent unjustified definition and operationalization changes from spreading and impacting future studies.

Tables 1 through 3 outline the factors that justify changes in definitions and operationalizations and how they should be improved. If a definition is lacking in the specification of the construct’s characteristics, the improved definition should specify the necessary attributes of the definition using precise primitive terms (or specify the meaning of the terms) that clarify the construct’s meaning. If a definition has unclear boundaries toward other constructs, those boundaries should be clarified and overlapping constructs consolidated. Overspecific definitions should be simplified and definitional relationships excluded (Table 1). If the construct’s domain changes, the impact on the definition should be analyzed and the definition changed accordingly (Table 2). If a definition has been (justifiably) changed, a change in the operationalization should follow. Similarly, demonstrated limitations in existing operationalizations and the development of better or more parsimonious operationalizations justify changes in operationalizations (Table 3).

Scholars should justify the definition change by outlining the limitations in the existing definitions and how the new definition resolves these. Thus, studies changing definitions should analyze existing definitions carefully, outline their limitations, and put forward the updated definition together with explanations of how the new definition addresses the limitations in the existing definition (for a recent example of this process, see Bergkvist and Taylor, forthcoming). The process for operationalization change should follow a similar logic.

Importantly, scholars should always include definitions and corresponding references, as perhaps the most prominent problem is that key constructs are not defined at all. Bergkvist and Langner (2017) found that about one-third of advertising studies did not define attitude toward the ad, attitude toward the brand, or purchase intention, even though the minimum requirement for a definition was merely a reference to a previous study that had defined the construct. Similarly, Katsikeas et al. (2016) found that less than 10% of marketing studies defined the marketing performance construct. One straightforward step toward ensuring construct transparency would be to require that manuscripts submitted for publication include a table with definitions and operationalizations, including references and changes, for all constructs in the study (Bergkvist and Langner 2019).

An ideal solution for promoting consensus-based evolution toward optimal definitions and operationalizations would be for advertising scholars to organize an online database of definitions and operationalizations of the field’s constructs. The database would store all updates to construct definitions and operationalizations and explain the reasons for the changes. The categories provided in Tables 1 through 3 can serve as an orientation for assigning the correct reasons for a change. A construct database can help scholars identify the changes made to a construct and whether these changes are justified, and it can support scholars in finding the best available definition and operationalization for a construct. This support would be beneficial for scholars who apply various constructs in their work, including constructs with which they are less familiar and whose status in the construct life cycle they do not know. Setting up and maintaining a construct database would be a demanding task requiring advertising scholars’ concerted effort, and it might not be attainable in the short run. However, the American Academy of Advertising and the European Advertising Academy could take the first step by forming a joint task force to conduct a feasibility study and propose a plan for setting up the database.
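To make the proposal concrete, a database record might pair each construct with its current definition and operationalization plus an auditable change log keyed to the justification categories above (a hypothetical sketch; all field names and the record layout are our assumptions, not an existing standard):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ConstructChange:
    """One documented change to a construct (hypothetical record format)."""
    date: str        # ISO date of the change, e.g., "2020-06-01"
    kind: str        # "definition" or "operationalization"
    reason: str      # justified-change category, e.g., "domain change"
    reference: str   # study that documents and justifies the change

@dataclass
class ConstructRecord:
    """A construct entry with its current state and full change history."""
    name: str
    definition: str
    operationalization: str
    reference: str
    changes: List[ConstructChange] = field(default_factory=list)

    def change_log(self) -> List[str]:
        # Chronological summary so scholars can audit whether each
        # change was justified before adopting the construct.
        return [
            f"{c.date}: {c.kind} changed ({c.reason}; see {c.reference})"
            for c in sorted(self.changes, key=lambda c: c.date)
        ]
```

A record like this would let a scholar retrieve the current best definition and trace every change back to its stated justification before reusing or further modifying the construct.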

Working toward better construct definitions and operationalizations is the advertising research community’s responsibility and is in its shared interest. We hope the structure for improvement that we have proposed can contribute to this ongoing process by increasing scholars’ awareness of the issues and suggesting how to address them.

Disclosure Statement

The authors have no conflicts of interest to declare.

Additional information

Notes on contributors

Lars Bergkvist

Lars Bergkvist (PhD, Stockholm School of Economics, Sweden) is a professor of marketing, Norwegian School of Hotel Management, University of Stavanger.

Martin Eisend

Martin Eisend (PhD, Free University Berlin, Germany) is a professor of marketing, European University Viadrina.

References

  • Aaker, Jennifer L. 1997. “Dimensions of Brand Personality.” Journal of Marketing Research 34 (3): 347–356. doi:10.1177/002224379703400304
  • Bergkvist, Lars. 2015. “Appropriate Use of Single-Item Measures is here to Stay.” Marketing Letters 26 (3): 245–255. doi:10.1007/s11002-014-9325-y
  • Bergkvist, Lars, and John R. Rossiter. 2007. “The Predictive Validity of Multiple-Item versus Single-Item Measures of the Same Constructs.” Journal of Marketing Research 44 (2): 175–184. doi:10.1509/jmkr.44.2.175
  • Bergkvist, Lars, and John R. Rossiter. 2009. “Tailor-Made Single-Item Measures of Doubly Concrete Constructs.” International Journal of Advertising 28 (4): 607–21. doi:10.2501/S0265048709200783
  • Bergkvist, Lars, and Tobias Langner. 2017. “Construct Measurement in Advertising Research.” Journal of Advertising 46 (1): 129–40. doi:10.1080/00913367.2017.1281778
  • Bergkvist, Lars, and Tobias Langner. 2019. “Construct Heterogeneity and Proliferation in Advertising Research.” International Journal of Advertising 38 (8): 1286–302. doi:10.1080/02650487.2019.1622345
  • Bergkvist, Lars, and Tobias Langner. 2020. “Four Steps toward More Valid and Comparable Self-Report Measures in Advertising Research.” International Journal of Advertising 39 (5): 738–55. doi:10.1080/02650487.2019.1665398
  • Bergkvist, Lars, and Martin Eisend. 2021. “The Dynamic Nature of Marketing Constructs.” Journal of the Academy of Marketing Science 49 (3): 521–41. doi:10.1007/s11747-020-00756-w
  • Bergkvist, Lars, and Charles R. Taylor. “Reviving and Improving Brand Awareness as a Construct in Advertising Research.” Journal of Advertising. Forthcoming.
  • Bruner II, Gordon C. 1998. “Standardization & Justification: Do Aad Scales Measure up?” Journal of Current Issues & Research in Advertising 20 (1): 1–18. doi:10.1080/10641734.1998.10505073
  • Cacioppo, John T., and Richard E. Petty. 1982. “The Need for Cognition.” Journal of Personality and Social Psychology 42 (1):116–131. doi:10.1037/0022-3514.42.1.116
  • Cacioppo, John T., Richard E. Petty, and Chuan Feng Kao. 1984. “The Efficient Assessment of Need for Cognition.” Journal of Personality Assessment 48 (3):306–307. doi:10.1207/s15327752jpa4803_13
  • Carlson, Kevin D., and Andrew O. Herdman. 2012. “Understanding the Impact of Convergent Validity on Research Results.” Organizational Research Methods 15 (1): 17–32. doi:10.1177/1094428110392383
  • Casimir, Gerda J., and Hilde Tobi. 2011. “Defining and Using the Concept of Household: A Systematic Review.” International Journal of Consumer Studies 35 (5):498–506. doi:10.1111/j.1470-6431.2011.01024.x
  • Churchill, Gilbert A. Jr. 1979. “A Paradigm for Developing Better Measures of Marketing Constructs.” Journal of Marketing Research 16 (1): 64–73. doi:10.1177/002224377901600110
  • Gerson, Kathleen, and Stacy Torres. 2015. “Changing Family Patterns.” In Emerging Trends in the Social and Behavioral Sciences, edited by R. Scott, M. C. Buchmann and S. Kosslyn, 1–15. Hoboken, NJ: Wiley-Blackwell.
  • Geuens, Maggie, Bert Weijters, and Kristof De Wulf. 2009. “A New Measure of Brand Personality.” International Journal of Research in Marketing 26 (2):97–107. doi:10.1016/j.ijresmar.2008.12.002
  • Graf, Laura K. M., Stefan Mayer, and Jan R. Landwehr. 2018. “Measuring Processing Fluency: One versus Five Items.” Journal of Consumer Psychology 28 (3): 393–411. doi:10.1002/jcpy.1021
  • Harford, Tim. 2021. The Data Detective: Ten Easy Rules to Make Sense of Statistics. New York: Riverhead Books.
  • Katsikeas, Constantine S., Neil A. Morgan, Leonidas C. Leonidou, and G. Tomas M. Hult. 2016. “Assessing Performance Outcomes in Marketing.” Journal of Marketing 80 (2):1–20. doi:10.1509/jm.15.0287
  • Landy, Justin F., Miaolei (Liam) Jia, Isabel L. Ding, Domenico Viganola, Warren Tierney, Anna Dreber, Magnus Johannesson, et al. 2020. “Crowdsourcing Hypothesis Tests: Making Transparent How Design Choices Shape Research Results.” Psychological Bulletin 146:451–479. doi:10.1037/bul0000220
  • Lee, Wei-Na, Jerome D. Williams, and Carrie La Ferle. 2004. “Diversity in Advertising: A Summary and Research Agenda.” In Diversity in Advertising: Broadening the Scope of Research Directions, edited by J. D. Williams, W.-N. Lee and C. P. Haugtvedt, 3–20. Mahwah, NJ: Erlbaum.
  • MacKenzie, Scott B., and Richard J. Lutz. 1989. “An Empirical Examination of the Structural Antecedents of Attitude toward the Ad in an Advertising Pretesting Context.” Journal of Marketing 53 (2): 48–65. doi:10.2307/1251413
  • Malhotra, Naresh K., Sung S. Kim, and James Agarwal. 2004. “Internet Users' Information Privacy Concerns (IUIPC): The Construct, the Scale, and a Causal Model.” Information Systems Research 15 (4): 336–355. doi:10.1287/isre.1040.0032
  • Nunnally, Jum C., and Ira H. Bernstein. 1994. Psychometric Theory. 3rd ed. New York: McGraw-Hill.
  • Podsakoff, Philip M., Scott B. MacKenzie, and Nathan P. Podsakoff. 2016. “Recommendations for Creating Better Concept Definitions in the Organizational, Behavioral, and Social Sciences.” Organizational Research Methods 19 (2): 159–203. doi:10.1177/1094428115624965
  • Ptok, Annette, Rupinder P. Jindal, and Werner J. Reinartz. 2018. “Selling, General, and Administrative Expense (SGA)-Based Metrics in Marketing: Conceptual and Measurement Challenges.” Journal of the Academy of Marketing Science 46 (6):987–1011. doi:10.1007/s11747-018-0589-2
  • Rapp, Justine, Ronald Paul Hill, Jeannie Gaines, and R. Mark Wilson. 2009. “Advertising and Consumer Privacy: Old Practices and New Challenges.” Journal of Advertising 38 (4): 51–61. doi:10.2753/JOA0091-3367380404
  • Rosengren, Sara, Martin Eisend, Scott Koslow, and Micael Dahlén. 2020. “A Meta-Analysis of When and How Advertising Creativity Works.” Journal of Marketing 84 (6): 39–56. doi:10.1177/0022242920929288
  • Rossiter, John R. 2011. Measurement for the Social Sciences: The C-OAR-SE Method and Why It Must Replace Psychometrics. Berlin: Springer.
  • Rossiter, John R. 2016. “How to Use C-OAR-SE to Design Optimal Standard Measures.” European Journal of Marketing 50 (11):1924–1941. doi:10.1108/EJM-10-2016-0546
  • Rubenking, Bridget, and Cheryl Campanella Bracken. 2021. “Binge Watching and Serial Viewing: Comparing New Media Viewing Habits in 2015 and 2020.” Addictive Behaviors Reports 14: 1–5. doi:10.1016/j.abrep.2021.100356
  • Rundin, Ksenia, and Jonas Colliander. 2021. “Multifaceted Influencers: Toward a New Typology for Influencer Roles in Advertising.” Journal of Advertising 50 (5): 548–564. doi:10.1080/00913367.2021.1980471
  • Scholz, Joachim. 2021. “How Consumers Consume Social Media Influence.” Journal of Advertising 50 (5): 510–527. doi:10.1080/00913367.2021.1980472
  • Schweinsberg, Martin, Michael Feldman, Nicola Staub, Olmo R. van den Akker, Robbie C. M. van Aert, Marcel A. L. M. van Assen, Yang Liu, et al. 2021. “Same Data, Different Conclusions: Radical Dispersion in Empirical Results When Independent Analysts Operationalize and Test the Same Hypothesis.” Organizational Behavior and Human Decision Processes 165: 228–249. doi:10.1016/j.obhdp.2021.02.003
  • Sierra, Jeremy J., Michael R. Hyman, and Robert S. Heiser. 2012. “Ethnic Identity in Advertising: A Review and Meta-Analysis.” Journal of Promotion Management 18 (4): 489–513. doi:10.1080/10496491.2012.715123
  • Smith, H. Jeff, Sandra J. Milberg, and Sandra J. Burke. 1996. “Information Privacy: Measuring Individuals' Concerns about Organizational Practices.” MIS Quarterly 20 (2): 167–96. doi:10.2307/249477
  • Suddaby, Roy. 2010. “Construct Clarity in Theories of Management and Organization.” Academy of Management Review 35 (3): 346–57. doi:10.5465/AMR.2010.51141319
  • Summers, John O. 2001. “Guidelines for Conducting Research and Publishing in Marketing: From Conceptualization through the Review Process.” Journal of the Academy of Marketing Science 29 (4): 405–15. doi:10.1177/03079450094243
  • Teas, R. Kenneth, and Kay M. Palan. 1997. “The Realms of Scientific Meaning Framework for Constructing Theoretically Meaningful Nominal Definitions of Marketing Concepts.” Journal of Marketing 61 (2):52–67. doi:10.2307/1251830
  • Thorson, Kjerstin S., and Shelly Rodgers. 2006. “Relationships between Blogs as EWOM and Interactivity, Perceived Interactivity, and Parasocial Interaction.” Journal of Interactive Advertising 6 (2): 5–44. doi:10.1080/15252019.2006.10722117
  • Tsai, Wan-Hsiu Sunny, and Linjuan Rita Men. 2013. “Motivations and Antecedents of Consumer Engagement with Brand Pages on Social Networking Sites.” Journal of Interactive Advertising 13 (2): 76–87. doi:10.1080/15252019.2013.826549
  • Wacker, John G. 2004. “A Theory of Formal Conceptual Definitions: Developing Theory-Building Measurement Instruments.” Journal of Operations Management 22 (6): 629–50. doi:10.1016/j.jom.2004.08.002
  • Weijters, Bert, Maggie Geuens, and Hans Baumgartner. 2013. “The Effect of Familiarity with the Response Category Labels on Item Response to Likert Scales.” Journal of Consumer Research 40 (2): 368–81. doi:10.1086/670394
  • Zaichkowsky, Judith L. 1985. “Measuring the Involvement Construct.” Journal of Consumer Research 12 (3): 341–52. doi:10.1086/208520
