
Replication and George the Galapagos tortoise

Don E. Schultz, Gayle Kerr & Philip Kitchen

ABSTRACT

This paper conceptualises replication research as one of the most needed areas of ongoing academic activity. Using George the Galapagos tortoise as a metaphor for the lack of replication research, it is argued that only by replicating research studies over time can solid theory be developed. For the most part, advertising and marketing communication research consists of non-replicated, one-shot, point-in-time experiments which, once accepted and published by a journal, become the litany of the academic community and are then deified by the citation process. The paper begins by reviewing the background of replication research in the marketing communication domain and applies it to current thinking and publication trends. Reasons for the lack of replication research are presented and some conclusions are drawn for those seeking to confirm or challenge existing research. An agenda is provided for the development and publication of replication research.

You have probably heard of George the Galapagos Tortoise. The last of his kind, Lonesome George was discovered in 1972 and became a ‘conservation icon’. The need to protect George resulted in a number of measures to conserve the habitat of giant tortoises, inspired some blind dates with tortoises of other species and guaranteed a stream of tourists to the small Ecuadorian island. Much of the world mourned his death in June 2012.

While it might seem unlikely, this tortoise has a number of things in common with the replication of existing marketing and marketing communication research. Perhaps the most obvious of these is longevity. George lived for around 100 years, and the debate over replication research in the social sciences has already spanned more than six decades (Cox 1948; Hubbard and Armstrong 1994; Kerr, Schultz, and Lings 2016; Kitchen et al. 2014; Monroe 1992a; Reid, Soley, and Winner 1981). Over those six decades, studies replicating existing and often revered research findings have sometimes resulted in significant change or the demise of mainstream theories. Although most editors tend to avoid the issue, occasionally a forward-thinking editor, such as Kenton Monroe (1992b) at the Journal of Consumer Research, or the Journal of Advertising in 2016, has re-ignited the debate. Unfortunately, in our view, these few attempts at replication occur too seldom and the time span between them is too great. Now is the time to build another fire under the replication issue, with the hope of getting it firmly and permanently placed on the research agenda.

This renewed attention to replication seems increasingly important as we examine the technological changes impacting marketing, advertising and communication today. The rapid shift in marketplace power from sellers to buyers as a result of digital developments is clearly increasing the need for solid, provable theory to support and explain the analytical results technology is supplying. In short, today we have massive amounts of data which supply the 'what' we know about marcom, but it is the 'why', which comes from theory, that we seem to be missing. Therefore, an increased focus on replication certainly seems to be in order.

Much of this replication debate has centered on another thing that replication research and George have in common: the unlikelihood of actually replicating at all. Starting in 1993, scientists tried to encourage George to replicate, but despite experimentation with tortoises of different subspecies, the proposed cross-breeding failed to produce any significant results prior to his death in 2012. Likewise, replication research has massively underperformed in the advertising and marketing literature. Very few replication studies in the marketing, consumer behavior and advertising domain have ever been published (Reid, Soley, and Winner 1981; Monroe 1992a, 1992b; Hubbard and Armstrong 1994; Madden, Easley, and Dunn 1995; Hubbard and Vetter 1996; Eisend 2006; Kitchen et al. 2014), much to the loss of scholarly rigor. As Eisend, Franke, and Leigh (2016, 1) suggest, 'To trust the findings of seminal studies, no matter how carefully conducted and widely cited, replications are needed'.

One could suggest that this is a denial of the laws of physical science, which always demand replication as the key to generalization, especially for social science scholars who consider themselves thorough, well-grounded theory builders. Or perhaps the social sciences are so well grounded that replication is not as necessary as in other fields of research endeavor? Leaving aside the speciousness of that question, perhaps, like George, who belongs to the species that inspired Charles Darwin, this lack of replication is an artifact of evolution? We may have shown or proven it once, and believe that is sufficient. After all, a snapshot of what was then the current situation is better than no investigation at all.

Seemingly, there are so many shiny new toys in the various forms of technology to distract us that we just move on. Certainly, there is a weight of evidence that the publication process encourages the pursuit of one-off innovation, particularly in analytic techniques. That leaves little time, effort or reward for replication, which many consider 'uncreative', lacking new conceptual thinking or failing to advance new ideas (Easley, Madden, and Dunn 2000). That said, there is increasing evidence that many papers in management, marketing and communication do not display innovation per se, but rather a form of creeping incrementalism, that is, studies which add small luster to some overarching theory. Often this is done without ever asking whether the theory is inexorably anchored in its own time, and thus has become less relevant, or perhaps even irrelevant, in the marketing, marcom and managerial world of today and tomorrow.

One could also add that, like George, evolution and replication research tend to move very slowly. Onlookers could therefore be pardoned for not having noticed even minor changes, sometimes for decades. Hence, the purpose of this research note is to try to get things moving again. It reviews the background of replication research in the marketing and marcom domain and applies it to current thinking and publication trends. From that, we attempt to draw some conclusions for those seeking to confirm or challenge existing research by providing an avenue for the publication of replication research.

We begin by defining replication and reviewing the state of replication research.

Replication research defined

A theory is 'a systematically related set of statements, including some law-like generalizations, that is empirically testable … a systemized structure capable of both explaining and predicting phenomena' (Hunt 1991, 4). Theory builds through scientific endeavor and what Popper (1968) describes as a cycle of conjecture and refutation. Interestingly, conjecture breeds innovation, while refutation demands replication. Hence, to build and verify theory, both are necessary. As Popper (1968, 45) said,

“Only by such repetitions can we convince ourselves that we are not dealing with a mere isolated “coincidence”, but with events which, on account of their regularity and reproducibility, are in principle inter-subjectively testable”.

Put another way, if it is not falsifiable, then it is not scientific. That's quite a challenge today, given the academy's acceptance of one-time snapshots of the reactions of a group of college sophomores to various stimuli, which are then 'proven' with data dredging or p-hacking.

A replication is 'a duplication of a previously published empirical study concerned with assessing whether similar findings can be obtained upon repeating the study' (Hubbard and Armstrong 1994, 236). (Note here the emphasis on the word empirical, which is too often missing in much academic research.) As a result of this requirement, in marketing science, and particularly in marcom, advertising and other communication areas, replication studies are massively underrepresented, and often simply non-existent, in the literature (Leone 1995; Hunter 2001; Eisend 2006). Thus, we too often seem to rely on the citations of marketing scholars in their ongoing work as a surrogate for actual proof of the concept itself.

In one of the first studies of replication in the field of marketing, Brown and Coney (1976) made the distinction between replication and replication with extension. A replication with extension is a duplication of a previously published empirical study that investigates the generalizability of earlier findings. Its purpose is to test whether outcomes generalize beyond the original context. To do this, it changes some aspect of the original design, by modifying the manipulated or measured variables, introducing additional variables, changing the population or testing the impact of the passage of time on the results (Nagashima 1977; Hubbard and Armstrong 1994). Any and all are relevant in developing useful replication research.

Replication research is also described by its timing: it is either simultaneous (the same study conducted across different scenarios) or sequential (a second study repeats the initial study using the same or different stimuli at another point in time) (Raman 1994). Of these two alternatives, Monroe (1992a, Preface), writing in JCR, suggested,

“Generally sequential replication is preferable because the research has been conducted at a different time, with different people and provides more information on the scope or boundary of the original study’s outcomes.”

Many other descriptors of replication research, such as 'strict' or 'faithful reproduction', have been used in the literature. However, the typology by Easley, Madden, and Dunn (2000), shown in Table 1, perhaps offers the most useful synthesis.

Table 1. Typology of replication for social sciences

Another way of examining replication research was proposed by Tsang and Kwan (1999). They suggested considering the data set, the population, and the measurement and analysis as determinants of the type of replication. An exact replication, which uses the same population and the same measurement and analysis, aims to determine whether the findings are replicable and is useful soon after the original study has been published. An empirical generalization closely follows the procedures of the original study but uses a different population. A conceptual extension uses different procedures but the same population. To test a theory, Tsang and Kwan (1999) propose that an exact replication is advisable. And that has often been the sticking point for marketing and communication research … the inability to develop the exact same scenario or sample in a dynamic system.

Replication research in marketing, marcom and advertising

As early as 1948, Cox observed that marketing lacked 'law-like principles', or what could be described as theory. Almost forty years later, Hunt (1983, 11) also argued in favor of 'law-like generalizations that are empirically testable'. Academics embraced this view and positivism became the dominant paradigm in the US and elsewhere. In fact, in 1986, Hirschman reported that only one study in the Journal of Marketing was non-positivist. This research tradition appears to have continued, with Hanson and Grimmer (2007) finding that, in a sample of 1,000 articles in the Journal of Marketing, European Journal of Marketing and Journal of Services Marketing between 1993 and 2002, more than 70% of the articles were quantitative or positivist in nature. This has given rise to the current focus on methodology rather than findings. Clearly, methodologies can be replicated. The more important issue is whether or not the findings are consistent over time as well. If a theory is replicated and shown to no longer work, it can be revised or replaced. This guarantees that theory is not a historical artifact, but a working or operational premise.

An increase in the number of qualitative articles was observed up to 1999, but this has been reversed in recent years. Hanson and Grimmer (2007, 66) concluded, 'Academic marketing remains dominated by the goal of making generalizable statements from an objectivist framework'. Just the thought of that concept raises questions in the minds of true scholars. Calls have clearly been made to challenge even the well-established generalizations, but these appear to have fallen on deaf ears (see Sheth and Sisodia 1999); at least, there is no evidence that anyone heeded the call. Sort of like trees falling in the forest when no one is there.

Given this research orientation, which seems to be focused on theory building, it would seem that replication would be an inherent and ongoing part of the marketing, marcom and advertising research agenda. Monroe (1992b, Preface) contends, 'Developing a tradition of systematic replications would facilitate the task of synthesizing results and assessing the state of knowledge within a research domain.' Yet Table 2 attests that this is not the case.

Table 2. Replication research in the marketing and advertising discipline

Table 2 documents replication studies in the field of marketing and advertising across a thirty-year period. Noticeably, and alarmingly, remarkably few replication studies have ever been published in the marketing and advertising scholarly journals. The consequence of this is pointed out by the authors of one of these studies. Reid, Soley, and Winner (1981, 3) suggest,

“The results revealed that replications are seldom published in advertising research. Therefore, it is possible that empirical results are uncritically absorbed into advertising literature as verified knowledge.”

That seems increasingly to be the case, given the focus on 'publishing impact' or 'rank', by which most journal value is approximated. Citations, rather than scholarly replications, appear to be the purpose of most journals today.

Some rays of light

Encouragingly, in the 1990s, a number of replication studies achieved acceptance and publication. As discussed earlier, this came primarily as the result of the efforts of Kenton Monroe, who was editor of the Journal of Consumer Research in the early 1990s. Monroe had a clear editorial policy of encouraging and accepting replication research for publication. Yet, in spite of that, 'research as replication rarely has been published' (Monroe 1992b, Preface). Monroe's lone crusade on the need for replication research inspired other editors to rethink their position, with many changing their editorial policies to encourage the publication of replication studies. In 1998, Winer revived the 'Research Notes and Communications' section of the Journal of Marketing Research. In 2000, the Journal of Business Research, under the editorial instigation of Arch Woodside, examined replication studies in a special issue. A year later, Mick introduced a 'Re-Inquiries' section in the Journal of Consumer Research. In 2002, Eden wrote an extended editorial for the Academy of Management Journal in which he proposed that management practice would be enhanced by 'a large number of high-quality replication studies'. The body of work on replication was also swelled by the highly successful, but unfortunately no longer existent, replication corner in the International Journal of Research in Marketing.

Dashed by reality

Despite these encouraging notes from editors, along with changes to editorial policy, it seems that the rate of replication research has continued to decline over the past few decades. For example, in 1994, Hubbard and Armstrong examined replication studies in three leading marketing journals, the Journal of Marketing, Journal of Marketing Research and Journal of Consumer Research, between 1974 and 1989. They found no replication studies among the 1,120 papers in these three major marketing journals. Only 1.8% of papers were extensions, with 12 challenging the studies they sought to replicate and only three providing full confirmation. Thus, there is little output or support for a true theory-based field of inquiry.

The Hubbard and Armstrong study was, however, replicated in 2007 by Evanschitzky, Baumgarth, Hubbard and Armstrong, who reported that the rate of replications with extensions in the same three journals between 1990 and 2004 had fallen to only 1.2% (just 16 replications out of 1,389 articles). Of the 16 replications, 44%, or 7 studies, confirmed the previously published study, 31% provided partial support and 25% did not support the results of the previous study. One wonders about the overarching effect on the original study outcomes. Were the findings of the original study amended, altered, or held to be irrelevant? One case in point is the Elaboration Likelihood Model of Petty and Cacioppo (1986). The most cited model in advertising, with 7,872 citations attributed to the 1986 paper, it was replicated by Kerr et al. (2015). The ELM was shown not to work in the US, UK and Australia. Despite questioning an important model, the paper has so far attracted just 16 citations, and textbooks continue to publish the model without criticism. Therefore, it seems likely that citations are the primary measure of value ascribed to research. The original papers continue to attract the majority of citations, despite replications questioning their authority. So why do we bother?
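As a quick arithmetic check on the counts quoted above (a sketch using only the figures reported in the two audits; the rounding is ours), the reported percentages map onto whole studies as follows:

```python
# Arithmetic check on the replication rates quoted above (our rounding).
extensions_1994 = round(0.018 * 1120)   # ~20 extensions among 1,120 papers (1974-1989)
rate_2007 = 16 / 1389                   # ~0.012, i.e. roughly 1.2% (1990-2004)
confirmed = round(0.44 * 16)            # 7 replications confirmed the original study
partial = round(0.31 * 16)              # 5 provided partial support
unsupported = round(0.25 * 16)          # 4 did not support the original results
print(extensions_1994, round(rate_2007, 3), confirmed, partial, unsupported)
# confirmed + partial + unsupported == 16
```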

Reasons for replication

One of the strongest reasons for replication comes from the physical sciences, where replication is essential for the conduct of good science. As Epstein (1980, cited in Easley, Madden, and Dunn 2000) said, 'There is no more fundamental requirement in science than that the replicability of the findings be established'. Replication affirms that the science works and has rigor and relevance across time. This knowledge can then underpin and support new experimentation and idea generation.

However, in the social sciences, and particularly in marketing, marcom and advertising, human knowledge and/or behavior is often the unit of analysis, generating more background factors and greater variability than in the physical sciences (Easley, Madden, and Dunn 2000). One would think most researchers would support replication as essential to knowledge advancement and as a check against questionable results. History shows this to be an incorrect assumption. For example, classic studies, such as the 1956 subliminal advertising study which suggested 'Hungry? Eat Popcorn' (see Wilkie 1986), were accepted for years, even disseminated through textbooks and in the media, despite being badly flawed or spurious (Packard 1981).

Leone and Schultz (1980) make the point that although there is much empirical activity in marketing, without the replication of these studies we have little generalizability and hence little real knowledge, nor the law-like generalizations in marketing that Cox (1948) and Sheth and Sisodia (1999) so desired. Only replication, rather than 'single-shot' studies, leads to generalizability (Jacoby 1978).

“The literature is replete with one-shot studies of phenomena whose veracity is unquestioned and whose findings are disseminated as implicit laws. If the goal of science is to produce universal truths, inherent to this goal is the task of adequate theory development and refinement, in which the criterion of reproducibility should be inextricably intertwined” (Easley, Madden, and Dunn 2000, 83).

It appears the discipline has conveniently ignored, passed over or simply sublimated these simple scientific truths.

In their recent special issue on 'Re-inquiries in Advertising Research', Eisend, Franke, and Leigh (2016) warn of the inherent dangers in overgeneralizing from any single study. They make their argument on statistical grounds, such as Type I and Type II errors and effect size. Interestingly, they also make the point that the reporting of findings is a selective process. Drawn from many different tests and analyses, only the selected and significant findings are ever reported, despite the potential value of nonsignificant replications.
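To make the statistical point concrete, the short simulation below (our illustrative sketch, not an analysis from Eisend, Franke, and Leigh; the sample size, alpha level and number of simulated studies are arbitrary assumptions) shows how often a 'significant' single-shot result on a true null effect would also survive one independent replication.

```python
# Minimal simulation sketch (our illustration, not from the cited paper):
# with a true null effect, how often does a 'significant' single-shot result
# also come out significant in one independent replication?
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_per_group, alpha, n_studies = 50, 0.05, 10_000

def significant_study():
    # Two groups drawn from the same population, so the true effect is zero.
    a = rng.normal(0.0, 1.0, n_per_group)
    b = rng.normal(0.0, 1.0, n_per_group)
    return stats.ttest_ind(a, b).pvalue < alpha

originals = np.array([significant_study() for _ in range(n_studies)])
replications = np.array([significant_study() for _ in range(n_studies)])

print("Single-shot false-positive rate:", originals.mean())                   # ~0.05
print("Also significant on replication:", (originals & replications).mean())  # ~0.0025
```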

It is also important to assess validity across time and context. Rather than a once-and-for-all definitive experiment, which as Reid, Soley, and Winner (1981) suggested could be uncritically absorbed as theory, Campbell and Stanley (1963, 3) propose verification at other times and under other conditions in order to remain confident in the results. This view is supported by many other researchers, such as Cohen (1990, 1311), who said, 'A successful piece of research doesn't conclusively settle an issue, it just makes some theoretical proposition to some degree more likely. Only successful future replication in the same and different settings provides an approach to settling the issue.' So, while there is substantial support and head-nodding for replication research, there is, as yet, some smoke but no fire.

Limitations of replications

In response to Campbell and Stanley's call for continuous and multiple experimentation, Rosenthal (1991) makes the point that 'replications are possible only in a relative sense'. No two social situations are the same, nor is research performed under identical conditions. Thus, it may well be that marketing, marcom and advertising research is not science at all but simply a number of associated or aligned experiments which we have called the development of a 'theory base' to make ourselves feel more secure in our endeavors. The problem, of course, is that consumer behavior is dynamic. It is in a continuous state of flux, which challenges even the most dedicated researcher to find comparable situations. According to the Behavioral Perspective Model (Foxall 1993), consumer behavior is an evolutionary process, governed by environmental events such as physical, social and temporal elements. Both subjects and researchers will change with the passage of time (Rosenthal and Rosnow 1984). As a result of this 'relativeness' of replicability, it is suggested that being able to replicate a study does not mean conclusive verification, nor does the failure to replicate signal conclusive falsification. Some have called this the replication paradox (Bornstein 1991; Rosenthal 1991); that is, our findings can often be right and wrong at the same time.

The other main criticism of replication research is its perceived lack of creativity. It is not exciting but often deadly boring. Indeed, such research is deemed to lack the imagination of innovative studies and to fail to contribute new concepts or original approaches to theory development (Easley, Madden, and Dunn 2000). Or, seemingly more important, it fails to introduce new methodologies to study the same variables and situations, and its findings are dismissed as irrelevant. Yet, without replication, can we be sure that there is a theory at all, or simply some similar experiments tied together with statistical wizardry?

Reasons why replication studies are not published

Many of the authors of the studies cited in Table 2 have offered reasons why replication studies are not published. One of the most frequently cited reasons is editorial bias. Neuliep and Crandall (1990) reported that replication was not an everyday, operational issue for many editors of social science journals, with 42% never having received a direct replication for review and only 5% explicitly encouraging replication studies. Out of sight, out of mind may well be a major reason for the dearth of replication studies. Madden, Easley, and Dunn (1995) found that social science editors considered strict replication to be of minimal value and would only publish the material as a research note or review article, or if the research included new variables or areas of investigation. 'The discipline as a whole pays scant attention to the matter,' suggested one editor they interviewed. Echoing comments in the previous section, another editor said, 'Replication isn't considered creative.' And, like advertising, if it is not considered creative, few peer accolades will follow.

A natural consequence of editorial bias and scant publication is the lack of scholarly respect or value for replication held by marketing academics. Scholars are discouraged from replication by this lack of respect and by the fear of not being considered creative or 'leading edge' in their research, which creates limited publication opportunities (Madden, Easley, and Dunn 1995; Monroe 1992b). This lack of scholarly respect for replication is perhaps engendered in reviewers' responses to replication articles, where there is 'nothing new' and no way for the reviewer to demonstrate their grasp of the subject. This seems to have created and perpetuated an almost unbreakable publication cycle, despite the best of editorial intentions (Easley, Madden, and Dunn 2000).

Another key driver of the lack of published replication studies, echoed by many of the authors of replication research, is the paradox of replication. Researchers who find no support for previous work are often criticized for not being true to the original method, yet those who do find support for previous work may equally be criticized for contributing nothing new (Monroe 1992a, 1992b). This is perhaps compounded by what Rosenthal (1991) calls the 'file drawer' problem, where publication is highly biased towards positive results, which effectively militates against unpleasant challenges from researchers who cannot duplicate earlier work. Non-significant findings are difficult to interpret and there is an unwillingness to consider them as empirical contributions (Bornstein 1991). Few academics seemingly want, or perhaps would even dare, to swim against the tide of editorial preference, especially in the early stages of career development.

It is also important to make the point that while the number of published replication studies is negligible, intra-study replication research is increasing. Unlike true replication studies, which are conducted by independent researchers, intra-study replication tests the main effect by incorporating several studies in the one research design (Kwon et al. 2017). This provides greater reliability than single-shot studies alone.

Agenda for replication, replicators, and replicants

Replication research can be very rewarding for researchers who do not want to get published or promoted, who wish to be seen as uncreative or simply as recalcitrant ogres. It appears equally simple for editors who, as a result of this inherent lack of scholarly respect for replication, rarely receive replication studies and, when they do, whose reviewers generally find some way to reject them. The result? Why bother with replication? While this advantageously increases the rejection rate of the journal, and thus the perception of academic credibility, it does little to add to the overall knowledge base.

Despite the occasional editorial or published study, the lack of replication has become an entrenched part of the marketing, marcom and advertising domain. As a result, researchers who are considered more creative, or who are more widely published, drive the fate of marketing theory, all based on relatively less rigorous one-shot studies and their subsequent verification-through-publication. That seems to make them widely accepted 'marketing truths'. Based on our review of the replication research stream, we argue that some of our most highly cited studies may well be simply unverified conjecture, based on a sample of 100 or so college sophomores at a large Midwestern university. If that situation continues, our hope is that in another twenty years someone will write a paper (that might be published) just like this one.

We owe our discipline and ourselves as researchers, reviewers and editors more than this. Here is what we need to do.

Replication efforts should focus on theory which changes the nature of the research domain

More than 25 years after Monroe's (1992b, Preface) editorials, we concur with his suggestion that 'Pioneering research results that subsequently are cited frequently by other researchers should be examined for their reproducibility. Offering evidence of the citations record of the research and lack of previous replication research would be an important rationale for a replication study.' This means that well-cited theory should be replicated, since it holds the greatest potential for perpetuating a stream of unverified and perhaps non-theory-supporting research. This seems particularly true today, since many of our foundational marketing, marcom and advertising theories were developed in the mass-marketing, mass-media days of the 1970s and 1980s (Kerr and Schultz 2010; Kerr, Schultz, and Lings 2016; Kitchen et al. 2014). It would be relatively easy to identify these discipline-shaping theories through citations, guaranteeing integrity across time and change. It may well be that we continue to support concepts and approaches that once were true, but are no longer so.
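Purely as an illustration of how such candidates might be identified (our sketch, not a procedure proposed in the text; the query term, result limit and reliance on the public Crossref index are our assumptions), citation counts can be pulled programmatically:

```python
# Illustrative sketch: rank candidate 'foundational' papers by citation count
# using the public Crossref API. The query term and cut-off are ours.
import requests

resp = requests.get(
    "https://api.crossref.org/works",
    params={
        "query.bibliographic": "elaboration likelihood model advertising",
        "sort": "is-referenced-by-count",
        "order": "desc",
        "rows": 5,
    },
    timeout=30,
)
resp.raise_for_status()
for item in resp.json()["message"]["items"]:
    title = (item.get("title") or ["(untitled)"])[0]
    cites = item.get("is-referenced-by-count", 0)
    print(f"{cites:>7}  {title}")
```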

Replication is best undertaken by researchers with no connection to the study or its original authors

The replication of previous work by independent researchers reduces concerns about the interaction of the researcher and the study (Monroe 1992a). It avoids biases such as experimenter expectations and associations or connections with the original study (Hubbard and Armstrong 1994) or its authors. It is also important that researchers conducting a replication are independent of the research tradition or school of thought of the original researcher, so that there is no vested interest by mentors or students in perpetuating the research stream. Johnson and Eagly (1989) expressed a major concern in noting that only researchers associated with Petty and Cacioppo's Elaboration Likelihood Model (ELM), the Ohio State researchers, were able to generate results consistent with the original ELM predictions. While reviewers may not be able to ascertain the independence of researchers, it surely falls to authors to state their dependence or independence and to editors to note any associations. This is, or should be, a key element of any journal submission, replication or not. Yet we know of no marketing, marcom or advertising journal that requires such a declaration. While this is a serious challenge to the PhD-clone system, it makes good research sense.

Replications with an extension can generalize findings, while exact replication tests theory

Replications with extensions appear to be the most published kind of replication study (Neuliep 1991). Their value lies in their ability to determine or verify whether the original findings are generalizable beyond their original context. Sometimes this involves targeting new populations, introducing new variables or even, as mentioned above, just standing the test of time (Nagashima 1977; Hubbard and Armstrong 1994). Given the nature of marketing, marcom and advertising research, this is probably the best we can hope for in terms of replication in the near future.

Weighed against this, Rosenthal (1991) considers that replication with an extension may encourage 'imprecise' or modified replications, which protect the professional reputation of the original researchers. Tsang and Kwan (1999) also advise the use of an exact replication in order to test theory. In a dynamic system, however, this seems like an 'impossible dream'.

Intra-study replication and the replication battery

'If one replication is good, two are better' (Rosenthal 1991, 28). Rosenthal (1991) proposes a battery of replications rather than a single replication. This battery may vary in its degree of similarity to the original study, allowing further insight into its external validity. Repeated replication guarantees generalizability, verifies real knowledge and protects the integrity of marketing theory (Campbell and Stanley 1963; Jacoby 1978; Leone and Schultz 1980; Cohen 1990; Easley, Madden, and Dunn 2000). Perhaps we need to adopt the practice of the physical sciences, which demand field trials and multiple replications across different contexts before a theory can be accepted as truth. As noted earlier, intra-study replication is increasingly being used to provide a more stringent examination of a phenomenon. Perhaps this should become a requirement for publication, or at least before a study can be cited by others. Thus, all experiments would become 'exploratory' and ultimately require replication before entering the research stream. This strongly argues for longitudinal studies, which seem increasingly possible given our ability to capture, store and manage huge amounts of data.
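As a minimal numerical sketch of what pooling a battery of replications might look like (the effect sizes and standard errors below are hypothetical, and inverse-variance weighting is one common choice, not a method prescribed by Rosenthal):

```python
# Minimal sketch: pool effect sizes from a battery of replications using
# inverse-variance (fixed-effect) weighting. All numbers are hypothetical.
import math

# (effect size, standard error) for an original study and three replications
studies = [(0.40, 0.15), (0.22, 0.10), (0.05, 0.12), (0.18, 0.09)]

weights = [1 / se ** 2 for _, se in studies]
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"pooled effect = {pooled:.2f}, 95% CI half-width = {1.96 * pooled_se:.2f}")
```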

Mandatory documentation makes replication easier and more rigorous

Often the good intention to replicate fails because the original study was not well documented. To ensure precision in replication, we need the questionnaires, the scales and the detailed experimental design of the original study. These could be secured as a condition of publication, with experimental materials collected and stored online.

Change editorial practice, not just policy

It is relatively easy to state an editorial policy that does not discriminate against replication. It is another thing to enact it. This involves not only editorial intent, but also the cooperation of reviewers and the participation of researchers. There are several ways to give such a policy force. Firstly, editors can advise reviewers of the value of replication in the social sciences, to the journal and to theory-building within the discipline. By giving them permission to accept unpopular, uncreative but potentially paradigm-changing replication papers, they can help protect the theory base. Secondly, editors can invite replication studies through special issues and calls for papers. This assures researchers of the journal's intent to publish good replication studies. Special journals whose sole intent is to publish replications might even be considered. Thirdly, editors can help guarantee the rigor of these studies by encouraging multiple replications and by requiring disclosure of any association with the original authors of the replicated study, along with the provision of experimental materials. Finally, longitudinal studies should be encouraged. That would militate against the use of the 'convenience sample of college sophomores who participate for extra credit' which seems so prevalent in today's 'scholarly journals'.

Weighed against all these great reasons for replication is the impact factor of the journal. Will editors challenge the impact factor, or do we need to consider a journal just for replications? While such a journal may have a lower impact factor, it is likely to have a greater impact on research and theory development.

Conclusion: are particular studies a sacred tortoise?

There is no doubt that, in the end, George the Galapagos tortoise was one of a kind. This invested him with almost sacred rights and elevated him to a position of reverence. The affection we might feel for this long-lived and slow-moving reptile may also be felt for some of our 'special' marketing and advertising theories. Despite evidence that several well-cited studies are incapable of, or resistant to, replication, like sacred shibboleths they continue to be accepted and revered as the dominant paradigms in their specific domains (Jacoby 1978; Hubbard and Armstrong 1994). This reverence for some of our most fundamental marketing studies is perhaps an outcome of academic investment. For example, the previously mentioned and highly cited Elaboration Likelihood Model (ELM) has inspired an entire school of researchers, the group Johnson and Eagly (1989) call 'the Ohio State' researchers, who are most diligent (and sometimes almost exclusive) in upholding the original theoretical framework.

Additionally, many academic journals or their editors, invest in a theory and by publishing it, validate it as a ‘good’ framework for research. Immediately, marketing lemmings leap onto the theoretical bandwagon. However, what would it mean if the theory were demonstrated to have significant failings? This stamp of approval from academic journals encourages other researchers to also invest in the theory, knowing it is publishable (as long as you do not attempt replication!).

These sacred theories also seem to be the most popular in marketing and advertising texts; thus, they have likely contributed to the education of generations of both academicians and practitioners. Staying with the ELM example, it would be hard to find a marketing or advertising text which does not include it, from the introductory texts of Belch and Belch (2017) to the more thoughtful theory handbooks such as Thorson and Rodgers (2012). Nor is there a journal which does not publish articles adopting it as a theory platform. ELM is considered 'advertising gospel' in spite of its lack of replication by independent researchers (Kerr et al. 2015; Kitchen et al. 2014).

So why risk all of this academic investment when we have the replication paradox, when non-replication could mean that the problem lies with the replication itself? Or just maybe, via replication, we may draw closer to the ideals espoused by the physical sciences. Equally, this may be a bridge too far. And who knows, maybe George would have fathered a score of children if he had only met the right tortoise.

Disclosure statement

No potential conflict of interest was reported by the authors.

Additional information

Notes on contributors

Don E. Schultz

Don E. Schultz, Professor (Emeritus-in-Service) of Integrated Marketing Communication, Northwestern University, Evanston, IL. Don has researched, lectured, consulted and held seminars on integrated marketing communication, marketing, branding, advertising and communication management in Europe, South America, Asia-Pacific, the Middle East, Australia and North America. He is the author of 26 books and over 150 trade, academic and professional articles. Don shares his IMC expertise with adjunct professorial appointments at QUT Australia, Cranfield School of Management UK, Tsinghua University and Peking University in China, and the Swedish School of Economics in Finland.

Gayle Kerr

Gayle Kerr is a Professor in Advertising, IMC and Digital at the Queensland University of Technology Business School in Australia. Her key research areas are consumer empowerment, self-regulation, creativity, advertising avoidance and engagement and IMC. Her research informs both her leading Australian textbooks, as well as her teaching, which has been acknowledged with a national AAUT Teaching Excellence Award and a Billy I. Ross Education Award. She is the former President of the Australia and New Zealand Academy of Advertising and served on the Executive of the American Academy of Advertising.

Philip Kitchen

Philip Kitchen is a Professor of Marketing, Salford University, UK and an affiliate Professor in the ICN Business School, Nancy, France. His research interests lie in the fields of marketing and corporate communication, marketing theory and applications of marketing as seen from a consumer, rather than an organizational viewpoint. Kitchen has published 20 books and over 150 papers in academic journals across the world. He is currently a Fellow of the CIM, RSA, HEA, Institute of Directors UK and the Institute of Marketing Science US.

References
