
Highly cited educational technology journal articles: a descriptive and critical analysis

Pages 216-229 | Received 06 Aug 2021, Accepted 19 Oct 2022, Published online: 31 Oct 2022

ABSTRACT

Citations are valuable capital in the academy, as the number of citations is the most frequently used indicator in evaluating the quality of papers, journals, researchers, and universities. Thus, the characteristics of highly cited articles (HCA) have become a common research topic, but the approach has been mainly descriptive, with no profound critical reflection on what kind of research is cited, where the research is from, where the research is published, and what these things mean for edtech research. This paper contributes to this need by providing a descriptive and critical analysis of 200 highly cited articles from 10 edtech journals. To summarize the key findings, a ‘typical’ edtech HCA is a Western-based review article or quantitative research paper reporting positive findings from higher education, published in a high-impact-factor general edtech journal by a major publisher.

Introduction

The overarching question this paper addresses is: what are the common characteristics of highly cited educational technology research articles? Citations are an important research topic for several reasons. First, they are valuable capital in the academy, as ‘the number of citations is the most frequently used indicator in evaluating the quality of papers, researchers, research centers and universities’ (Tahamtan, Safipour Afshar, and Ahamdzadeh 2016). The prestige of academic journals is also determined by citation-based metrics like impact factor and CiteScore. Being widely cited implies that the theoretical ideas or empirical findings presented in a paper have influenced other researchers and, thus, shaped the research field (Stremersch, Verniers, and Verhoef 2007). It is hardly surprising, then, that citations have been studied frequently for several decades (Kunnath et al. 2021), with the characteristics of highly cited articles (HCA) being one major sub-theme (e.g., Elgendi 2019; Tahamtan, Safipour Afshar, and Ahamdzadeh 2016).

HCA research often aims to map and describe the whole variety of factors that correlate with citations, which often leads to the identification of dozens of variables (Tahamtan, Safipour Afshar, and Ahamdzadeh 2016). On the one hand, such research can provide detailed and fine-grained information. On the other, the findings can also be somewhat fragmented and decontextualized. One example is that a high number of tables is positively associated with citations (Elgendi 2019). While tables are claimed to enhance the trustworthiness of a research publication (Cloutier and Ravasi 2021), the high number of tables in HCAs is more likely an indicator of the research methodology: tables are more common in quantitative research, which is cited more than qualitative research (Antonakis et al. 2014; Farsani et al. 2021; Swygart-Hobaugh 2004). Put differently, by scratching the surface a bit, the seemingly value-free finding concerning the number of tables appears to signpost an imbalance between these two major methodological paradigms.

The example above suggests that a purely descriptive approach may, metaphorically speaking, fail to see the forest for the trees. Thus, in the present paper, we combine descriptive and critical review strategies (see Grant and Booth 2009). In other words, we go beyond mere description and include a critical interpretation of the findings (i.e., the distribution of research methodologies). Furthermore, we interpret the data in relation to the prominent characteristics of educational technology (edtech) research, such as the inherent positivity (tech is good for education) identified in various articles (e.g., Bigum and Kenway 2005; Mertala 2021; Selwyn 2016). To keep the article focused, we opted to concentrate on five themes: journals, article types, research methodologies, findings, and contexts (an overview of relevant literature is provided in the following section). The precise research questions we seek answers to are:

  • What kind of journals are HCAs published in (publisher, impact factor, journal scope [general/specialized])?

  • How are different article types (e.g., empirical/theoretical/methodological/review) distributed among HCAs?

  • How are different research methodologies (e.g., quantitative/qualitative/mixed method) distributed among empirical HCAs?

  • What kind of findings are presented in empirical HCAs?

  • How are different geographical and educational contexts distributed among HCAs?

We tackle these questions by analyzing 200 HCAs published between 2015 and 2019 in 10 different edtech journals. Our research design complements the existing edtech-themed HCA research, which has focused on individual journals (Bond and Buntins 2018), as well as on specific research topics (blended learning; Halverson et al. 2014) and particular educational contexts (K-12 schools; Pérez-Sanagustín et al. 2017). The few inclusive and cross-journal HCA analyses have been done with relatively small samples ranging from nine to 50 papers (Bodily, Leary, and West 2019; Valtonen et al. 2022; West and Borup 2014) (see Note 1). Lastly, all the aforementioned reviews can be labeled descriptive, as they outline the common features of the HCAs without a profound critical reflection on what kind of research is cited, where the research is from, where the research is published, and what these things mean for edtech research.

Background

Since the vast majority of articles are published in peer-reviewed academic journals, it would be artificial to study HCAs without paying attention to journal-related characteristics. HCAs are typically published in journals that have high impact factors (IF) (Aksnes 2003; Duyx et al. 2017). Additionally, the journals publishing HCAs are more often general than specialized journals with narrow, focused scopes (Tahamtan, Safipour Afshar, and Ahamdzadeh 2016). These two features are more intertwined than separate, as journals with high IFs are more often general than specialized (Kelly and Jennions 2006). IF is also a key feature for scholars when they select a target journal for their manuscript (Ritzhaupt, Sessums, and Johnson 2012; Tahamtan, Safipour Afshar, and Ahamdzadeh 2016), which, partially at least, reflects the metric-based merit and evaluation policies of contemporary academia (Muller 2019). Educational journals with high IFs are typically those from major publishers like Elsevier, Wiley, Springer, Sage, and Routledge (Scimago Journal & Country Rank, n.d.), which suggests that IF-based journal selection works in favor of major for-profit publishers.

Furthermore, as previously mentioned, qualitative research is typically cited less than quantitative research (Antonakis et al. 2014; Farsani et al. 2021). One explanation relates to the different citation cultures inhabited by quantitatively and qualitatively oriented researchers. A study of sociological research implied that authors of quantitative articles cited other articles from quantitative-dominated journals but virtually excluded citations to articles from qualitative journals, while authors of qualitative articles cited articles from quantitative-dominated as well as qualitative-specialized journals (Swygart-Hobaugh 2004). There is also evidence that the readership of qualitative articles is smaller than that of quantitative ones (Jamali 2018). One more possible explanation is that journals with high IFs publish only a little qualitative research (Avenier and Thomas 2015), and emerging, if only indicative, evidence shows that similar patterns are present in edtech journals (Pérez-Sanagustín et al. 2017). In terms of possible reasons, it has been suggested that journal editors prefer submissions that are most likely to be cited to avoid damaging the journal’s high IF, which is considered a signal of prestige (Caon 2017).

As a related notion, empirical articles reporting positive findings (where the research hypothesis is supported) are cited more than those reporting negative findings (Duyx et al. 2017). Such a finding is likely to be witnessed in our study too. Edtech is an essentially positive project, which is present in the common terminology of the field, including phrases like computer-supported collaborative learning and technology-enhanced learning (Selwyn 2016), which both signal an optimistic and solutionist stance towards edtech (Mertala 2021). An illustrative, if anecdotal, example is that the most cited edtech paper identified by Valtonen et al. (2022) explored ‘e-learning success factors’ (Selim 2007). Besides quantitative studies, review, method, and theoretical articles are also cited more than qualitative articles (Antonakis et al. 2014), with reviews typically being overrepresented among HCAs (Aksnes 2003; Judge et al. 2007). One explanation for the popularity of review articles is that they synthesize vast amounts of literature and provide integrative knowledge on the status of a field, thus offering solid ground for citing authors to build their arguments on (Antonakis et al. 2014).

Lastly, education is a contextual practice, and findings from one cultural, political, geographical, or age-specific research context may not be easily transferable to others. Thus, it is valuable to know more about the contextual variation of edtech-themed HCAs. By context we mean both geographical (country/continent) and educational (e.g., primary/secondary/higher education) contexts. Western and especially English-speaking countries are overrepresented among HCAs (Azer and Azer 2019), and indicative evidence suggests that this also applies to edtech research: contributions from South America, the Middle East, and Africa are underrepresented in edtech journals (Bond and Buntins 2018; Bond, Zawacki-Richter, and Nichols 2019; Pérez-Sanagustín et al. 2017; Valtonen et al. 2022; Zawacki-Richter and Latchem 2018). Evidence further shows that papers from (often Westernized parts of) Asia are common in the major edtech journals (Bodily, Leary, and West 2019; Bond, Zawacki-Richter, and Nichols 2019; Pérez-Sanagustín et al. 2017; Zawacki-Richter and Latchem 2018). Lastly, research suggests that formal education, especially (primary and secondary) school and higher education, are the most common research contexts of the studies published in major edtech journals (Bond and Buntins 2018; Bond, Zawacki-Richter, and Nichols 2019; Pérez-Sanagustín et al. 2017; Valtonen et al. 2022).

The current study

Data collection

Our sampling strategy followed the principles of intensity sampling, in which the researcher seeks ‘excellent or rich examples of the phenomenon of interest, but not highly unusual cases’ (Patton 2002, 234). Previous edtech-themed citation research suggests that the most cited papers overall are often about ‘up-and-coming technology’ (Bodily, Leary, and West 2019, 72) or written by distinguished scholars (Valtonen et al. 2022). To avoid the possible bias caused by ‘hot topics’ and ‘big names’, we decided not to search for the 200 most cited papers overall. Instead, we identified 10 major journals (comprising 17% of all edtech journals, as estimated by Valtonen et al. 2022) and selected the 20 most cited articles from each of them, resulting in a sample of 200 articles, which is larger than in previous edtech-themed HCA analyses (Bodily, Leary, and West 2019; Bond and Buntins 2018; Valtonen et al. 2022; West and Borup 2014). The sample of 200 papers was considered wide enough to gain an understanding of the general characteristics of HCAs (see Elgendi 2019; Tahamtan, Safipour Afshar, and Ahamdzadeh 2016) while simultaneously being manageable for inductive and interpretative critical analysis.
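For illustration, the selection logic can be sketched in a few lines of Python (a minimal sketch with placeholder journal and article names; not our actual dataset or tooling):

```python
from typing import Dict, List, Tuple

def select_hcas(
    journals: Dict[str, List[Tuple[str, int]]],  # journal -> [(title, citations), ...]
    per_journal: int = 20,
) -> List[Tuple[str, str, int]]:
    """Pick the `per_journal` most cited articles from each journal."""
    sample: List[Tuple[str, str, int]] = []
    for journal, articles in journals.items():
        top = sorted(articles, key=lambda a: a[1], reverse=True)[:per_journal]
        sample.extend((journal, title, cites) for title, cites in top)
    return sample

# With 10 journals, the procedure yields 10 x 20 = 200 HCAs.
demo = {"Journal A": [("Paper 1", 310), ("Paper 2", 120), ("Paper 3", 95)]}
print(select_hcas(demo, per_journal=2))  # the two most cited from 'Journal A'
```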

We also decided to focus on relatively new publications, that is, those published between 2015 and 2019, for the following reasons. First, various historical analyses have been done in recent years (Bodily, Leary, and West 2019; Bond, Zawacki-Richter, and Nichols 2019; Pérez-Sanagustín et al. 2017; Valtonen et al. 2022; Zawacki-Richter, Alturki, and Aldraiweesh 2017; Zawacki-Richter and Latchem 2018; Zawacki-Richter and Naidu 2016). Second, since older papers are cited more than recent papers (Web of Science 2021), expanding the timespan to 10 or 20 years would most likely lead to a situation where (almost) all the analyzed articles would be from the early part of the period, namely the early 2000s or early 2010s. For instance, in the review by Valtonen et al. (2022), only 1 of the 20 most cited papers published between 2011 and 2021 was published in the latter half of that timeframe. Third, we wanted to provide a reference point of what HCAs looked like pre-COVID-19, as the pandemic has affected publication and citation practices in various fields of research (Ioannidis et al. 2021).

We opted to use Google Scholar (GS) as the database and GS’s h5-index as the metric. GS describes the h5-index as ‘the h-index for articles published in the last 5 complete years’ (GS, n.d.), which in the present study covers the years 2015–2019 (see Note 2). These decisions were based on the following reasons. GS indexes all the major scientific databases as well as minor ones and thus includes more journals than the major scientific databases (Martín-Martín et al. 2021; see also West and Borup 2014). There is also a significant amount of extra citation coverage in GS that is not found in any of the other data sources (Martín-Martín et al. 2021), because GS also counts citations from sources other than journal articles (e.g., doctoral theses, policy documents). A high number of citations in GS is therefore understood to be a proxy of a wider impact than citations from purely academic databases (see also Valtonen et al. 2022). The journals were selected by going through GS’s field-specific metrics list (see Note 2) one by one in descending order. During this screening, one outlet, International Conference on Learning Analytics and Knowledge, was excluded as it is a conference proceedings series and therefore differs from the other outlets. The journals included in this study are outlined in Table 1. The complete list of articles is provided as an external online document (see Note 3).
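To make the metric concrete: the h5-index is the h-index computed over a journal’s articles from the last five complete years, that is, the largest number h such that h of those articles have at least h citations each. A minimal sketch, with made-up citation counts purely for illustration:

```python
def h_index(citations: list[int]) -> int:
    """Return the largest h such that h items have at least h citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Applied to a journal's 2015-2019 articles, this value is its h5-index.
print(h_index([120, 80, 40, 5, 3]))  # -> 4 (four articles with >= 4 citations each)
```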

Table 1. Journals included in the study ordered by h5-index.

Analysis

The analysis process was guided by an abductive approach, which combines deductive and inductive reasoning (Grönfors 2011). The research questions were informed by previous research on HCAs and edtech journal articles (see the Background section), which provided theoretical threads for the analysis. To retrieve all the relevant information, we created a spreadsheet for each journal. Table 2 presents the parts of the spreadsheet that were used in the present study, examples from one article, and a brief account of where the information was sourced and how it was screened.

Table 2. An overview of the data extraction and analysis process.

Following a theoretical thread does not mean that the theory is taken as a given or that the role of the analysis process is simply to test the theory. Instead, theoretical threads are complemented with inductive reasoning to open up new ways of thinking about the phenomenon under investigation (Dey 2003). Inductive notions made from individual articles were first collected in a separate document. Then, they were compared with the whole sample to identify whether they were isolated incidents or emerging themes. An illustrative example of the latter was the lack of references to qualitative methodology in mixed methods papers (see the Findings section for further details). Last, following Antonakis et al.’s (2014) strategy, we also identified the 50 most cited articles from our dataset and compared their characteristics with the whole sample to determine whether certain features were more prominent in the highest quartile.
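For illustration, this final comparison step can be sketched as follows (the file and column names are hypothetical stand-ins for our coding spreadsheets):

```python
import pandas as pd

# Hypothetical coded dataset: one row per article, with its citation count
# and coded features such as article type, methodology, and continent.
df = pd.read_csv("articles.csv")

top50 = df.nlargest(50, "citations")  # the highest quartile of the 200 articles

# Share of each article type in the whole sample vs. the 50 most cited,
# revealing features that are more prominent among the very top papers.
comparison = pd.DataFrame({
    "whole_sample_%": df["article_type"].value_counts(normalize=True) * 100,
    "top_50_%": top50["article_type"].value_counts(normalize=True) * 100,
}).round(1)
print(comparison)
```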

Findings

Our findings are presented in five sub-sections. We start with findings related to the journals and publishers, followed by the results regarding article types and methodologies, the nature of the findings, and the educational and geographical research contexts.

HCAs are published in high IF journals by major publishers

All eight journals with a known IF had a higher IF than the average for edtech journals, which has been established as 1.173 (Zurita et al. 2016). This finding supports previous research suggesting that HCAs are typically published in journals with high IFs (Aksnes 2003; Duyx et al. 2017). Eight of the ten journals were published by Springer (3), Elsevier (2), Wiley (2), and Routledge (1), which suggests that HCAs are mainly published via major publishing companies. The remaining two journals (JETS and IRRODL) were published by universities, and both were open access (OA) journals with no publishing fees. Lastly, 8 of the 10 journals can be described as general edtech journals that publish articles from various perspectives. The two exceptions were IHE (contextual focus on higher education) and IRRODL (thematic focus on open and distributed learning). This finding is similar to those of other studies from different fields, which have noted that HCAs are more often published in general than in specialized journals (Kelly and Jennions 2006). See Table 3 for a summary.

Table 3. Impact factors and publishers of HCAs ordered by the impact factor.

Edtech HCAs are quantitative empirical studies or review articles

Table 4 summarizes the distribution of the different article types. Within the different forms of empirical research, quantitative studies comprised 39% of the whole sample and 66% of the empirical articles. The contrast with qualitative research was notable: only six percent of the whole sample and 11 percent of the empirical articles were qualitative studies. Review articles accounted for 22% of the whole sample and 34% of the 50 most cited papers. Both numbers are higher than the occurrence of review articles in general (see Watson et al. 2021). The popularity of review articles is often explained by the fact that they produce integrative knowledge of a particular field of research, which makes them valuable sources for other scholars to build their arguments on (Antonakis et al. 2014) while avoiding detailed discussion (and excessive citation) of earlier work (Harwood 2009).

Table 4. Distribution of different article types in the whole sample and the 50 most cited.

The dominance of quantitative research over qualitative research can possibly be explained by the difference in the citation cultures inhabited by quantitatively and qualitatively oriented researchers (Swygart-Hobaugh 2004) or by journals’ preference for quantitative studies (Caon 2017). That being said, the imbalance between the use of quantitative and qualitative methods was not restricted to their occurrence. As an inductive finding, we noticed that 11 out of 28 mixed methods papers contained no references to literature about qualitative methods. The following extract from one of the reviewed papers (see Note 4) provides an illustrative example.

After qualitative researchers on our team analyzed the data multiple times, a decision was made to examine the 15 answers for each survey participant; in effect, treating these answers as one short interview per respondent.

The reader is not informed what kinds of analyses were conducted on the open-ended survey questions. Additionally, the decision to treat open-ended survey data as ‘short interview[s]’ is methodologically questionable. Unlike in surveys, the forms of knowledge produced in research interviews are constructed through the interaction between the interviewer and the interviewee (Kvale and Brinkman 2009). The interviewer is also able to use probes to obtain highly detailed responses (Keats 1999), a feature not available in open-ended surveys. The lack of methodological literature is somewhat surprising, as scholars have reported that they consider citing methodological literature important for justifying the use of the chosen method and for making the methodology accessible by guiding readers to sources that provide more detailed descriptions (Harwood 2009).

Furthermore, an inductive inspection of the methods of the quantitative articles revealed that 61 out of 78 papers used survey data. Thirty-five relied solely on survey data, and 26 studies combined survey data with some other data. Most often (n = 18), the ‘other data’ were scores from different tests, including designs that use pre- and post-tests to investigate the effectiveness of research interventions.

Empirical edtech HCAs often report positive findings

Seventy-three of the 118 empirical articles were formed around hypotheses or hypothesis-like research questions that could be tested against the data. Of these, 68 reported positive findings, in which the hypotheses of the study were validated by the data, as illustrated in the following extracts:

According to the research results, gamification-based teaching practices have a positive impact upon student achievement and students’ attitudes toward lessons.

The results indicated that visitors who used AR guidance showed significant learning and sense of place effects

Findings indicated a significant difference in the learning achievement and motivation between the two groups, with students using the flipped classroom performing better

This finding may reflect positive publication bias, whereby reporting positive findings is preferred by authors and journals (Fanelli 2012). On the other hand, positivity is argued to be an inherent quality of edtech research (Bigum and Kenway 2005; Mertala 2021; Selwyn 2016), and locating one’s work in the pro-edtech zeitgeist may be a strategic choice to be part of the mainstream of the field.

The research context of edtech HCAs is most often higher education

As shown in Table 5, 40 papers were about school education (primary, secondary, and high school), whereas 66 papers were about higher education. The distribution differed from that identified by Bond, Zawacki-Richter, and Nichols (2019) in favor of higher education: they found that for each primary/secondary school education paper, 0.62 higher education articles were published, whereas in our data, the number was 1.65. This difference can be partly explained by the fact that one of the journals, IHE, specializes in higher education research. However, even if the 15 higher education-specific articles from IHE were removed, the relative distribution would remain higher education dominant (1.25).

Table 5. Distribution of the research contexts.

HCAs are written by researchers from Western and Westernized contexts

As summarized in Table 6, 76.5% of the whole sample and 80% of the 50 most cited articles were written by first authors from Western (Europe, North America, and Australia) contexts. A closer look at the Asian countries represented in the top 50 suggested that the highly cited papers were also mostly from Westernized Asian countries, namely Taiwan (N = 20). This notion is best explained by the fact that papers of Taiwanese origin were the fourth most common in C&E (Zawacki-Richter and Latchem 2018) and the third most common in BJET (Bond, Zawacki-Richter, and Nichols 2019) in general.

Table 6. Continent of the first author.

Discussion

The present paper is, to our knowledge, the first study to explore the characteristics of HCAs across the field of edtech research. According to our findings, a ‘typical’ edtech HCA is a review article or quantitative research paper reporting positive findings, conducted in a higher education context, published in a generic high-IF journal by a major publisher, and written by authors working in Western universities.

It is worth noticing that many of the aforementioned qualities apply to this paper. One exception is that Learning, Media and Technology (LMT) is not a general edtech journal but a specialized one, as it ‘seeks to include submissions that take a critical approach towards all aspects of education and learning, digital media and digital technology’ (LMT, n.d.). Echoing the findings of Ritzhaupt, Sessums, and Johnson (2012), we chose the target journal primarily due to the presumed fit between the content of the manuscript, the journal’s aims, and the journal’s readership, all factors that can be assumed to contribute positively to citation count. By saying this, we wish to make clear that we do not consider ourselves outside observers of citation and publishing cultures but insiders who have been socialized into certain practices. Thus, in the remaining paragraphs, we reflect on our choices in relation to what we consider the key findings of this study.

Dominance of quantitative research

Let us begin with the dominance of quantitative research. Because we have no information about the methodological motives behind the reviewed articles, we can only explain the reasoning behind our own contribution. While our article is not quantitative research per se, it nevertheless reports some of the findings in quantitative/numerical form, namely as frequencies and percentages. In our case, the (rather light) quantitative take was thought to support both the descriptive and the critical objectives of the study. For example, by exploring the relative distribution of quantitative and qualitative studies among HCAs, we were able to identify the imbalance between these two methodological traditions, which we understand to indicate a discrepancy in power relations.

While the uneven distribution of quantitative and qualitative research was an expected finding (see Antonakis et al. 2014; Swygart-Hobaugh 2004), the number of qualitative papers (6%) was notably smaller than the 15% found in the analysis of highly cited articles about ICT in K-12 schools (Pérez-Sanagustín et al. 2017). These numbers are in stark contrast with arguments suggesting that there is a ‘good balance overall between qualitative and quantitative methods’ (West and Borup 2014, 550) in edtech research or that ‘qualitative research has at last achieved full respectability in the academic sphere’ (Bailey 2014, 167). This disparity implies that the skewness in favor of quantitative studies may be restricted to HCAs (see also Bond and Buntins 2018) and/or highly ranked journals, as the editors of C&E have expressed that quantitative research dominates the submissions they receive (Twining et al. 2017). The missing references for qualitative methods in mixed methods articles may indicate that the authors and/or the peer reviewers of these particular articles were not familiar with qualitative methodology and methods.

The notable amount of survey data used in the quantitative studies is also worth discussing, especially when this finding is combined with the notion that higher education was the most common research context. While good surveys are laborious to put together, online surveys are still a less resource-intensive data collection method than in-depth interviews or ethnographic observations. Higher education, in turn, is the context in which the vast majority of researchers work, and it is the most prevalent context in edtech research in general (Valtonen et al. 2022). While higher education itself is a well-justified research context, it is worth asking whether the above-mentioned context–method combination is partly due to convenience, as research is always conducted within limited resources. In fact, convenience plays a role in this study as well: a major reason for launching this research project in 2020, in particular, was that the COVID-19 pandemic prevented us from doing ethnographic fieldwork involving human participants. Altogether, more research on the methodological choices in edtech research would be valuable.

Emphasis on positive findings

The fourth important finding is the high number of positive findings, in which the authors’ hypotheses were validated by the data. The number of papers with hypothesis-like research questions was relatively high (62% of empirical papers), which is best explained by the dominance of quantitative research discussed in the previous section. What we mean by this is that the differences between quantitative and qualitative research go beyond mere choices of method: the former seeks correlations and causalities (Hopkins 2008), while the latter aims to discover the essential qualities of a certain phenomenon (Miles, Huberman, and Saldana 2013). Thus, quantitative research often asks questions like ‘does A lead to B’ (causation) or ‘is C related to D’ (correlation), which can produce positive (yes it does/is) or negative (no it doesn’t/isn’t) findings. Qualitatively oriented research, like the present study, poses different kinds of questions: ‘what kind of findings are presented in empirical HCAs’, for instance, is an open research question, which cannot be answered ‘yes’ or ‘no’.

That said, a hypothesis-like research question does not in itself lead to positive findings. Besides the previously mentioned positive publication bias and inherent positivity of edtech research, certain traditions of academic writing may play a role as well. Authors typically build their arguments on supportive references (Harwood 2009), a strategy that is well suited to verbalizing hypothesis-like research questions (i.e., based on E, F, and G, we can assume H). In fact, in an earlier version of the present paper, the research questions were formulated in a more hypothesis-like manner. Many of the aspects we screened from the articles were based on findings of previous research. Since we wanted to acknowledge the supportive role of previous research as explicitly as possible, we initially opted for hypothesis-like questions and reported the related literature in the immediate context. However, the feedback we received from the anonymous peer reviewers and the editors of LMT led us to rephrase the research questions in a more open-ended form (without compromising the acknowledgement of previous research).

Scarcity of articles from the Global South

One more finding worth highlighting was the scarcity of articles from the Global South, which constituted only two percent of the sample. Educational practice and policy should ideally be based on research evidence (Slavin 2020). However, if educational reforms in the Global South are based on research conducted in selective Western settings, they can be thought of as one form of edtech colonialism through which teachers and students ‘conform to patterns of education developed in European or American contexts’ (Hussein 2012, 135; see also Lund 2022). As with qualitative research, the geographical imbalance appears to reflect the bigger picture of submission and publishing cultures: only five percent of the articles published in BJET and C&E between 2010 and 2018 were from Africa or South America (Bond, Zawacki-Richter, and Nichols 2019). Submissions from Africa and non-Westernized Asia are also notably more often desk-rejected than submissions from Western contexts (Heinrich et al. 2018). This notion implies quality issues, which can be due to a lack of resources, including access to literature databases and professional proofreading (for non-fluent English speakers), both assets the authors of the present paper have the privilege to use. Scholars from emerging and developing countries have also experienced that their work is ‘perceived as lesser quality than that of scientists from the United States and Europe’ (Matthews et al. 2020, 490), which may also be reflected in citation practices.

Final remarks

While our study has provided novel information, it is not without limitations. It is important to acknowledge that mere citation count tells nothing about the quality of the citations, which is a major limitation of the present study. Put differently, we do not know to what extent the referrers have built on (positive citation) or contested (negative citation) the HCAs studied in this paper. For instance, one publication the first author of the present paper cites frequently is Prensky’s Digital Natives – Digital Immigrants. The citations, however, are always negative, as Prensky’s concepts are used as an example of an evocative but false rhetoric that has had notable effects on teachers’ beliefs about students and digital technologies (see Mertala 2020, 27–28). Neither do we know how accurately the HCAs are cited. Previous research suggests that, in severe cases, the vast majority of citations can be inaccurate because the citers have relied on a secondhand source that misinterpreted the argument made in the original publication (Stang, Jonas, and Poole 2018).

Another limitation is that studies like the present one are always a snapshot of a certain moment in time. At the time of writing, GS has published a new h5 ranking covering the years 2016–2020. There are some changes in the top 10, as TT and JCAL were replaced by the International Journal of Educational Technology in Higher Education (IJETHE) and Computer Assisted Language Learning (CALL) (see Note 5), both high-IF journals published by major companies (IJETHE, 4.944, Springer; CALL, 4.789, Taylor and Francis), which is in line with our journal- and publisher-based findings. IJETHE, as the name implies, specializes in higher education, which suggests that the foothold of higher education research among HCAs is growing stronger. CALL, in turn, focuses on language learning, which means that two general journals were superseded by specialized ones, resulting in a 6–4 division in favor of general journals. However, since the differences between the h5-indexes at the ‘bottom’ of the top 10 and the closest ‘runner-ups’ are rather small, changes taking place in one year need to be interpreted with caution.

Lastly, we wish to highlight that all forms of the imbalances identified in this study are already recognized by some of the edtech journals, which have taken action to improve the situation. C&E editors have published guidelines for conducting and reporting qualitative studies (Twining et al. 2017), whereas BJET editors have created the ‘BJET Early Career Researchers Toolkit’ to support authors from underrepresented areas in the preparation of their manuscripts (Bond, Zawacki-Richter, and Nichols 2019). The editors of LMT, in turn, have expressed an explicit wish to receive more submissions grounded in post-human, socio-material, and feminist perspectives to enhance the diversity of theoretical and methodological approaches (Williamson, Potter, and Eynon 2019). They also encourage researchers to focus on ‘edtech pushbacks’ (Williamson, Potter, and Eynon 2019, 88–89), which challenges the inherent positivity of edtech research identified in this study and others. An additional method to increase diversity would be regular calls for thematic issues around underrepresented methodologies and geographical contexts (Gallagher and Knox 2019). Lund (2022) goes a step further and argues that researchers from developed countries should regularly seek international collaborations with researchers from developing countries to support the growth of research fields and to help deconstruct the Western hemispheric hegemony of research approaches and publishing practices. Future research will hopefully tell us whether these efforts have paid off.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Notes

1 To be precise, many of these studies have analyzed a larger sample of articles, but the HCA sub-sample has been small.

2 GS provides field-specific lists of journals with the biggest h5-indexes. The list of edtech journals can be found here: https://bit.ly/3oyYd7t. Because the h5-index is updated annually, a screenshot of the 2020 list can be found here: https://bit.ly/3KBctpu.

4 We do not provide the reference here, as the content of the extract (as an example of a more general phenomenon) is more important than naming the exact source.

5 To be precise, CALL was ranked 11th, but we decided to exclude the 7th highest ranked journal, International Journal of Instruction (IJI), which, based on a screening of its 20 most cited articles and its most recent issues, is not an edtech journal and thus was (for reasons unknown) placed in the wrong subsection of educational research. Only 5 of the 20 most cited articles in IJI were about edtech. Likewise, only one third (21/63) of the articles in the most recent issue (July 2022) were edtech-themed.

References

  • Aksnes, D. 2003. “A Macro Study of Self-Citation.” Scientometrics 56 (2): 235–246.
  • Antonakis, J., N. Bastardoz, Y. Liu, and C. Schriesheim. 2014. “What Makes Articles Highly Cited?” The Leadership Quarterly 25 (1): 152–179. doi:10.1016/j.leaqua.2013.10.014.
  • Avenier, M., and C. Thomas. 2015. “Finding One’s way Around Various Methodological Guidelines for Doing Rigorous Case Studies: A Comparison of Four Epistemological Frameworks.” Systemes D'information Management 20 (1): 61–98. doi:10.3917/sim.151.0061.
  • Azer, S., and S. Azer. 2019. “Top-cited Articles in Medical Professionalism: A Bibliometric Analysis Versus Altmetric Scores.” BMJ Open 9 (7): e029433. doi:10.1136/bmjopen-2019-029433.
  • Bailey, L. 2014. “The Origin and Success of Qualitative Research.” International Journal of Market Research 56 (2): 167–184.
  • Bigum, C., and J. Kenway. 2005. “New Information Technologies and the Ambiguous Future of Schooling—Some Possible Scenarios.” In Extending Educational Change, edited by A. Hargreaves, 95–115. Cham, NL: Springer.
  • Bodily, R., H. Leary, and R. E. West. 2019. “Research Trends in Instructional Design and Technology Journals.” British Journal of Educational Technology 50 (1): 64–79. doi:10.1111/bjet.12712.
  • Bond, M., and K. Buntins. 2018. “An Analysis of the Australasian Journal of Educational Technology 2013–2017.” Australasian Journal of Educational Technology 34 (4): n.p. doi:10.14742/ajet.4359.
  • Bond, M., O. Zawacki-Richter, and M. Nichols. 2019. “Revisiting Five Decades of Educational Technology Research: A Content and Authorship Analysis of the British Journal of Educational Technology.” British Journal of Educational Technology 50 (1): 12–63. doi:10.1111/bjet.12730.
  • Caon, M. 2017. “Gaming the Impact Factor: Where who Cites What, Whom and When.” Australasian Physical & Engineering Sciences in Medicine 40 (2): 273–276. doi:10.1007/s13246-017-0547-1.
  • Cloutier, C., and D. Ravasi. 2021. “Using Tables to Enhance Trustworthiness in Qualitative Research.” Strategic Organization 19 (1): 113–133. doi:10.1177/1476127020979329.
  • Dey, I. 2003. Qualitative Data Analysis: A User Friendly Guide for Social Scientists. London, U.K.: Routledge.
  • Duyx, B., M. Urlings, G. Swaen, L. Bouter, and M. Zeegers. 2017. “Scientific Citations Favor Positive Results: A Systematic Review and Meta-Analysis.” Journal of Clinical Epidemiology 88: 92–101. doi:10.1016/j.jclinepi.2017.06.002.
  • Elgendi, M. 2019. “Characteristics of a Highly Cited Article: A Machine Learning Perspective.” IEEE Access 7: 87977–87986. doi:10.1109/access.2019.2925965.
  • Fanelli, D. 2012. “Negative Results are Disappearing from Most Disciplines and Countries.” Scientometrics 90 (3): 891–904. doi:10.1007/s11192-011-0494-7.
  • Farsani, M. A., H. R. Jamali, M. Beikmohammadi, B. D. Ghorbani, and L. Soleimani. 2021. “Methodological Orientations, Academic Citations, and Scientific Collaboration in Applied Linguistics: What do Research Synthesis and Bibliometrics Indicate?” System 100: 102547. doi:10.1016/j.system.2021.102547.
  • Gallagher, M., and J. Knox. 2019. “Global Technologies, Local Practices.” Learning, Media and Technology 44 (3): 225–234. doi:10.1080/17439884.2019.1640741.
  • Google Scholar. n.d. https://scholar.google.com/intl/en/scholar/metrics.html#metrics.
  • Grant, M. J., and A. Booth. 2009. “A Typology of Reviews: An Analysis of 14 Review Types and Associated Methodologies.” Health Information & Libraries Journal 26 (2): 91–108. doi:10.1111/j.1471-1842.2009.00848.x.
  • Grönfors, M. 2011. Laadullisen tutkimuksen kenttätyömenetelmät [Field Methods of Qualitative Research]. Hämeenlinna, Finland: SoFia-Sosiologi-Filosofiapu Vilkka.
  • Halverson, L., C. Graham, K. Spring, J. Drysdale, and C. Henrie. 2014. “A Thematic Analysis of the Most Highly Cited Scholarship in the First Decade of Blended Learning Research.” The Internet and Higher Education 20: 20–34. doi:10.1016/j.iheduc.2013.09.004.
  • Harwood, N. 2009. “An Interview-Based Study of the Functions of Citations in Academic Writing Across two Disciplines.” Journal of Pragmatics 41 (3): 497–518. doi:10.1016/j.pragma.2008.06.001.
  • Heinrich, E., M. Henderson, and P. Redmond. 2018. “How International is AJET?” Australasian Journal of Educational Technology 34 (4): i–vi. doi:10.14742/ajet.4830.
  • Hopkins, W. G. 2008. Quantitative research design. http://www.citeulike.org/group/6675/article/3424132.
  • Hussein, A. 2012. “Freirian and Postcolonial Perspectives on E-Learning Development: A Case Study of Staff Development in an African University.” The International Journal of Critical Pedagogy 4 (1): 135–153.
  • Ioannidis, J., M. Salholz-Hillel, K. Boyack, and J. Baas. 2021. “The Rapid, Massive Growth of COVID-19 Authors in the Scientific Literature.” bioRxiv, doi:10.1101/2020.12.15.422900.
  • Jamali, H. R. 2018. “Does Research Using Qualitative Methods (Grounded Theory, Ethnography, and Phenomenology) Have More Impact?” Library & Information Science Research 40 (3–4): 201–207. doi:10.1016/j.lisr.2018.09.002.
  • Judge, T., D. Cable, A. Colbert, and S. Rynes. 2007. “What Causes a Management Article to be Cited—Article, Author, or Journal?” Academy of Management Journal 50 (3): 491–506. doi:10.5465/amj.2007.25525577.
  • Keats, D. 1999. Interviewing: A Practical Guide for Students and Professionals. Sydney: UNSW Press.
  • Kelly, C., and M. Jennions. 2006. “The h-Index and Career Assessment by Numbers.” Trends in Ecology & Evolution 21 (4): 167–170. doi:10.1016/j.tree.2006.01.005.
  • Kunnath, S. N., D. Herrmannova, D. Pride, and P. Knoth. 2021. “A Meta-Analysis of Semantic Classification of Citations.” Quantitative Science Studies 2 (4): 1170–1215. doi:10.1162/qss_a_00159.
  • Kvale, S., and S. Brinkman. 2009. Interviews: Learning the Craft of Qualitative Research Interviewing. Thousand Oaks, CA: Sage.
  • Learning, Media and Technology. n.d. “Aims and scope.” https://www.tandfonline.com/action/journalInformation?show=aimsScope&journalCode=cjem20.
  • Lund, B. D. 2022. “Is Academic Research and Publishing Still Leaving Developing Countries Behind?” Accountability in Research 29 (4): 224–231. doi:10.1080/08989621.2021.1913124.
  • Martín-Martín, A., M. Thelwall, E. Orduna-Malea, and E. Delgado López-Cózar. 2021. “Google Scholar, Microsoft Academic, Scopus, Dimensions, Web of Science, and OpenCitations’ COCI: A Multidisciplinary Comparison of Coverage via Citations.” Scientometrics 126 (1): 871–906. doi:10.1007/s11192-020-03690-4.
  • Matthews, K. R., E. Yang, S. W. Lewis, B. R. Vaidyanathan, and M. Gorman. 2020. “International Scientific Collaborative Activities and Barriers to Them in Eight Societies.” Accountability in Research 27 (8): 477–495. doi:10.1080/08989621.2020.1774373.
  • Mertala, P. 2020. “Misunderstanding Child-Centeredness: The Case of “Child 2.0” and Media Education.” Journal of Media Literacy Education 12 (1): 26–41. doi:10.23860/JMLE-2020-12-1-3.
  • Mertala, P. 2021. “‘It is Important at This Point to Make Clear That This Study is not “Anti-iPad”’: Ed-Tech Speak Around iPads in Educational Technology Research.” Learning, Media and Technology 46 (2): 230–242. doi:10.1080/17439884.2021.1868501.
  • Miles, M. B., A. M. Huberman, and J. Saldana. 2013. Qualitative Data Analysis. Thousand Oaks, CA: Sage.
  • Muller, J. Z. 2019. The Tyranny of Metrics. Princeton, NJ: Princeton University Press.
  • Patton, M. Q. 2002. Qualitative Research and Evaluation Methods. 3rd ed. Thousand Oaks, CA: Sage.
  • Pérez-Sanagustín, M., M. Nussbaum, I. Hilliger, C. Alario-Hoyos, R. Heller, P. Twining, and C. Tsai. 2017. “Research on ICT in K-12 Schools: A Review of Experimental and Survey-Based Studies in Computers & Education 2011 to 2015.” Computers and Education 104: A1–A15. doi:10.1016/j.compedu.2016.09.006.
  • Ritzhaupt, A., C. Sessums, and M. Johnson. 2012. “Where Should Educational Technologists Publish Their Research? An Examination of Peer-Reviewed Journals Within the Field of Educational Technology and Factors Influencing Publication Choice.” Educational Technology 52 (6): 47–56.
  • Scimago Journal & Country Rank. n.d. https://www.scimagojr.com/journalrank.php?category=3304.
  • Selim, H. M. 2007. “Critical Success Factors for e-Learning Acceptance: Confirmatory Factor Models.” Computers & Education 49 (2): 396–413. doi:10.1016/j.compedu.2005.09.004.
  • Selwyn, N. 2016. “Minding our Language: Why Education and Technology is Full of Bullshit … and What Might be Done About it.” Learning, Media and Technology 41 (3): 437–443. doi:10.1080/17439884.2015.1012523.
  • Slavin, R. 2020. “How Evidence-Based Reform Will Transform Research and Practice in Education.” Educational Psychologist 55 (1): 21–31. doi:10.1080/00461520.2019.1611432.
  • Stang, A., S. Jonas, and C. Poole. 2018. “Case Study in Major Quotation Errors: A Critical Commentary on the Newcastle–Ottawa Scale.” European Journal of Epidemiology 33 (11): 1025–1031. doi:10.1007/s10654-018-0443-3.
  • Stremersch, S., I. Verniers, and P. C. Verhoef. 2007. “The Quest for Citations: Drivers of Article Impact.” Journal of Marketing 71 (3): 171–193. doi:10.1509/jmkg.71.3.171.
  • Swygart-Hobaugh, A. 2004. “A Citation Analysis of the Quantitative/Qualitative Methods Debate's Reflection in Sociology Research: Implications for Library Collection Development.” Library Collections, Acquisitions, and Technical Services 28 (2): 180–195. doi:10.1080/14649055.2004.10765983.
  • Tahamtan, I., A. Safipour Afshar, and K. Ahamdzadeh. 2016. “Factors Affecting Number of Citations: A Comprehensive Review of the Literature.” Scientometrics 107 (3): 1195–1225. doi:10.1007/s11192-016-1889-2.
  • Twining, P., R. S. Heller, M. Nussbaum, and C. C. Tsai. 2017. “Some Guidance on Conducting and Reporting Qualitative Studies.” Computers & Education 106: A1–A9. doi:10.1016/j.compedu.2016.12.002.
  • Valtonen, T., S. López-Pernas, M. Saqr, H. Vartiainen, E. T. Sointu, and M. Tedre. 2022. “The Nature and Building Blocks of Educational Technology Research.” Computers in Human Behavior 128: 107123. doi:10.1016/j.chb.2021.107123.
  • Watson, R., A. Younas, S. Rehman, and P. Ali. 2021. “Clarivate Listed Nursing Journals in 2020: What They Publish and how They Measure use of Social Media.” medRxiv. doi:10.1101/2021.04.19.21255561.
  • Web of Science. 2021. “Essential Science Indicators - Highly Cited Papers.” https://webofscience.help.clarivate.com/en-us/Content/esi-highly-cited-papers.html.
  • West, R., and J. Borup. 2014. “An Analysis of a Decade of Research in 10 Instructional Design and Technology Journals.” British Journal of Educational Technology 45 (4): 545–556. doi:10.1111/bjet.12081.
  • Williamson, B., J. Potter, and R. Eynon. 2019. “New Research Problems and Agendas in Learning, Media and Technology: The Editors’ Wishlist.” Learning, Media and Technology 44 (2): 87–91. doi:10.1080/17439884.2019.1614953.
  • Zawacki-Richter, O., U. Alturki, and A. Aldraiweesh. 2017. “Review and Content Analysis of the International Review of Research in Open and Distance/Distributed Learning (2000–2015).” International Review of Research in Open and Distributed Learning 18 (2): 1–26. doi:10.19173/irrodl.v18i2.2806.
  • Zawacki-Richter, O., and C. Latchem. 2018. “Exploring Four Decades of Research in Computers & Education.” Computers & Education 122: 136–152. doi:10.1016/j.compedu.2018.04.001.
  • Zawacki-Richter, O., and S. Naidu. 2016. “Mapping Research Trends from 35 Years of Publications in Distance Education.” Distance Education 37 (3): 245–269. doi:10.1080/01587919.2016.1185079.
  • Zurita, G., J. M. Merigó, and V. Lobos-Ossandón. 2016. “A Bibliometric Analysis of Journals in Educational Research.” Proceedings of the World Congress on Engineering 1: 403–408.