A Study of Quantitative Content Analysis of Health Messages in U.S. Media From 1985 to 2005

Pages 387-396 | Published online: 30 Jul 2010

Abstract

Content analysis is a research method that was traditionally utilized by communication scholars, but as the study of media messages has grown, scholars in other fields have increasingly relied on the methodology. This paper reports on a systematic review of studies using quantitative content analysis methods to examine health messages in the mass media, excluding the Internet, from 1985 to 2005. We searched for health-related content analysis studies published in peer-reviewed journals, identifying 441 articles meeting inclusion criteria. We examined article attributes including theories used, topics, media type, and intercoder reliability measures, and looked at differences over time. Our findings show that studies focusing on health-related messages increased from 1985 to 2005. During this time, studies primarily examined magazines, television, and newspapers, with an emphasis on topics related to substance use, violence, sex, and obesity and body image. Results suggest that studies published in communication journals are significantly more likely to include intercoder reliability data and theory discussion. We recommend that all publications, regardless of discipline or impact factor, request the inclusion of intercoder reliability data reported for individual variables, and suggest that authors address theoretical concepts when appropriate. We also encourage authors to include the term “content analysis,” as well as media type and health topic studied, as keywords to make it easier to locate articles of interest when conducting literature searches.

Content analysis is a research method that, although defined in numerous ways, generally involves a “systematic and replicable” analysis of messages (Riffe, Lacy, & Fico, 1998, p. 20). The methodology originated with studies of newspaper content and was traditionally utilized in the communication field (Krippendorff, 2004). In the mid-1900s, scholars from other disciplines began using the method, and since then it has greatly expanded in popularity as a useful methodology for studying messages in mass media and other sources (Krippendorff, 2004).

This project began with an informal collection of articles to identify methods that had been used in previous content analysis studies and evolved into a systematic review of the literature to document characteristics of health-related quantitative (see Note 1) content analysis studies. Content analysis of health messages in the media is an important area of health communication research, yet little is known about publication trends and key features of published studies, including use of theory and media types examined. Although Kline (2006) published a paper concerning the study of health messages in the media, it is unclear how many articles were included in the sample, and the results mainly discussed findings concerning specific messages. Neuendorf (1990) also wrote about media messages and health, but, while that chapter provided important information about ideas and topics, it mainly discussed examples. More recent work by Neuendorf (2008) presented a systematic review similar to this one but with different search terms and sample, and reported solely on reliability statistics.

Understanding trends in health-related content analysis is important for assessing the development of the field. Our study provides a comprehensive search of multidisciplinary journals to assess the state of the field of quantitative content analysis of health-related messages in the media, excluding the Internet. Similar to others (e.g., Altheide, 1996), we define quantitative content analysis as the creation and use of predetermined categories for the purpose of understanding and describing media messages in a way that can be counted and quantified. Findings from qualitative methods of message analysis, including ethnographic (e.g., Altheide, 1996), discourse, rhetorical, semiotic, and narrative analysis, offer important information as well. However, while Krippendorff expresses the idea that all content analysis studies involve some qualitative interpretation, he also identifies specific qualitative methods as “alternative protocols” to traditional content analysis methods (2004, p. 16), as does Neuendorf (2002, pp. 4–5). Thus, we chose to focus on quantitative studies because they represent the majority of the work that has been done in this area and are more easily comparable because they use the same methodology. Also, because a focus on quantitative content analysis has been used in other similar research, it allows us to make comparisons with prior findings concerning content analysis studies appearing in communication journals (e.g., Kamhawi & Weaver, 2003; Riffe & Freitag, 1997). Below, we address the main points discussed in our paper.

Frequency of articles

Several papers have provided data concerning the use of content analysis methods in the communication literature. Studies have found that the proportion of research articles using content analysis methods in communication journals has ranged from 7% to 30% depending on the journals included and time period studied (Beck et al., 2004; Cooper, Potter, & Dupagne, 1994; Freimuth, Massett, & Meltzer, 2006; Kamhawi & Weaver, 2003; Riffe & Freitag, 1997). However, no study to date has systematically searched databases of multiple disciplines for articles using content analysis methods to examine health messages. To address this gap in the literature, our first research question is: What is the frequency of articles using quantitative content analysis methods and focusing on health-related messages in journals across all disciplines?

Topics and media types

Little research exists that documents the media types and topics studied using content analysis methods. We do know that content analysis studies published in Journalism and Mass Communication Quarterly from 1971 to 1995 focused on newspapers (47%), followed by television (24%) (Riffe & Freitag, 1997). We also know from another paper that a majority of studies about health issues have focused on newspapers and magazines (Kline, 2006). We found no research presenting data concerning topics studied; it would be helpful to the field to know whether certain health topics have received more attention (Beck et al., 2004) and where future research may be needed. Given this lack of knowledge concerning the topics and media types studied in health-related content analysis research, our second and third research questions are: What media types other than the Internet are studied in articles about health messages using quantitative content analysis methods? What topics are studied in articles about health messages using quantitative content analysis methods?

Use of theory

Content analysis studies often discuss theoretical principles, models, or frameworks to support the research, either by providing justification for how the messages under study could influence health policy or behavior or by informing study design. Studies of theory use in content analysis research published in communication and marketing journals have found that 6% and 28% of studies used theory in some way (Kolbe & Burnett, 1991; Riffe & Freitag, 1997). However, little is known about which theories are used and how frequently theory is used in content analysis studies focusing on health messages, including those appearing in non-communication journals. To address this gap in the literature, our fourth research question is: What are the frequency and names of theories discussed in articles about health messages using quantitative content analysis methods?

Reporting of intercoder reliability

Similar to studies of theory, what is known about intercoder reliability is based primarily on publications in communication journals. Researchers examining intercoder reliability have found 56% (Riffe & Freitag, 1997), 60% (Kolbe & Burnett, 1991), 62% (Neuendorf, 2008), and 69% (Lombard, Snyder-Duch, & Bracken, 2002) of content analysis articles reporting intercoder reliability statistics. Neuendorf (2008) examined several attributes of intercoder reliability statistics presented in health-related content analysis articles in her sample, including the use of different statistics. Of the 102 articles that she ultimately analyzed for use of reliability statistics, she found that 38% did not report reliability. These findings of limited use of recommended intercoder reliability statistics support the need for our fifth and final research question, which asks: What are the frequencies and types of intercoder reliability statistics presented in articles about health messages using quantitative content analysis methods?

Summary

Prior research provides important information concerning the popularity of content analysis as a research method and the traits of content analysis studies. However, the overwhelming focus on communication journals means that little is known about content analysis studies published in journals from other disciplines. Although we cite two references that focused on health-related studies, one mainly searched for articles in communication journals and provided little quantitative data to describe the studies examined (Kline, 2006), and the other reported data on reliability statistics only and covered a more restricted time period and smaller sample (Neuendorf, 2008).

Because there is limited knowledge about health-related content analysis studies published in journals from multiple disciplines, reporting on the characteristics of studies published in peer-reviewed journals across all disciplines allows us to offer a comprehensive summary of the status of the field. We examined articles reporting on content analysis studies of health messages in the mass media (excluding the Internet) published from 1985 through 2005 to achieve the following aims: (1) identify articles reporting health-related content analysis studies, (2) report on attributes of these published studies, (3) compare characteristics of studies by journal discipline and impact factor, and (4) identify trends over time.

METHODS

Our goal was to identify all studies using quantitative content analysis methods to examine health messages appearing in U.S. media that were published in journals of any discipline during the years 1985 through 2005 for the following media types: magazines, movies, music, newspapers, television, and video games. Given the complexity of stringing together search terms to identify such articles, we used a multistage process to locate articles, depicted in Figure 1 and described in detail below.

FIGURE 1 Flow chart of article selection.

Search Strategy

We began by performing multiple keyword searches in the following databases: PubMed, ComAbstracts, EBSCO (Academic Search Premier, ERIC, Social Science Abstracts), PsycINFO, and SCOPUS, all of which are comprehensive academic databases commonly used in literature reviews (e.g., Trifiletti, Gielen, Sleet, & Hopkins, 2005).

The first set of search terms was designed to locate any articles that included the term “content analysis” anywhere in the record. We conducted seven unique searches in each database, instructing the search engines to look for articles published between 1985 and 2005 using the term “content analysis” combined with terms for various media types appearing in any field of the record (i.e., not restricted to titles only) (see Note 2). These searches located 5,233 articles.
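To make the first-stage search procedure concrete, the sketch below (not the authors' actual code) shows how the seven query strings described above and listed in Note 2 could be assembled programmatically; running each query and applying the 1985–2005 date limit would still happen through each database's own interface.

```python
# Illustrative sketch only: assembling the seven first-stage queries that pair
# "content analysis" with each media-type term group (see Note 2).
MEDIA_TERM_GROUPS = [
    "advertisements",
    '"video game*"',
    "(movies OR film OR videos)",
    "(magazine* OR periodical*)",
    "(music OR lyrics OR songs)",
    "(TV OR televis*)",
    "(news* OR press)",
]

def build_first_stage_queries():
    """Return one keyword string per media-type group; the 1985-2005 date limit
    is assumed to be set separately in each database's search form."""
    return [f'"content analysis" AND {group}' for group in MEDIA_TERM_GROUPS]

for query in build_first_stage_queries():
    print(query)
```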

Because not all articles used the phrase “content analysis,” we also conducted seven unique searches using a second set of keywords in each of the four databases that allowed complex search strings (PubMed, EBSCO, PsycINFO, and SCOPUS) (see Note 3). As these searches located tens of thousands of articles, we restricted this second set of searches to article titles only, and as with the first set of searches, we limited the search engines to look for articles published between 1985 and 2005. These searches located 7,209 articles.

Article Selection

We then applied inclusion and exclusion criteria to the 12,442 articles identified in the searches described above. First, we chose to include any study that had a focus on health. We reviewed titles manually to look for the presence of health terms listed in the Appendix, keeping 4,585 articles that had at least one of the terms in the title (see Note 4). We then reviewed article abstracts and, in some cases, the articles themselves, to apply exclusion criteria to each article. Our exclusion criteria and the exclusion process are described next, with exclusion criteria identified in italic text.

We first excluded all articles that did not study media messages, leaving 3,125 articles. Experiments and media interventions were included in this excluded group. We then excluded any articles that were not in English or examined non-U.S. media only.

We then applied the final exclusion criteria to the remaining 2,227 articles. We did not include commentaries or letters to the editor, essay or discussion papers, articles of 1 page or less, or articles in journals not using peer review (n = 117). We excluded articles studying messages from the Internet or from sources not considered mass media, such as advertisements in professional publications and educational materials (n = 217) (see Note 5). We excluded articles that did not analyze media content, presented a discussion of content without specifying research methods, or did not report details of the content analysis in the paper (n = 382). We also excluded studies that did not assess messages (i.e., only looked at the length of an article), and studies using a group to provide a content rating for media titles through survey methods. We also excluded papers that used nonquantitative content analysis methods (n = 120; 61 unique studies), such as semiotic, textual, or ethnographic analysis. We did not exclude articles using both quantitative and qualitative methods. If an article did not specify that it used quantitative content analysis methods, we included it if it used at least some predefined categories to analyze content and reported some quantitative findings.

Finally, if an article met all criteria, we excluded it if it was a duplicate article that appeared in another search (n = 1,013) (see Note 6), resulting in 378 articles. We searched the references of these articles, as well as those of Kline (2006), a process often used in literature reviews (e.g., Trifiletti et al., 2005), to locate any articles missed in the database searches (see Note 7). This process located 63 additional articles, resulting in a final sample size of 441 (see Note 8). The first author primarily conducted this process and discussed with the second author any questions concerning whether an article should be included.

Article Review

We used content analysis methods to analyze article traits and content. We recorded the following information for each article: journal title, media type(s), topic(s), theory or theories used, and use of intercoder reliability statistics.

The journal title was recorded in a text field. Media type categories included television, movies, magazines, music videos, video games, music, and newspapers. We created a media summary variable to indicate the total number of media types examined in the study. We also recorded content type. Content type was characterized as follows: (0) default (entertainment programs for television, articles for magazines and newspapers, songs for music and music videos, movie for movies), (1) news (for media formats other than newspapers), (2) advertisements, (3) public service announcements, (4) advertisement and articles/programs, and (9) other.

The specified topics we marked while coding, all of which were health related, were violence, tobacco, alcohol, illegal drugs, prescription drugs, obesity (including nutrition and physical activity), body image, sexual behavior, injury, AIDS, cancer (including screening tests), aging, and other. Once data were collected, we examined the topics in “other” and created additional categories where we saw repeated topics. These categories were death and disability, mental health, health providers and organizations, genetics, and women's health (including topics like breast implants).

We also reviewed the articles for theory use, recording a positive response if a theory, framework, model, or guiding principle was explicitly mentioned or referenced. When coding for theory name, categories were: Social Cognitive Theory, Cultivation Theory, Framing, Agenda Setting Theory, Desensitization, Socialization, and other.

To assess reporting of intercoder reliability, we had one variable that captured whether or not intercoder reliability was reported. If the article reported a numeric assessment of intercoder reliability, the article was marked as reporting intercoder reliability. Articles with one coder and that reported test–retest reliability were coded as “no” as they did not report intercoder reliability, and articles that made statements such as “coders mostly agreed” were also coded as “no” since no numeric assessment was provided. We recorded type of coefficients used and whether intercoder reliability statistics were provided for all variables. We considered a number with a percent symbol after it as simple agreement unless identified as a different statistic. We marked numbers presented as decimals as unidentified unless a specific name was provided for the coefficient. If the article did not report numeric values for intercoder reliability, we noted whether the article discussed any attempt to evaluate agreement.

We also collected three journal-specific measures from sources outside of the article: impact factor, journal discipline, and the start (and, if applicable, end) year of journal publication. Because we found a large number of journals identified as multidisciplinary, we used title words to group journals. Any journal title with the word communication, journalism, media, or newspaper was classified as Communication, including Health Communication and Journal of Health Communication. Journal titles with the word psychology or psychiatry were placed in the Psychology group, and titles with the word sociology were classified as Sociology journals. If the title had the word aging or gerontology, the journal discipline was classified as Aging. All journal titles with the word health, medical, medicine, disability, disease, or illness, or with the name of a specific medical practice type (such as pediatrics) or disease, were classified as Health. Journals placed in the other category included titles such as Adolescence, Sex Roles, and Justice Quarterly. We determined the publication years for each journal title so that we could weight trends over time by the number of journals in our sample each year. We divided the number of articles with an attribute of interest by the number of journals in the sample published during that year, and then multiplied by 100 to enable reporting in whole numbers rather than decimals.
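The weighting just described is simple enough to state directly: for each year, the weighted value is 100 times the number of sampled articles with the attribute of interest divided by the number of sampled journal titles actively publishing that year. A minimal sketch of that calculation, with hypothetical article counts, follows.

```python
# Minimal sketch of the per-year weighting described above (our reading of the text,
# not the authors' code). Inputs map year -> count.
def weighted_rate(articles_with_attribute, active_journals):
    """Articles with the attribute per 100 active journal titles, by year."""
    return {
        year: 100 * articles_with_attribute.get(year, 0) / active_journals[year]
        for year in active_journals
    }

# Hypothetical illustration: 13 qualifying articles against 130 active journal titles
# in 1985, and 35 against 173 in 2005.
print(weighted_rate({1985: 13, 2005: 35}, {1985: 130, 2005: 173}))
# {1985: 10.0, 2005: 20.23...}
```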

Intercoder Reliability

The two authors served as article coders. We created a spreadsheet using Microsoft Excel to record article data (Microsoft, 2002). Both coders reviewed a random sample of 85 articles, which represented almost 20% of the total number of articles. Responses were compared and intercoder reliability was calculated using Cohen's kappa. Intercoder reliability ranged from .74 to 1.0 for Cohen's kappa coefficient. For the seven measures with kappa scores between .74 and .8, simple agreement was 87% or higher.
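As a concrete illustration of this kind of reliability check, the sketch below (not the authors' code; the category values are hypothetical) computes simple percent agreement and Cohen's kappa for one coded variable across a double-coded sample, here using scikit-learn's implementation of kappa.

```python
# Illustrative sketch: intercoder reliability for one variable on a double-coded sample.
from sklearn.metrics import cohen_kappa_score

# Hypothetical codes from two coders for a yes/no variable (1 = yes, 0 = no).
coder1 = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
coder2 = [1, 1, 0, 1, 1, 1, 1, 0, 1, 0]

agreement = sum(a == b for a, b in zip(coder1, coder2)) / len(coder1)
kappa = cohen_kappa_score(coder1, coder2)
print(f"simple agreement = {agreement:.2f}, Cohen's kappa = {kappa:.2f}")
# simple agreement = 0.80, Cohen's kappa = 0.52
```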

Analysis

For the purpose of making comparisons among variables, we collapsed journals into three categories: communication, health, and other (combining the psychology, sociology, aging, and other journals). We created groups for journal impact factor (<1, 1–<2, 2+, missing). We calculated frequencies, used chi-square tests to test for differences across categories, and used chi-square trend tests to look at trends over time. We conducted analysis using STATA version 8.2 (StataCorp, 2002).
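For readers who want to reproduce this kind of comparison, a minimal sketch follows; the counts are hypothetical placeholders, and scipy's chi-square test of independence stands in for the chi-square tests described above (the trend tests over time would require a separate test for trend).

```python
# Illustrative sketch: chi-square test for a difference in reliability reporting
# across the three collapsed journal groups. Counts are hypothetical, not study data.
from scipy.stats import chi2_contingency

table = [
    # communication, health, other
    [95, 70, 144],   # reported intercoder reliability
    [15, 68, 49],    # did not report intercoder reliability
]
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p_value:.4g}")
```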

RESULTS

Table 1 presents data collected from the content analysis of articles. There were 182 unique journal titles for the 441 journal articles in the study. Of these journal titles, 130 were active in 1985, and 173 were active in 2005. The journal title with the largest number of articles was Sex Roles (n = 23), followed by Journal of Broadcasting & Electronic Media (n = 21), American Journal of Public Health (n = 17), Journal of Health Communication (n = 14), and Journal of Communication (n = 14). When comparing journal disciplines, 31% of articles (n = 138) were published in health journals, compared to 25% in communication journals and 44% in other journals. Impact factors for articles in journals with identified impact factors (n = 334) ranged from .08 to 51.3 (mean = 2.36, median = 1.06); 35% of articles were published in journals with an impact factor less than 1. Health journals were more likely to have a higher impact factor than communication and other journals (p < .0001).

TABLE 1 Characteristics of Content Analysis Articles Included in the Study (n = 441)

In answering research question 2, we found that content analysis studies looking at messages in magazines (35%), in newspapers (31%), and on television (29%) were most common. Of all articles, 32 (7%) analyzed two media types, and 7 (2%) analyzed three or more media types. The most frequent combination was newspapers and magazines. In many cases, content analyses of each media type focused on the default unit of analysis as defined earlier (i.e., articles for magazines and newspapers). Notable exceptions occurred with magazines and television. Forty-five percent of magazine studies focused on advertisements only, while 16% examined a combination of articles and ads or other items (such as the front cover). For television, 27% of studies analyzed only advertisements, and 14% of studies examined television news.

In addressing research question 3, we found that the majority of the content analyses focused on substance use (22%; including alcohol, drugs, and tobacco), violence (20%), sex (16%), and body image/obesity (15%). Six percent of articles had topics classified as “other,” which included issues such as heart disease and diabetes. The number of topics in articles ranged from 1 to 11, with most articles focusing on a single topic (mean = 1.3, median = 1).

In answering research question 4, we found that 55% of the articles mentioned or cited at least one theory or model. Examples of theories appearing in the “other” category include Priming, Schema and Script Theories (n = 13), Feminist Theory (n = 6), Hegemonic Masculinity/Hypermasculinity (n = 4), and Attribution Theory (n = 2). Commonly used health behavior theories such as the Transtheoretical Model (n = 3) and Health Belief Model (n = 2) were rarely observed, and we found no occurrences of the Theory of Reasoned Action/Planned Behavior. Several studies used multiple theories, with 17% using two theories and 3% using three or more. Two common combinations were Social Learning Theory and Cultivation Theory, and Agenda Setting and Framing.

For research question 5, we found that 70% of all 441 articles reported intercoder reliability statistics. For the 309 studies reporting intercoder reliability, the most common statistics used were simple agreement (45%), Cohen's kappa (17%), and Scott's pi (11%). Eighteen percent of studies reported coefficients as decimal numbers (e.g., .75) but did not identify the coefficient, and 13% used a statistic classified as “other.” For the 132 studies that did not report a numeric assessment of intercoder reliability, 32% (n = 42) did offer some attempt at addressing reliability. Such efforts included having a single author conduct a test–retest reliability check, stating that disagreements were discussed until consensus was reached, mentioning that agreement was assessed but not providing numbers, or reporting that reliability was assessed in an earlier study.

Eight studies (2%) reported using a computer to conduct the content analysis. Of these, three reported no reliability, and four used a human coder to examine reliability between the computer and the coder. One study used human coders to analyze some content and reported intercoder reliability for those categories, and then used a computer to analyze other content. Twenty-four studies (5%) reported using only one coder for the entire sample. Of these, two still reported intercoder reliability, which was assessed by coding a different sample with more than one coder.

We assessed whether trends appeared with respect to journal discipline (Table 2) for use of theory and presentation of reliability statistics. Major differences included a greater use of theory (p < .0001) and presentation of intercoder reliability statistics (p < .0001) in articles from communication and other journals compared to health journals. We also compared characteristics of journal articles by journal impact factor. Because of the correlation between journal type and impact factor mentioned earlier, we examined article attributes by impact factor within each journal group using a three-way chi-square test. We found no significant trends for theory use or reporting of reliability statistics, and few trends of interest for topics and media type.

TABLE 2 Comparison of Article Attributes Across Journal Discipline (n = 441)

We examined whether attributes of content analysis studies have changed over time. Figure 2 illustrates the number of articles published over time, weighted by the number of active journal titles in each year. The number of health-related studies increased across all three journal types from 1985 to 2005.

FIGURE 2 Number of health-related content analysis articles by journal discipline over time (1985–2005) (n = 441).

The use of theory increased among all journal types over time (p < .0001), with the proportion of articles in health journals (p = .02) experiencing less of a change than the proportion of articles presenting theory in communication (p < .0001) or other (p = .0001) journals. There was no significant change in the proportion of articles reporting reliability statistics over time across all articles (p = .12), but the proportion of articles in health journals (p = .002) had a greater increase than those in communication (p = .24) or other journals (p = .22). There was little change in the likelihood that articles reported intercoder reliability separately for individual categories for all articles over time (p = .23). For all articles, there was a slight decrease in the proportion of studies of magazines over time (p = .04), a slight increase for newspaper studies (p = .03), and an increase in the study of movies over time (p = .007). For topics, when examining articles from any type of journal, there was a decrease in the proportion of articles about sex (p = .001) and aging (p = .002).

DISCUSSION

Our results confirm that content analysis studies of health messages have become increasingly popular, with a growing number of articles published in journals of all disciplines. Based on our findings, there are areas where additional content analysis research may be warranted. Numbers alone cannot justify the need for additional research, but certain topic areas and media types may deserve additional attention depending on the samples used, specific content analyzed, theoretical framework applied, change in content over time, and the needs of the audience using the data.

Our study found magazines and television to be the most frequently studied media types, followed by newspapers and movies. Video games and music were the least commonly analyzed media types. Our findings are relatively consistent with prior research that has found a large number of studies focusing on newspapers, television, and magazines (Kline, 2006; Riffe & Freitag, 1997). While the Internet is an important source of health information, we excluded Internet studies because the current version of the Internet appeared in the mid-1990s (Moschovitis, Poole, Schuyler, & Senft, 1999), and evaluation of health information on Internet sites often uses techniques other than traditional content analysis methods. Although we did not look at Internet studies in this paper, Internet research is increasing (Tomasello, 2001), and we emphasize that while the Internet poses a challenge to researchers using content analysis given its ever-changing content, it is crucial to study the extent and nature of Internet messages given the important role of the Internet in providing health information to its many users.

While many studies examined topics like violence and obesity, fewer studies explored illness-related topics such as AIDS and diabetes. Increasingly, people are turning to media to obtain information about a variety of health conditions. Understanding how messages about a range of illnesses and treatments are portrayed may help public health practitioners design intervention programs, and may benefit health providers by informing them about the assumptions or knowledge people may have about illnesses (see Note 9).

Prior research examining content analysis studies has found fairly low rates of theory use (6% and 28%) (Kolbe & Burnett, 1991; Riffe & Freitag, 1997). In our study, we found higher rates of theory use, with articles using theory most likely to appear in communication journals and least likely to appear in health-related journals. Others have also found low rates of theory use in health research (e.g., Trifiletti et al., 2005). For instance, 36% of a sample (n = 193) of health behavior research articles used theory (Painter, Borba, Hynes, Mays, & Glanz, 2008). Possible explanations include journal word limits, which are more restrictive in many health journals, and the possibility that communication journals are more likely to require theoretical discussion. However, Kamhawi and Weaver (2003) found that only 39% of research articles in communication journals mentioned theory, so we are uncertain why we found higher rates of theory use in communication journals compared to other types of journals.

Studies lacking theoretical discussions often referred to the potential of media to provide information and shape values and norms, or cited findings from prior media effects research. We also found that authors could provide more detail regarding theory use, such as explicitly describing how theory informed the content categories designed for the content analysis. We found limited use of important health behavior theories that are widely used in the health field, such as the Health Belief Model and the Theory of Reasoned Action/Planned Behavior. Future research could address this important gap by using health behavior theories to inform content analysis studies and explain how media messages may influence their target audience.

Inclusion of theory in content analysis studies is not necessary, but theory can help provide a solid rationale for analyzing media messages, offer a framework to guide research questions and methods, and guide the testing of message effects on behavior or policy (Manganello & Fishbein, 2008). Theory can also inform the assessment of construct validity (Riffe et al., 1998) and can explain why messages might have an influence, supporting the significance of the findings. We recommend that study goals inform researchers as to whether theory is required. For instance, a study examining nutritional content and health claims in television advertisements after a 1994 Federal Trade Commission change in food advertising policy was designed to evaluate changes in advertising after the implementation of the policy rather than to consider effects of content (Byrd-Bredbenner & Grasso, 2001), suggesting that theory may not be necessary. However, studies analyzing content with the assumption that the content is influencing policy or behavior can benefit from the discussion of theoretical principles. As some have suggested (Freimuth et al., 2006; Trifiletti et al., 2005), a field or body of literature can be advanced when theory is incorporated into research studies. Thus, we encourage researchers to identify ways they can further the field through attention to theory.

Intercoder reliability helps to establish the quality of data collected by confirming that the coding tool assesses messages in a consistent manner, which is important for enabling reproduction of the study and ensuring that measurement is consistent over time (Krippendorff, 2004; Riffe et al., 1998). In general, we found higher rates of reliability reporting than previous studies (Kolbe & Burnett, 1991; Lombard et al., 2002; Neuendorf, 2008; Riffe & Freitag, 1997). We also found that reliability statistics were most commonly reported in communication journals, followed by other journals and then health journals. Because content analysis methodology originated in the communication field, it is not surprising that differences might exist between communication journals and all other journals with respect to intercoder reliability reporting (Neuendorf, 2008). However, reporting of intercoder reliability is required according to any content analysis textbook (e.g., Krippendorff, 2004), so differences across journals or disciplines should not dictate whether intercoder reliability statistics are provided.

Reliability measures should be reported for separate variables, rather than as an overall score for the entire coding tool, to demonstrate the strength of the coding instrument (Neuendorf, 2002, 2008), but we found this did not always occur. A coding tool consists of several unique coding categories, and each unique category, or variable, must be assessed for reliability separately. Content analysis scholars suggest that taking an average of variables to calculate a reliability measure for the coding tool as a whole does not offer an accurate assessment of whether all of the results can be considered reliable and potentially valid. For example, as Krippendorff (2004) states, some variables, such as the page number of an article, might lend themselves to being extremely reliable, while other variables might assess more subjective concepts that are less likely to result in high agreement.

Similar to others (Neuendorf, 2008), we found the most common reliability statistic presented was simple agreement, which does not account for agreement by chance and is not a sufficient measure according to Krippendorff (2004). We also found that many authors did not identify which reliability coefficient they provided. There has been much debate concerning which reliability statistics should be provided and what cutoff levels should be used (Krippendorff, 2004), so we encourage researchers to become familiar with what reliability statistics are available and what levels of each statistic indicate adequate intercoder reliability (see Note 10). In addition, at least two coders are required for any content analysis using human coders in order to assess intercoder reliability (e.g., Neuendorf, 2008; Riffe et al., 1998). Several of the studies reviewed reported having only one coder, and while some assessed reliability by recoding materials after a specified period of time, this offers a measure of stability, which is “the weakest form of reliability” (Krippendorff, 2004, p. 215).
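The contrast between simple agreement and a chance-corrected coefficient is easy to demonstrate. In the hypothetical sketch below, two variables both show 90% agreement between two coders, yet the skewed variable earns a kappa near zero while the balanced variable earns a high kappa, which is why per-variable, chance-corrected statistics are the more informative report.

```python
# Illustrative sketch (hypothetical data): why simple agreement can overstate
# reliability on a skewed variable, and why chance-corrected coefficients such as
# Cohen's kappa should be reported for each variable separately.
from collections import Counter

def agreement(x, y):
    return sum(a == b for a, b in zip(x, y)) / len(x)

def cohens_kappa(x, y):
    n = len(x)
    po = agreement(x, y)                                            # observed agreement
    cx, cy = Counter(x), Counter(y)
    pe = sum((cx[k] / n) * (cy[k] / n) for k in set(cx) | set(cy))  # chance agreement
    return (po - pe) / (1 - pe)

# Variable 1: almost every unit coded 0 by both coders (an easy, skewed category).
v1_a = [0] * 18 + [1, 0]
v1_b = [0] * 18 + [0, 1]
# Variable 2: a balanced category with the same raw agreement.
v2_a = [0] * 10 + [1] * 10
v2_b = [0] * 9 + [1, 0] + [1] * 9

for name, x, y in [("skewed variable", v1_a, v1_b), ("balanced variable", v2_a, v2_b)]:
    print(name, f"agreement = {agreement(x, y):.2f}", f"kappa = {cohens_kappa(x, y):.2f}")
# skewed variable: agreement = 0.90, kappa = -0.05
# balanced variable: agreement = 0.90, kappa = 0.80
```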

Given these findings, we suggest that researchers ensure that intercoder reliability is determined when conducting a content analysis study and provide appropriate reliability statistics when writing up their research. More consistent requirements for articles reporting content analysis research in journals of all disciplines will ensure that researchers maintain rigorous standards when using and describing this methodology, and we encourage journals to require the inclusion of reliability statistics.

While our study provides an important assessment of the state of content analysis of health messages, there are limitations. We did not assess articles for sampling methods, unitizing, and analysis statistics used, all of which may be important indicators for trends in the field and are discussed at length in several content analysis books (Krippendorff, 2004; Neuendorf, 2002; Riffe et al., 1998). We also did not assess validity, which is an important aspect of content analysis research (Krippendorff, 2004; Riffe et al., 1998).

Another limitation is that we may have missed articles given the methods used for article identification, as it was difficult to create a streamlined search process that was both inclusive and efficient. For instance, 63 articles that were included in the study were located via bibliography searches as opposed to search terms, suggesting the search terms were not all inclusive. However, we believe the sample we reviewed is representative, and any articles not identified for inclusion in the study were missed in a systematic manner. Given the difficulties we had in the search process, we recommend that authors use the keyword “content analysis,” as well as keywords that identify the main topic and media type, to enable easier identification of prior research in the field.

Content analysis of health messages has grown in popularity. While much has been accomplished, we identified potential content areas or media formats requiring more attention, and we observed that additional research providing direct links between content studied and health policy or behavior outcomes is also needed. Future studies could improve what is known by informing their work with theory and could better identify reliability assessment methods and measures. We also suggest that authors provide detailed descriptions of methods used. We often found confusing or no information provided about who coders were, how many coders analyzed messages, what intercoder reliability statistic was being provided, and what percent of the sample was coded for reliability assessment. Identifying the type of content analysis used would also be useful (i.e., quantitative vs. qualitative). Clearly providing these details will allow readers to assess the strength of the methods used for message analysis, and to understand which findings might be limited in some way. In addition, it will provide researchers with methodological information that may help inform their own research (Jordan, Kunkel, Manganello, & Fishbein, 2008). Given the wide range of disciplines involved with this research, it is important to ensure that scholars maintain consistent standards for reporting on studies using content analysis methods and that efforts are made to advance the field with future research.

Notes

1. Throughout the paper, we mean quantitative content analysis when we say content analysis unless otherwise specified.

2. “Content analysis” and advertisements; “content analysis” and “video game*”; “content analysis” and (movies OR film OR videos); “content analysis” and (magazine* OR periodical*); “content analysis” and (music OR lyrics OR songs); “content analysis” and (TV OR televis*); “content analysis” and (news* OR press).

3. (Video game*) AND (advertis* OR ads OR message* OR depict* OR portray* OR content OR popular* OR image* OR character); (movies OR films OR videos) AND (advertis* OR ads OR message* OR depict* OR portray* OR content OR popular* OR image* OR scenes); (magazine* OR periodical*) AND (advertis* OR ads OR message* OR depict* OR portray* OR content OR popular* OR image* OR coverage OR article* OR present*); (music OR lyrics OR songs) AND (advertis* OR ads OR message* OR depict* OR portray* OR content OR popular* OR image*); (TV OR televis*) AND (advertis* OR ads OR message* OR depict* OR portray* OR content OR popular* OR image* OR scenes OR prime-time OR program*); (news* OR press) AND (advertis* OR ads OR message* OR depict* OR portray* OR content OR popular* OR image* OR coverage OR article* OR report* OR present*); (mass media) AND (advertis* OR ads OR message* OR depict* OR portray* OR content OR popular* OR image* OR coverage OR article* OR present* OR program* OR report*).

4. Because of the complex nature of the search terms used, and the large number of health topic keywords that we wanted to apply, we were not able to incorporate health keywords directly into the database searches.

5. We excluded personal advertisements and obituaries because we did not consider them mass media messages.

6. Because of the large number of articles identified in the searches, we only accounted for duplicates at this point. Thus, if an article about media in France was found in two different searches, it would have been excluded in both cases at the earlier step of excluding non-U.S. studies, as opposed to being excluded here as a duplicate study.

7. We conducted a brief analysis to identify whether differences existed for key variables such as impact factor and intercoder reliability reporting for articles found in database searches compared to articles identified through references (14% of the sample). We found no significant differences.

8. A list of articles included in the study sample is available from the first author upon request.

9. Similar trends were found for frequency of media type and topic in the 61 qualitative studies we excluded from database searches, with one exception. The most commonly studied topics were violence, cancer, and sex, compared to violence, sex, and tobacco in the quantitative studies.

10. The following books offer a thorough discussion of intercoder reliability: Krippendorff (2004), Neuendorf (2002), and Riffe et al. (1998).

REFERENCES

  • Altheide, D. (1996). Qualitative media analysis. Newbury Park, CA: Sage.
  • Beck, C. S., Benitez, J. L., Edwards, A., Olson, A., Pai, A., & Torres, M. B. (2004). Enacting “Health Communication”: The field of health communication as constructed through publication in scholarly journals. Health Communication, 16, 475–492.
  • Byrd-Bredbenner, C., & Grasso, D. (2001). The effects of food advertising policy on televised nutrient content claims and health claims. Family Economics and Nutrition Review, 13, 37–49.
  • Cooper, R., Potter, W. J., & Dupagne, M. (1994). A status report on methods used in mass communication research. Journalism Educator, 48, 54–61.
  • Freimuth, V. S., Massett, H. A., & Meltzer, W. (2006). A descriptive analysis of 10 years of research published in the Journal of Health Communication. Journal of Health Communication, 11, 11–20.
  • Jordan, A., Kunkel, D., Manganello, J., & Fishbein, M. (Eds.). (2008). Media messages and public health: A decisions approach to content analysis. New York: Routledge.
  • Kamhawi, R., & Weaver, D. (2003). Mass communication research trends from 1980 to 1999. Journalism and Mass Communication Quarterly, 80, 7–27.
  • Kline, K. N. (2006). A decade of research on health content in the media: The focus on health challenges and sociocultural context and attendant informational and ideological problems. Journal of Health Communication, 11, 43–59.
  • Kolbe, R., & Burnett, M. (1991). Content-analysis research: An examination of applications with directives for improving research reliability and objectivity. Journal of Consumer Research, 18, 243–250.
  • Krippendorff, K. (2004). Content analysis: An introduction to its methodology. Thousand Oaks, CA: Sage.
  • Krippendorff, K. (2004). Reliability in content analysis: Some common misconceptions and recommendations. Human Communication Research, 30, 411–433.
  • Lombard, M., Snyder-Duch, J., & Bracken, C. C. (2002). Content analysis in mass communication: Assessment and reporting of intercoder reliability. Human Communication Research, 28(4), 587–604.
  • Manganello, J., & Fishbein, M. (2008). Using theory to inform content analysis. In A. Jordan, D. Kunkel, J. Manganello, & M. Fishbein (Eds.), Media messages and public health: A decisions approach to content analysis (pp. 3–14). New York: Routledge.
  • Microsoft. (2002). Microsoft Excel. Redmond, WA: Microsoft Corporation.
  • Moschovitis, C. J. P., Poole, H., Schuyler, T., & Senft, T. M. (1999). History of the Internet: A chronology, 1843 to the present. Oxford, UK: ABC-CLIO.
  • Neuendorf, K. A. (1990). Health images in the mass media. In E. B. Ray & L. Donohew (Eds.), Communication and health: Systems and applications (pp. 111–135). Mahwah, NJ: Lawrence Erlbaum Associates.
  • Neuendorf, K. A. (2002). The content analysis guidebook. Thousand Oaks, CA: Sage.
  • Neuendorf, K. A. (2008). Reliability for content analysis. In A. Jordan, D. Kunkel, J. Manganello, & M. Fishbein (Eds.), Media messages and public health: A decisions approach to content analysis (pp. 67–87). New York: Routledge.
  • Painter, J. E., Borba, C. P., Hynes, M., Mays, D., & Glanz, K. (2008). The use of theory in health behavior research from 2000 to 2005: A systematic review. Annals of Behavioral Medicine, 35, 358–362.
  • Riffe, D., & Freitag, A. (1997). A content analysis of content analyses: Twenty-five years of Journalism Quarterly. Journalism and Mass Communication Quarterly, 74, 873–882.
  • Riffe, D., Lacy, S., & Fico, F. G. (1998). Analyzing media messages: Using quantitative content analysis in research. Mahwah, NJ: Lawrence Erlbaum Associates.
  • Sribney, W. (1996). Does Stata provide a test for trend? http://www.stata.com/support/faqs/stat/trend.html
  • StataCorp. (2002). Stata statistical software (Version 8.2). College Station, TX: StataCorp LP.
  • Tomasello, T. (2001). The status of Internet-based research in five leading communication journals, 1994–1999. Journalism & Mass Communication Quarterly, 78, 659–674.
  • Trifiletti, L. B., Gielen, A. C., Sleet, D. A., & Hopkins, K. (2005). Behavioral and social science theories and models: Are they used in unintentional injury prevention research? Health Education Research, 20, 298–307.

APPENDIX: LIST OF HEALTH TERMS USED FOR SEARCHING ARTICLE TITLES

  • AIDS/HIV

  • Aging/aged/elders/elderly/older/mature/seniors/senior citizens

  • Alcohol/drinking

  • Body image/eating disorder/anorexia/bulimia/thin/attractiveness/body type/body orientation/body weight/beauty

  • Cancer/melanoma/oncology

  • Death/dying/die

  • Disability/deaf/hearing/blind/vision

  • Doctor/nurse/physician/hospital/clinic/pharmacy/home care/any other type of health practitioner (i.e., chiropractor, dentists) or service provider location (i.e., nursing home)

  • Domestic violence/intimate partner violence/battering

  • Drugs/prescription drugs/DTC/medications/any name of a drug/substance use

  • Genetic

  • Health/hygiene/medical

  • Heart attack/cardiovascular

  • Illness/disease

  • Immunization/vaccination

  • Injury/safety/crash/accident/seat belt/trauma

  • Managed care/health insurance/Medicare/Medicaid

  • Mental health/mental illness/psychiatry/psychology/depression/anxiety/OCD/schizophrenia/suicide/therapy

  • Menstruation/PMS

  • Obesity/weight/fat/nutrition/food/meal/diet/exercise/physical activity/muscle

  • Pregnancy/birth/breastfeeding/postpartum disorder

  • Rape/sexual abuse/molest

  • Sex/sexual/sexuality/STDs/contraception/erotic/pornography/explicit/virgin/“birds and bees” or other colloquial expressions of sex/condom/IUD/birth control pill

  • Surgery

  • Tobacco/smoking/cigarettes/cigars

  • Violence/aggression/aggressor/victim/antisocial/gun/firearm/shooting/murder/homicide/safe/safety/harm/hurt/child abuse/molester

Any specific disease or condition, body part, or medical test, such as arthritis/breast/continence/incontinence/diabetes/epilepsy/HPV/mammography/osteoporosis.
