
The Representation of Altruistic and Egoistic Motivations in Popular Music over 60 Years

Pages 59-78 | Published online: 20 Mar 2018
 

Abstract

Content analyses examining the values expressed in popular music have been predominantly ad hoc, limited to antisocial themes, and lacking a comprehensive theoretical coding scheme. We applied a content analytic scheme grounded in the model of intuitive morality and exemplars (MIME) to examine altruistic and egoistic values in popular music over 60 years. Findings show (a) more frequent representation of egoistic than altruistic motivations, and (b) that the profusion of egoistic motivations focused mostly on romantic relationships (in adult-targeted music) but also on platonic relationships (in child-targeted music).

Notes

1. We recognize that there is debate on this issue. Krippendorff’s alpha and Scott’s pi have become the standard for many researchers conducting content analyses in communication, and we do not mean to diminish either coefficient’s contributions. At the same time, debate exists over whether either provides a satisfactory indicator of reliability in circumstances such as those found in the present data, where raters had the option to code multiple motivations within one unit of analysis. Challenges to the appropriateness of Krippendorff’s alpha and Scott’s pi for these types of estimates have been explicated by Zhao (2011), who identified 18 paradoxes that hinder the reliability of these coefficients and three underlying assumptions about the coding procedures to which they are applied.

At least four of the limitations of Krippendorff’s alpha and Scott’s pi identified by Zhao apply to our data. The first set of issues concerns the falsely attenuated coefficients that can result from either statistic even when coders demonstrate high agreement (e.g., the agreement in our data, which ranges from 83% to 100%; Paradox 1 and Paradox 8). We structured our codesheet so that each unit of analysis is a row and each motivation is a column. This enabled coders to identify multiple motivations within one unit of analysis. The result was 11 cells for every unit of analysis (11 categories × 5,323 verses = 58,553 cells). As such, even when one motivation was identified, there were still 10 blank cells in that unit of analysis. An unfortunate by-product of this structuring is that coefficients (i.e., Krippendorff’s or Scott’s) that attempt to correct for chance agreement unduly “punish” reliability estimates when data require a coder to make a large number of decisions for one coding unit. This is because both coefficients treat the observed coding distribution as though the only opportunity for coders to agree honestly is in cells that indicate presence (see Assumption 1 on p. 16 of Zhao, 2011), and as though all cells indicating absence reflect chance agreement. We would argue that when coders agree that a motivation is present within one unit, there are 10 additional opportunities to honestly agree that the other motivations were absent. This is unlike coding situations where a blank cell means the absence of any content in the unit of analysis relevant to the study. The problem is compounded when N is large.
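As a sketch of this codesheet structure, the following computes per-motivation percent agreement over an 11-column layout. The codings are simulated for illustration only: the ~5% presence rate and ~3% disagreement rate are assumptions chosen to mirror the skew described above, not the study’s actual data.

```python
import numpy as np

# Hypothetical sketch of the codesheet described above: one row per verse
# (unit of analysis), one 0/1 column per motivation category.
rng = np.random.default_rng(0)
n_verses, n_motivations = 5_323, 11

# Simulated codings from two coders; presence is deliberately rare (~5%)
# to mirror the skewed distribution discussed in the text.
coder_a = (rng.random((n_verses, n_motivations)) < 0.05).astype(int)
coder_b = coder_a.copy()
flips = rng.random((n_verses, n_motivations)) < 0.03  # ~3% simulated disagreement
coder_b[flips] = 1 - coder_b[flips]

# Percent agreement, computed separately for each motivation column.
pct_agreement = (coder_a == coder_b).mean(axis=0)
print(pct_agreement.round(3))
```

Each column’s agreement counts matched absence cells as agreements, which is exactly the behavior that chance-corrected coefficients discount.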

Consider, for instance, that in our data, coders indicated the presence of 2,635 motivations within the 5,323 units of analysis, and these were coded as 1. This left 55,918 cells with the value of 99. Calculating either Krippendorff’s alpha or Scott’s pi on this large number of cells containing the value 99 would result in false attenuation, despite the fact that agreement (as indicated by percent agreement) is high for all motivations. This is because both coefficients would assume that all agreements on the value 99 were due purely to chance and, therefore, that agreement could be calculated only on cells where at least one coder indicated presence. In effect, both coefficients would treat our data as equivalent to a bag of 58,553 marbles: 55,918 black marbles and 2,635 white marbles. Both coefficients would assume that, “when almost all marbles are of the same color, the coders have close to 100% probability agreeing by chance, and close to 0% opportunities to code honestly” (Zhao, 2011, p. 20). In our case, our data are “punished” with an inflated chance-agreement estimate because both coefficients assume that coders could code honestly only when a white marble (i.e., a motivation) is present.
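The chance-agreement arithmetic behind this marble analogy can be made concrete. The sketch below uses the reported cell counts (2,635 present, 55,918 absent out of 58,553); the 95% observed agreement is a hypothetical figure chosen for illustration, since per-motivation agreement in the study ranged from 83% to 100%.

```python
# Reported cell counts from the study's codesheet.
total_cells = 58_553
present = 2_635
absent = total_cells - present  # 55,918

p_present = present / total_cells
p_absent = absent / total_cells

# Scott's pi estimates chance agreement from squared pooled marginal
# proportions, so a skewed distribution drives it very high.
chance_agreement = p_present**2 + p_absent**2
print(round(chance_agreement, 3))  # 0.914

observed_agreement = 0.95  # hypothetical figure for illustration
scotts_pi = (observed_agreement - chance_agreement) / (1 - chance_agreement)
print(round(scotts_pi, 3))  # 0.418, heavily attenuated despite 95% raw agreement
```

With chance agreement assumed to be about 91%, even near-perfect raw agreement leaves little room for the coefficient to register reliable coding.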

This underlying assumption of Krippendorff’s alpha and Scott’s pi is seriously inconsistent with our coding scheme, which places great value on coders who can (a) recognize the presence of a motivation; and (b) recognize the absence of a motivation (i.e., discriminant validity). Both skills are necessary for coding intuitive motivations, yet the equations that Krippendorff’s alpha and Scott’s pi rely on are inconsistent with this.

A second issue indicated by Zhao (2011), also present in our data, is that random coding often produces higher Krippendorff’s alpha and Scott’s pi coefficients than honest work (Paradox 9). Zhao offers an example (p. 12) in which coders are asked to code 60 television segments for the presence of subliminal messages. Fifty of the 60 actually contain a subliminal message. Zhao states:

One coder found the ads in all 60 segments, making 10 false alarms, while the other recognized only 40, calling 10 false negatives. The 40 positive agreements and 20 disagreements produce a 66.667% agreement…. While the instrument may seem adequate, especially considering the difficult task, Scott’s pi and Krippendorff’s alpha are both negative, at miserable −0.2 and −0.19. Now suppose we ask the coders to flip coins without looking at any television segments, ever. Their percent agreement is expectedly 50%, 16.667% lower than honest coding. … This totally dishonest coding, however, produces pi = 0 and alpha = 0.0083. (Zhao, 2011, p. 12)

Thus, honest coding produces higher percent agreement but lower alpha and pi, and the random, dishonest coding is rewarded most by Krippendorff’s alpha and Scott’s pi. This is because, again, both coefficients equate a detailed coding procedure to drawing marbles. Here, though, they assume that “coders put 50 black marbles and 10 white marbles into an urn, and drew randomly from the urn” (Zhao, 2011, p. 24; see also Assumptions 1 and 2). This underlying assumption implies that there is a finite number of motivations to be coded (i.e., marbles in the urn). Again, this is inconsistent with the underlying assumptions of our coding scheme, which require coding all 11 motivations in all 5,323 verses if need be. That is, the number of motivations observed in our data (2,635) could have ranged from a maximum of 58,553, if all 11 motivations were present in every verse, to zero, if none were present in any verse. The problems associated with this underlying assumption would therefore again result in a falsely attenuated Krippendorff’s alpha or Scott’s pi estimate if either were used with our data, and at a much larger scale given our large N.
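Zhao’s figures can be reproduced numerically. The sketch below implements percent agreement, Scott’s pi, and two-coder nominal Krippendorff’s alpha from their standard definitions (pooled-marginal expected agreement for pi; the coincidence-matrix form for alpha) and applies them to the 60-segment example quoted above.

```python
from collections import Counter
from itertools import permutations

def percent_agreement(a, b):
    return sum(x == y for x, y in zip(a, b)) / len(a)

def scotts_pi(a, b):
    ao = percent_agreement(a, b)
    marg = Counter(a) + Counter(b)  # marginals pooled across both coders
    n = 2 * len(a)
    ae = sum((c / n) ** 2 for c in marg.values())
    return (ao - ae) / (1 - ae)

def kripp_alpha(a, b):
    # Two-coder nominal alpha via the coincidence-matrix formulation.
    marg = Counter(a) + Counter(b)
    n = 2 * len(a)
    d_o = sum(x != y for x, y in zip(a, b)) / len(a)  # observed disagreement
    d_e = sum(marg[c] * marg[k] for c, k in permutations(marg, 2)) / (n * (n - 1))
    return 1 - d_o / d_e

# Zhao's example: coder A marks all 60 segments "present" (10 false alarms);
# coder B marks 40 "present" and 20 "absent" (10 false negatives).
a = [1] * 60
b = [1] * 40 + [0] * 20
print(round(percent_agreement(a, b), 3))  # 0.667
print(round(scotts_pi(a, b), 2))          # -0.2
print(round(kripp_alpha(a, b), 2))        # -0.19

# "Coin flip" coding with balanced marginals: 50% agreement by construction.
a2 = [1] * 30 + [0] * 30
b2 = [1] * 15 + [0] * 15 + [1] * 15 + [0] * 15
print(round(percent_agreement(a2, b2), 2))  # 0.5
print(round(scotts_pi(a2, b2), 2))          # 0.0
print(round(kripp_alpha(a2, b2), 4))        # 0.0083
```

The honest coders earn negative coefficients while the balanced "coin flip" pattern earns pi = 0 and a slightly positive alpha, matching the paradox Zhao describes.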

This leads to the next paradox affecting our data: a large N inflates alpha’s estimated chance agreement, thereby also falsely attenuating the estimate of intercoder agreement (Paradox 17). Put another way, higher chance agreement means that it is harder for Krippendorff’s alpha to reach acceptable levels (i.e., .80). Logically, then, the bigger the N, the harder it is for Krippendorff’s alpha to reach acceptable levels. This paradox is particularly damning for our data because of our large N. Notably, we sampled a large N to achieve higher generalizability and replicability. Yet using a coefficient such as Krippendorff’s alpha, which is supposed to be a “general indicator of reliability,” instead “systematically punishes [our] replicability” (Zhao, 2011, p. 15). Zhao argues that when researchers see a large N, they see more opportunities for honest coding to occur; when Krippendorff’s alpha encounters a large N, however, “it sees more marbles” in the urn and assumes this indicates more random agreement (p. 25). Although we agree that the larger the sample, the more chances there are for random agreement to occur, in our data the chance-agreement estimate is falsely inflated because we have 11 cells for each unit of analysis. If we simply computed Krippendorff’s alpha for the presence/absence of a motivation (i.e., any motivation) within a given verse, the alpha would be misleadingly high. In our data, 2,601 verses were coded as containing at least one motivation, which would result in 2,601 cells coded as 1 (present) and 2,722 cells coded as 99 (absent). However, this would be conceptually misleading because the resulting coefficient would measure only how well our coders could identify whether a verse contained any motivation. Given that simply identifying the presence of any motivation is not central to our study, it would be misleading to calculate Krippendorff’s alpha this way.
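For illustration, the arithmetic behind this verse-level alternative looks as follows. The verse counts are those reported above; the 95% observed agreement is a hypothetical figure, not a statistic from the study.

```python
# Verse-level presence/absence counts reported above.
n_verses = 5_323
present, absent = 2_601, 2_722

# Near-balanced marginals put the estimated chance agreement close to 0.5 ...
p_present = present / n_verses
chance_agreement = p_present**2 + (1 - p_present)**2
print(round(chance_agreement, 3))  # 0.5

# ... so any high observed agreement (hypothetical 95% here) yields a
# high chance-corrected coefficient.
observed_agreement = 0.95
coefficient = (observed_agreement - chance_agreement) / (1 - chance_agreement)
print(round(coefficient, 2))  # 0.9
```

The inflated coefficient would say nothing about the 11 specific motivations, which is why this shortcut would be conceptually misleading.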

In sum, we do not mean to diminish the utility of either coefficient when its boundary conditions are met. In fact, we agree with Zhao (2011) that both Scott’s pi and Krippendorff’s alpha may be useful under certain conditions. However, for the reasons above, as well as other problems in the assumptions and calculations of these coefficients noted by Zhao (2011), we believe that our data fall outside the conditions under which it is proper to use either coefficient.

2. We thank Reviewer 1 for pointing out this issue and suggesting the language for this limitation.
