Abstract
Although interactive features, such as comment sections, used to be rare on news websites, they are now the norm. Drawing on theoretical concepts of interactivity and convergence, we analyze whether diverse news sites provide and use interactive features in similar ways. We conduct a content analysis of 155 news websites to examine the presence and use of social media buttons, lists of hyperlinks, polls, comment sections, and mobile sites. Television news and newspaper websites are compared, as are local and more broadly targeted news sites. The findings provide little evidence of interactive convergence. Rather, the results reveal many differences in the adoption and use of interactive features based on medium and target audience. Reasons for the differences across these sites are discussed.
Disclosure Statement
No potential conflict of interest was reported by the authors.
Notes
1. Drawing on past research (e.g., Stromer-Galley 2004), we approach interactivity as the capacity of a technological affordance to facilitate human-to-human or human-to-computer exchange. Interaction is an individual's actual act of engaging with a technological affordance.
2. We targeted a minimum of 60 outlets each to ensure that the constructed weeks had a relative balance of news outlets. We sampled more than 60 newspaper and television news websites so that we could eliminate sites that did not have a news focus (e.g., a CW station).
3. After completing our coding, we analyzed whether our measures of social media button and comment section use varied by daypart. The trend was toward more Facebook, Twitter, and comment section use in the morning and evening than in the afternoon, but the differences were not statistically significant. The trend for sharing content via other social media (e.g., Google+) ran in the opposite direction, with more sharing in the afternoon than in the morning and evening, but again, the differences were not significant.
4. On sites with paywalls, we accessed as much of the site as possible without paying. In most instances, the paywall did not prevent us from completing our coding.
5. Articles were examined in this order. We used five articles as a guideline because five was the most common number of "most popular" and "most discussed" articles listed across the websites.
6. We coded the most prominent articles embedded within rotating article features, as well as those listed as links on the upper left side of the main page.
7. Poll categories were adapted from Kim, Weaver, and Willnat (2000).
8. We accessed each mobile site by searching for the outlet’s name on Safari browsers on an iPhone and then opening the main link.
9. Effect sizes are not the same as significance testing. Using the methods outlined by Cohen, we also conducted two-tailed, p < 0.05 significance tests. The critical values of h range from 0.25 for comparisons between top and local outlets to 0.55 for comparisons of local television and newspaper outlets with polls. Our broad conclusions are unchanged whether we rely on effect sizes or significance tests. We opt for effect sizes because they more clearly convey the magnitude of the observed differences.
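For readers unfamiliar with the statistic referenced in note 9, Cohen's h measures the difference between two proportions after an arcsine transformation. The sketch below is a minimal illustration of the standard formula, not the authors' analysis code, and the example proportions are hypothetical rather than drawn from the study:

```python
import math

def cohens_h(p1: float, p2: float) -> float:
    """Cohen's h effect size for the difference between two proportions.

    h = 2*arcsin(sqrt(p1)) - 2*arcsin(sqrt(p2))
    """
    return 2 * math.asin(math.sqrt(p1)) - 2 * math.asin(math.sqrt(p2))

# Hypothetical example: 70% of one group of sites offers a feature
# vs. 45% of another group.
h = cohens_h(0.70, 0.45)
print(round(abs(h), 2))  # → 0.51
```

By Cohen's rough benchmarks, |h| near 0.2 is a small difference, 0.5 medium, and 0.8 large, which is why the critical values of 0.25 and 0.55 in note 9 correspond to small-to-medium effects.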