
Finding the news and mapping the links: a case study of hypertextuality in Dutch-language health news websites

Pages 2138-2155 | Received 05 Dec 2017, Accepted 10 May 2018, Published online: 05 Jun 2018
 

ABSTRACT

This study treats hyperlinks as digital navigational cues that can guide users through an increasingly complex and vast online health information landscape, and examines how hypertextuality on both search engines and health news websites mediates access to further health-related information. This matters because online news media are frequently used and convenient sources of health information. The methodology unfolds in two steps. First, an environmental scan of search engine result pages for the term ‘health news’ was conducted. Second, an automated quantitative content analysis (N = 5428) of external hyperlinks found on three types of health news websites, i.e., net-native, mixed, and legacy news brands, was performed. Most importantly, the study challenges the dominant internal-external distinction by systematically distinguishing genuine external hyperlinks from pseudo-external hyperlinks when comparing types of online health news. Net-native news websites provide more hyperlinks to thematically related information than legacy news websites with print origins. The latter often include pseudo-external hyperlinks to thematically unrelated but organizationally affiliated websites, thus favoring financial relationships over thematic coherence as an incentive to link.
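
The three-way distinction described above (internal, pseudo-external, genuine external) can be operationalized by comparing the host of each outgoing link with the host of the article and with a list of organizationally affiliated domains. The Python sketch below illustrates one possible way to do this; the domain names and the AFFILIATED_DOMAINS set are purely illustrative assumptions, not the study's actual coding scheme.

from urllib.parse import urlparse

# Hypothetical set of domains belonging to the same media group as the news
# site being analyzed; in the study this information would come from the
# organizational background of each news brand.
AFFILIATED_DOMAINS = {"newsbrand-magazine.example", "newsbrand-shop.example"}

def classify_link(article_url: str, link_url: str,
                  affiliated_domains: set = AFFILIATED_DOMAINS) -> str:
    """Classify a hyperlink as 'internal', 'pseudo-external', or 'genuine-external'."""
    source_host = urlparse(article_url).netloc.lower().lstrip("www.")
    target_host = urlparse(link_url).netloc.lower().lstrip("www.")

    if target_host == "" or target_host == source_host:
        return "internal"            # same website (or a relative link)
    if target_host in affiliated_domains:
        return "pseudo-external"     # different website, same organization
    return "genuine-external"        # different website, different organization

# Example (hypothetical URLs):
# classify_link("https://news.example/health/article-1",
#               "https://newsbrand-shop.example/offer")  ->  'pseudo-external'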

Disclosure statement

No potential conflict of interest was reported by the author.

Notes on contributor

Joyce Stroobant is a PhD student at Ghent University (Department of Communication Sciences) and a member of the Center for Journalism Studies and of the interdisciplinary research group Health, Media and Society. Her research focuses on journalistic sourcing practices for health news in a rapidly changing digital environment. A recent peer-reviewed publication is ‘Tracing the Sources’, published in Journalism Practice [email: [email protected]].

Notes

1 For more information on the classification and on the organizational background of these websites, please contact the author.

2 For all crawled seeds the robots.txt file was checked to ensure the crawler was not disallowed from visiting (parts of) the seed website. The robots.txt file is publicly available and can be viewed by appending ‘/robots.txt’ to the website URL, e.g., http://www.bbc.com/robots.txt. Through this Robots Exclusion Protocol, web designers give explicit instructions to bots; search engines, for example, use bots to crawl and index the Internet's webpages. Web designers may bar bots from certain areas of a site, or impose a crawl delay so as not to overload the server. Nevertheless, some bots simply ignore robots.txt, e.g., malware bots harvesting e-mail addresses (http://www.robotstxt.org).
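
A minimal sketch of such a robots.txt check, using Python's standard urllib.robotparser. The crawler's user-agent name ('HealthNewsCrawler') and the example path are illustrative assumptions, not the study's actual configuration.

from urllib.robotparser import RobotFileParser

# Fetch and parse the Robots Exclusion Protocol rules for a seed website
# (here the BBC site mentioned in the note as an example).
rp = RobotFileParser("http://www.bbc.com/robots.txt")
rp.read()

# Check whether a given path may be crawled by our (hypothetical) user agent.
if rp.can_fetch("HealthNewsCrawler", "http://www.bbc.com/news/health"):
    print("Crawling this path is allowed for our user agent.")
else:
    print("This path is disallowed; the seed (or this section) should be skipped.")

# Some sites also declare a crawl delay per user agent; honoring it avoids
# overloading the server.
delay = rp.crawl_delay("HealthNewsCrawler")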

Additional information

Funding

This work was supported by the Bijzonder Onderzoeksfonds (Special Research Fund) under Grant BOFGOA 2014 000 604 ‘(De)constructing Health News’.