Abstract
The growing use of difficult-to-parse algorithmic systems in the production of news, from algorithmic curation to automated writing and news bots, problematizes the normative turn toward transparency as a key tenet of journalism ethics. Pragmatic guidelines that facilitate algorithmic transparency are needed. This research presents a focus group study that engaged 50 participants across the news media and academia to discuss case studies of algorithms in news production and to elucidate factors that are amenable to disclosure. Results indicate numerous opportunities to disclose information about an algorithmic system across layers such as the data, model, inference, and interface. Findings underscore the deeply entwined roles of human actors in such systems, as well as challenges to the adoption of algorithmic transparency, including the dearth of incentives for organizations and the concern about overwhelming end-users with a surfeit of transparency information.
Acknowledgements
From the Tow Center for Digital Journalism at Columbia University, we wish to thank Fergus Pitt for his deft help in moderating the focus groups and Claire Wardle for helpful feedback on an earlier version of this paper.
Notes
1. We define the news production process as the research, decision-making, creation, and distribution of news stories using various sources, tools (from pencil to algorithms), and platforms.