Content Analysis by the Crowd: Assessing the Usability of Crowdsourcing for Coding Latent Constructs
