Original Articles

Big Data and New Metrics of Scholarly Expertise

Pages 288-313 | Published online: 28 Nov 2014
 

Abstract

The 2013 National Communication Association convention included a deliberative double-session on big data, featuring Professor Lawrence Martin, co-founder of one of the leading business intelligence firms serving institutions of higher education, Academic Analytics. Analyzing the program transcript and deliberative polling results of audience feedback, this article explores how the development and use of new scholarly metrics implicate professional knowledge production in communication, and conversely, how conceptual tools from the rhetorical tradition elucidate the surge of digital scholarship that promises to reshape the intellectual landscape. The essay's middle sections isolate in discourse by and about Academic Analytics three “congruities of expertise”—techne, rhetorical exigence, and audience deference—and three topoi of assessment expertise, identified in the analysis as deference, humanizing metric, and negative synecdoche. The article then reviews an ongoing effort to curate a Wikipedia catalog of “measurement fallacies,” designed to supply inventive resources for scholars engaged in practical dialogues in which big data and new metrics are deployed to assess scholarly expertise.

Notes

[1] Lokman I. Meho, “The Rise and Rise of Citation Analysis,” Physics World (January 2007); in Measuring Scholarly Metrics, ed. Gordon R. Mitchell, University of Nebraska Papers in Communication Studies 25 (Lincoln, NE: University of Nebraska DigitalCommons, 2011). http://digitalcommons.unl.edu/commstudiespapers/25/ (accessed June 17, 2014).

[2] Michael Jensen, “Scholarly Authority in the Age of Abundance: Retaining Relevance within the New Landscape,” Keynote Address at the JSTOR annual Participating Publisher's Conference, New York, New York, May 13, 2008. http://www.nap.edu/staff/mjensen/jstor.htm (accessed June 17, 2014).

[3] Philippe C. Baveye, “Sticker Shock and Looming Tsunami: The High Cost of Academic Serials in Perspective,” Journal of Scholarly Publishing 41 (2010): 191–215.

[4] National Communication Association, Impact Factors, Journal Quality, and Communication Journals: A Report for the Council of Communication Associations (Washington, DC: National Communication Association, 2013). http://www.natcom.org/uploadedFiles/More_Scholarly_Resources/CCA%20Impact%20Factor%20Report%20Final.pdf (accessed June 16, 2014).

[5] National Communication Association, Impact Factors, 8.

[6] National Communication Association, Impact Factors, 11.

[7] National Communication Association, Impact Factors, 17.

[8] Academic Analytics was founded by Martin in 2005 “to address universities’ growing needs for accurate and timely faculty scholarly productivity data” by providing “high-quality, custom business intelligence data and solutions for research universities in the United States and the United Kingdom.” “About Academic Analytics LLC,” http://www.academicanalytics.com/Public/About and http://www.academicanalytics.com (accessed June 20, 2014). Academic Analytics provides university administrators with comparative data: “The Academic Analytics Database (AAD) includes information on over 270,000 faculty members associated with more than 9,000 Ph.D. programs and 10,000 departments at more than 385 universities in the United States and abroad. These data are structured so that they can be used to enable comparisons at a discipline-by-discipline level as well as overall university performance.” “What We Do: The Comparative Database,” http://www.academicanalytics.com/Public/WhatWeDo (accessed October 6, 2014). The firm's “mission is to provide universities and university systems with objective data that administrators can use to support the strategic decision-making process as well as a method for benchmarking in comparison to other institutions.” The organization's website introduction states: “Our data are a useful tool to guide them [university leaders] in understanding strengths and weaknesses, establishing standards, allocating resources, and monitoring performance.”

[9] E. Johanna Hartelius and Gordon R. Mitchell, “NCA-Forum Double Session on Scholarly Metrics in a Digital Age,” Social Epistemology Review and Reply Collective 3 (2014): 1–29. http://wp.me/p1Bfg0-1rw (accessed June 16, 2014). In the remainder of the essay, excerpts and quotations from the event are cited with references to this publicly available transcript.

[10] NCAF Bylaws, adopted December 2008.

[11] Worth noting here is that a second senior scholar in Communication, a dean at a prestigious research university, was invited as a featured speaker. On November 20, the Dean withdrew, citing in email communication discomfort with “providing cover” for an event in which “Academic Analytics is driving the entire session.” Arguments regarding the importance of transparency, the utility of open deliberation of scholarly metrics among scholars, and the symbolic power of a disciplinary leader actively participating in such efforts did not persuade the Dean.

[12] According to Academic Analytics’ “Frequently Asked Questions” document, “The Faculty Scholarly Productivity Index (FSPi) is a subset of the FSP database. FSPi was developed to enable broader comparisons of scholarly performance across metrics and disciplines within a university and comparison of the overall performance of universities. The purpose of FSPi is to facilitate ‘apples-to-apples’ comparisons; for example, between the Department of English Language and Literature and the Department of Chemistry. FSPi includes metrics that are independent of discipline values and of the portfolio of disciplines at universities to rank programs within a discipline, universities within a broad field, or entire universities, when such a ranking is desirable.” “Frequently Asked Questions: Academic Analytics 2011 Database (AAD 2011),” June 2013, 7.

[13] Hartelius and Mitchell, 16.

[14] Hartelius and Mitchell, 18.

[15] Hartelius and Mitchell, 18.

[16] An illustration of the flower chart is published on Academic Analytics’ website. “What We Do: Our Online Portal,” http://academicanalytics.com/Public/WhatWeDo (accessed September 22, 2014).

[17] Hartelius and Mitchell, 22.

[18] Hartelius and Mitchell, 22.

[19] Hartelius and Mitchell, 26.

[20] Professor Simmons is a co-editor of the open-access online journal Liminalities, which publishes scholarship, aesthetic texts, and media projects in performance studies.

[21] E. Johanna Hartelius, The Rhetoric of Expertise (Lanham: Lexington Books, 2011), 11.

[22] Hartelius, 163.

[23] Hartelius, 3.

[24] Hartelius, 29, original emphasis.

[25] Hartelius, 20. See Aristotle, Ethics, trans. J. A. K. Thomson (Baltimore: Penguin Books, 1955), 175.

[26] Hartelius and Mitchell, 23.

[27] “Frequently Asked Questions,” 3.

[28] “Academic Analytics Database User Guide,” 6.

[29] “Academic Analytics Database User Guide,” 4.

[30] “Academic Analytics Database User Guide,” 10.

[31] “Frequently Asked Questions,” 4.

[32] It is important to clarify here that emphasizing the rhetorical construction of a techne is not the same as claiming or assuming an absence of “real” expertise. The project of analyzing the rhetoric of expertise becomes less productive when it is reduced to arbitrating a reductive dichotomy: Is expertise entirely symbolic, or is there some recalcitrance of real knowledge? Does Academic Analytics have an epistemology and make references to it, or do the references themselves produce their referent? Our intent is to retain this heuristically useful tension, assuming that a rhetorical perspective encompasses both. This stance enables us to examine academic assessment as a rhetoric of expertise, and to contextualize that examination in relation to the academy's other rhetoric of expertise, content expertise.

[33] Lloyd Bitzer, “The Rhetorical Situation,” Philosophy and Rhetoric 1 (1968): 6.

[34] Hartelius, 26.

[35] “Academic Analytics Database User Guide,” 9.

[36] Speed recurs in other portions of the User Guide as well. See, for example, the term “snapshot,” 10.

[37] Hartelius and Mitchell, 23.

[38] Hartelius and Mitchell, 18.

[39] Bitzer, 6.

[40] Hartelius, 23–4.

[41] Hartelius, 24.

[42] Hartelius and Mitchell, 9. See Academic Analytics figure.

[43] “If there are topics described in this guide that are not available to you and you think you should have access to them, please contact your institutional Data Custodian.” “Academic Analytics Database User Guide,” 5.

[44] “Academic Analytics Database User Guide,” 5.

[45] Hartelius and Mitchell, 15, emphasis added.

[46] See also “facilitate,” Hartelius and Mitchell, 23.

[47] Hartelius and Mitchell, 15.

[48] Hartelius and Mitchell, 17.

[49] Hartelius and Mitchell, 19.

[50] Hartelius and Mitchell, 27.

[51] “Academic Analytics Database User Guide,” 17.

[52] Hartelius and Mitchell, 18.

[53] Casuistry allows language users to make arguments in the grey area between certainty and probability; as such, it is “intimately related to the process and business of language.” Jaime Lane Wright, “Argument in Holocaust Denial: The Differences between Historical Casuistry and Denial Casuistry,” Argumentation and Advocacy 43 (2006): 52. Casuistic “stretching” enables situational reasoning, making it an instrument of both/either good and/or evil. Martin's construction of a “humanizing” metric exemplifies what Wright, referencing Kenneth Burke, calls the “bendability” of language, as a function of which multiple attributes of Academic Analytics—attributes which may be mutually exclusive within the stricter principles of formal logic—may be combined to sway disparate audiences. Further, he illustrates Wright's contention that a reason for rhetoricians to study casuistry is that it is convincing.

[54] Hartelius and Mitchell, 8.

[55] Hartelius and Mitchell, 8.

[56] Hartelius and Mitchell, 10.

[57] Hartelius and Mitchell, 24.

[58] Hartelius and Mitchell, 24.

[59] Hartelius and Mitchell, 3, emphasis added.

[60] Hartelius and Mitchell, 8, emphasis added.

[61] Hartelius and Mitchell, 11, emphasis added.

[62] Hartelius and Mitchell, 28.

[63] Hartelius and Mitchell, 18.

[64] “Academic Analytics Database User Guide,” 15.

[65] “Academic Analytics Database User Guide,” 10.

[66] Amberyn Thomas, “Providing a Library Metrics Service: A Perspective from an Academic Library within an Australian University,” Bibliometrics Seminar, May 22, 2014, University Library System, University of Pittsburgh. PowerPoint presentation archived in the University of Pittsburgh D-Scholarship Digital Repository. http://d-scholarship.pitt.edu/21657/ (accessed June 17, 2014). Original emphasis.

[67] See, for example, David Hitchcock and Bart Verheij, eds., Arguing on the Toulmin Model: New Essays in Argument Analysis and Evaluation (New York: Springer, 2006).

[68] See “Fallacies of Measurement,” subsection of the Wikipedia entry on “Fallacy.” http://en.wikipedia.org/wiki/Fallacy#Fallacies_of_Measurement (accessed June 18, 2014).

[69] Philippe C. Baveye, “Sticker Shock and Looming Tsunami: The High Cost of Academic Serials in Perspective,” Journal of Scholarly Publishing 41 (2010): 191–215. doi:10.1353/scp.0.0074.

[70] Zachary Stein, “Myth Busting and Metric Making: Refashioning the Discourse about Development,” Integral Leadership Review 8 (2008). http://integralleadershipreview.com/ (accessed June 19, 2014).

[71] Hartelius and Mitchell, 18.

[72] Hartelius and Mitchell, 3.

[73] Amos Tversky and Daniel Kahneman, “Judgment under Uncertainty: Heuristics and Biases,” Science 185 (1974): 1124–31.

[74] See National Communication Association, Impact Factors, 4–13.

[75] Eugene Garfield, “What Citations Tell Us About Canadian Research,” Canadian Journal of Library and Information Science 18 (1993): 34.

[76] Hartelius and Mitchell, 18.

[77] David A. Freedman, Michael S. Lewis-Beck, Alan Bryman, and Tim Futing Liao, eds., Encyclopedia of Social Science Research Methods (Thousand Oaks, CA: Sage, 2004), 293–5.

[78] Henry L. Allen, “Faculty Workload and Productivity: Ethnic and Gender Disparities,” NEA 1997 Almanac of Higher Education (1997): 39.

[79] In this sense, the catalog resonates with the “Ten Things You Must Not Do” list compiled by a group of bibliometricians led by Paul Wouters. Wouters’ group explains the rationale for their list, saying “researchers who are subjected to assessment should be able to provide counter-arguments if they think the filters have been used inappropriately.” Paul Wouters, Wolfgang Glänzel, Jochen Gläser, and Ismael Rafols, “The Dilemmas of Performance Indicators of Individual Researchers: An Urgent Debate in Bibliometrics,” International Society for Scientometrics and Informetrics Newsletter 9 (September 2013): 50. http://www.issi-society.org/archives/newsletter35.pdf (accessed June 17, 2014).

[80] G. Thomas Goodnight, “A ‘New Rhetoric’ for a ‘New Dialectic’: Prolegomena to a Responsible Public Argument,” Argumentation 7 (1993): 329–42.
