Program Sessions

A Model for Electronic Resources Value Assessment

Pages 245-253 | Published online: 08 Apr 2013

Abstract

The current budgetary climate is forcing libraries to be more selective about e-resource purchases and renewals, and often to consider cancellations. The Mary and Jeff Bell Library at Texas A&M University–Corpus Christi has developed a model for assessing the value of our e-resources to our community of patrons that relies on a combination of metrics including content coverage, usage, patron needs and feedback, and costs. The model is applied to decisions about renewal or cancellation and potential new purchases. This session described the model in detail including an explanation of each metric used, the sources of data for each metric, and the weight each metric carries in the overall decision-making process. It concluded with a discussion of how a similar model may be implemented in other libraries.

Most libraries have faced budget challenges during recent years. The Mary and Jeff Bell Library at Texas A&M University–Corpus Christi is no exception. Although fortunate in not having to mandate cuts to serials and electronic resources budgets, in the summer of 2011, the library administration requested an analysis of existing electronic resources subscriptions in order to identify their respective value to the university community. The purpose of the analysis was to identify resources that might be candidates for future cancellations and to establish criteria for renewal and purchase decisions. Thus began a year-long project to develop a model for assessing the value of subscribed electronic resources to the members of the A&M–Corpus Christi user community.

The model (Figure 1) consists of two levels of analysis, each of which includes several metrics and relies on several sources of data. The first level of analysis is applied to all resources, while the second level is applied only to those resources for which additional analysis is warranted based on the first-level results. This article (and the presentation on which it is based) describes the model in detail, including an explanation of each metric used, the sources of data for each metric, and the weight each metric carries in the overall decision-making process. It also covers the determination of the level at which a decision is triggered by the model.

FIGURE 1 A model for electronic resources value assessment (color figure available online).


THE FIRST LEVEL OF ANALYSIS

As resources come up for renewal, the first level of analysis is applied to every resource. This level of analysis is based on data the Library routinely collects in monthly and annual reports dating back to 2000. Thus, it was relatively easy to obtain the three years of data used in the first level of analysis.

First-Level Data Sources

The first level of analysis consists of four data points: sessions, searches, full text downloads (FTDs), and link outs. These data are provided primarily by resource vendors and defined by Project COUNTER (Counting Online Usage of Networked Electronic Resources). Searches are defined as “a specific intellectual query, either equated to submitting the search form of the online service to the server or by clicking a hyperlinked word or name which executes a search for that word or name.”Footnote 1 Sessions are defined as “a successful request of an online service. It is one cycle of user activities that typically starts when a user connects to the service or database and ends by terminating activity that is either explicit (by leaving the service through exit or logout) or implicit (timeout due to user inactivity).”Footnote 2

FTDs are also defined by Project COUNTER as “a uniquely identifiable piece of published work that may be: a full-text article (original or a review of other published work); an abstract or digest of a full-text article; a book chapter; an encyclopedia entry; a sectional HTML page; supplementary material associated with a full-text article (e.g., a supplementary data set), or non-textual resources, such as an image, a video, or audio.”Footnote 3 Project COUNTER separates FTDs generated by Web-scale discovery systems and OpenURL links; however, for the purposes of this model both of these data points are included in FTDs. The fourth data point is link outs, which are obtained from the vendor of the content management system through which the Library provides resource access (LibGuides). Link outs represent the resources that patrons of the Library actively choose to use for their research and course work rather than those resources to which they are directed via an OpenURL link or discovery service. In that respect, they are unique to the community of users served by the Mary and Jeff Bell Library and provide insight into its patrons’ specific needs. It is important to note that to date the Library has only one year's worth of these data.

Creating a Baseline for Comparison

The first-level analysis data for each resource are compared to baseline data taken from the most heavily used of the Library's resources. The baseline was created by averaging data for each of the four data points in the first-level analysis from the Library's twenty highest usage resources in each data point. For the Mary and Jeff Bell Library this consists of data for five resources that fell into the top twenty in all four data sets (searches, sessions, FTDs, and link outs) as well as data for five resources that fell into the top twenty in three of the four data sets.

Data for searches, sessions, and FTDs for each resource were plotted over time in order to create a graphic display of trending increases or decreases. Outliers were removed in order to make the displays more visually useful. For example, Figure 2 shows the trends in changing FTDs for all eleven baseline resources (top) and for nine of the eleven baseline resources (bottom). The bottom graph in Figure 2 depicts trends in FTDs over time more clearly than the top graph, and thus represents the baseline trends over time for FTDs against which graphs of FTD trends for resources being analyzed would be compared.

FIGURE 2 Visual depiction of two and three year trends in FTDs for baseline resources.

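The outlier trimming described above can be sketched as follows. This is a minimal illustration in Python, assuming usage data as a mapping of resource names to yearly FTD counts, and using a hypothetical rule (drop resources whose totals exceed three times the group median) in place of the visual judgment the Library applied:

```python
from statistics import median

def trim_high_outliers(ftd_by_resource, k=3.0):
    """Drop resources whose total FTDs exceed k times the group median,
    so trend graphs stay readable. The article removed two of eleven
    baseline resources by inspection; the k-times-median rule here is
    an illustrative stand-in, not the Library's actual criterion."""
    totals = {r: sum(years) for r, years in ftd_by_resource.items()}
    cutoff = k * median(totals.values())
    return {r: years for r, years in ftd_by_resource.items()
            if totals[r] <= cutoff}

# Nine typical resources plus two heavy-usage outliers (invented data)
data = {f"R{i}": [300, 330, 360] for i in range(9)}
data["Big1"] = [7000, 8000, 9000]
data["Big2"] = [6000, 7000, 8000]
kept = trim_high_outliers(data)  # the two outliers are excluded
```

A median-based cutoff is used rather than a mean-based one because a mean is itself pulled upward by the very outliers being screened out.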

In addition to graphing trends over time for the four data points, the first-level analysis used a set of baseline means: mean change in searches over three years, mean change in sessions over three years, and mean change in FTDs over three years for the eleven core databases, as well as mean cost per search, mean cost per session, mean cost per FTD (when applicable), and link outs. Table 1 depicts these means and how they were calculated for the resources that make up the core for the Mary and Jeff Bell Library.

TABLE 1 Baseline Means for the Eleven Core Databases
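Under the assumption that each core database's record holds yearly counts for each metric plus an annual cost, the baseline means could be computed along these lines (the field names and record layout are illustrative, not the Library's actual data format):

```python
def baseline_means(core):
    """For each usage metric, compute the mean change over the reported
    years and the mean cost per use in the most recent year, averaged
    across the core databases. Databases without a cost (e.g., those
    provided at no charge) are excluded from the cost-per-use mean."""
    metrics = ("searches", "sessions", "ftds")
    baseline = {}
    for m in metrics:
        changes, cpu = [], []
        for db in core.values():
            counts = db[m]
            changes.append(counts[-1] - counts[0])
            if db.get("cost"):
                cpu.append(db["cost"] / counts[-1])
        baseline[f"mean_change_{m}"] = sum(changes) / len(changes)
        baseline[f"mean_cost_per_{m}"] = sum(cpu) / len(cpu) if cpu else None
    return baseline

# Two invented core databases with two years of data each
core = {
    "A": {"searches": [100, 150], "sessions": [50, 60],
          "ftds": [200, 260], "cost": 520},
    "B": {"searches": [200, 250], "sessions": [80, 100],
          "ftds": [400, 520], "cost": 1040},
}
b = baseline_means(core)
```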

The question asked of the data at the first level of analysis was: Does the resource in question compare favorably to the baseline? An answer of yes implied that the data supported renewal of the resource. An answer of no implied that the data supported cancellation of the resource. An answer of maybe implied that additional analysis was required. The trigger point for these answers depends on the context in which the analysis is being conducted. In the case of the Mary and Jeff Bell Library, the trigger points were relatively high since, at the time of this writing, there is no mandate for cancellations.
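The yes/no/maybe triage above might be sketched as follows. The scoring rule (the fraction of shared metrics on which a resource meets the baseline) and the trigger points are illustrative assumptions, since the article deliberately leaves both to local context:

```python
def first_level_decision(resource, baseline, renew_at=0.75, cancel_at=0.25):
    """Answer the first-level question -- does the resource compare
    favorably to the baseline? -- as 'renew', 'cancel', or
    'analyze further'. A resource scores on each metric it shares
    with the baseline; the renew_at / cancel_at trigger points are
    hypothetical and should be set for the local context."""
    metrics = [m for m in resource if m in baseline]
    hits = sum(resource[m] >= baseline[m] for m in metrics)
    score = hits / len(metrics)
    if score >= renew_at:
        return "renew"
    if score <= cancel_at:
        return "cancel"
    return "analyze further"

# Invented baseline and resource figures
baseline = {"searches": 100, "sessions": 50, "ftds": 200, "link_outs": 10}
first_level_decision(
    {"searches": 120, "sessions": 60, "ftds": 250, "link_outs": 5},
    baseline)  # meets baseline on 3 of 4 metrics
```

Raising `renew_at` and `cancel_at` corresponds to the "relatively high" trigger points the Library chose in the absence of a cancellation mandate: fewer resources are cancelled outright, and more are routed to the second level of analysis.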

THE SECOND LEVEL OF ANALYSIS

When additional analysis was deemed necessary based on the initial analysis, a second level of analysis was applied. This level of analysis was based on a set of four additional data points: overlap, citations, journal usage, and impact factor. These data were not routinely collected by the Library, making the second level of analysis somewhat more time-consuming than the first.

Second-Level Data Sources

Overlap data described the amount of content included in a particular electronic resource that is unique to the Library's entire collection of electronic resources. It was obtained from the Library's link resolver vendor in the form of an Excel report. The report included a list of the unique titles that can be analyzed both by quantity (i.e., the number and/or percentage of unique titles the resource contains) and by quality. Quality was established using the second, third, and fourth data points in the analysis.

Lists of citations to the unique titles in the overlap report can be obtained from a variety of sources. The choice of the source will depend on variables like the primary disciplinary focus of the resource and the specific sub-group of local users of the resource, and can be customized to the institution or community of users served by the library. For example, for a resource whose primary disciplinary focus is psychology, the Library compared its unique title list to a list of titles recommended by the American Psychological Association (APA). More than 20% of the unique titles were included on the APA list, which led the Library to conclude that the resource had the potential to be of use to our patrons. Other potential local sources of title lists for comparing a resource's unique titles include lists of journals cited in the institution's master's theses and doctoral dissertations, journals requested for electronic reserves, journals in which faculty publish, and faculty requests. External sources include discipline specific professional organizations as well as proprietary lists such as those published by Ulrich's and Magazines for Libraries.
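As a sketch of the comparison described above, the share of a resource's unique titles that appear on an external recommended list (such as the APA list) could be computed like this; the 20% threshold mirrors the figure reported in the text, while the case-insensitive matching is an added assumption:

```python
def unique_title_quality(unique_titles, recommended, threshold=0.20):
    """Return the share of a resource's unique titles found on an
    external recommended list, and whether that share clears the
    threshold. Titles are matched case-insensitively after trimming
    whitespace -- a simplification; real title matching usually needs
    ISSNs or fuzzier normalization."""
    norm = lambda t: t.strip().lower()
    uniq = {norm(t) for t in unique_titles}
    rec = {norm(t) for t in recommended}
    share = len(uniq & rec) / len(uniq)
    return share, share > threshold

# Invented title lists
uniq = ["Journal A", "Journal B", "Journal C", "Journal D", "Journal E"]
rec = ["journal a", "Journal C", "Journal Z"]
share, useful = unique_title_quality(uniq, rec)  # 2 of 5 titles match
```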

Usage of the journal titles unique to a particular resource in the Library's collection is the third data point for the second-level analysis. Individual title-level usage data are obtained from publishers, from the vendor of the resource under analysis, and from hits to the local A–Z list of journals. Depending on the local context of decision making, data from one, two, or all three of these sources can be used at this data point. High title-level usage of one or more of the unique titles suggests that the resource is a candidate for renewal, whereas low title-level usage of most of the unique titles suggests that it is a candidate for cancellation.

Journal impact factors are the final data point in the second-level analysis. These are primarily obtained from Journal Citation Reports. The Mary and Jeff Bell Library's analyses used the five-year impact factor, consistent with the model's reliance on trends over time, but the most recent one-year impact factor or Eigenfactor could also be used, depending on the context and the specificity of the electronic resource being analyzed.

Decision Trigger Points

The choice of a trigger point for renewal or cancellation decisions will depend on the context in which the decision is being made. In the case of the Mary and Jeff Bell Library, trigger points for renewal are relatively high because the Library is not currently in a situation where a budget reduction via cancellations is mandatory.

ADAPTING THE MODEL FOR BROADER USE

One of the goals of the project through which this model was developed was to create it with a level of flexibility that would allow it to continue to evolve and thus remain useful to the Mary and Jeff Bell Library, as well as to allow other libraries to adapt it to their own use. The current iteration of the model has accomplished that goal.

At the first level of analysis, the model will withstand the upcoming changes to the COUNTER Code of Practice: Release 4, which will become effective on December 31, 2013.Footnote 4 This release includes a change to the report in which sessions are currently reported. Sessions will be replaced with statistics that report “record views” and “result clicks,” which will be incorporated into the model in place of the sessions data point.Footnote 5 The fourth data point, link outs, is also flexible. Other libraries wishing to make use of the model might not use the same content management system for providing access to electronic resources and therefore may want to replace the link out data used by the Mary and Jeff Bell Library with their own link out data. Alternatively, they may wish to replace the link out data point with another measure of their community of users’ conscious choices of which electronic resource to use.

Other libraries may wish to alter the way the baseline for comparison in the first level of analysis is calculated. For example, the Mary and Jeff Bell Library has access to a number of electronic resources that are provided to them through the Texas State Library's TexShare program at a very low cost.Footnote 6 Some of these electronic resources are among the most heavily used resources in the Mary and Jeff Bell Library's collection and were therefore included in the calculation of our baseline. This, in turn, caused the very low values in the baseline cost per use data. It was a conscious decision on the part of the Mary and Jeff Bell Library to include these resources in our baseline and should be a conscious decision for other libraries wishing to use the model since it will affect the decision trigger points. Leaving such low cost resources in the calculation of baseline data might cause other libraries to increase their decision trigger points associated with the baseline in order to compensate for them.

Except for the overlap data point, the data points in the second level of analysis are similarly flexible. The overlap data point is central to the model as it currently exists since it is the means of determining the unique titles in the resource under analysis and therefore should only be replaced with another means of determining which elements of the resource are unique to the library's collection.

Some libraries that do not subscribe to Journal Citation Reports may be unable to access those data and may wish to leave that data point out of the model or to replace it with an alternative measure of a journal's impact on the scholarly community; several such alternatives have been developed in recent years.Footnote 7 Other methods of measuring journal usage could likewise replace the citations and journal usage data points, and better methods may well be developed in the future.

Notes

1. Project COUNTER, “COUNTER—Counting Online Usage of Networked Electronic Resources | Home,” n.d., http://www.projectcounter.org/ (accessed July 8, 2012).

2. Ibid.

3. Ibid.

4. Project COUNTER, “The COUNTER Code of Practice for E-resources: Release 4,” Project COUNTER, April 2012, http://www.projectcounter.org/r4/COPR4.pdf (accessed July 8, 2012).

5. Ibid.

6. Library of Texas, “Library of Texas—TexShare Database Menu,” TexShare, 2012, http://www.libraryoftexas.org/service-proxy/texshare/?orgid=340 (accessed July 8, 2012).

7. Younghee Noh, “A Korean Study on Verifying Evaluation Indicators and Developing Evaluation Scales for Electronic Resources Using a Pilot Evaluation,” Libri: International Journal of Libraries & Information Services 60, no. 1 (March 2010): 38–56. doi:10.1515/libri.2010.004
