Editor's Introduction

Pages 177-181 | Published online: 23 Dec 2009

This special issue of Social Epistemology contains the first comprehensive collection of articles devoted to the US National Science Foundation’s (NSF) Broader Impacts Criterion (BIC). Since its inception in 1997, BIC has been attended by controversy. Initially ignored by many proposal writers and reviewers, BIC has been the focus of complaints from scientists and engineers, of queries from Congress, and of attempts by NSF to improve its understanding and utilization.

The genesis of this volume was a research workshop on “Making Sense of the Broader Impacts of Science and Technology”, held in August 2007 at the Colorado School of Mines, supported by NSF, and co‐sponsored by the Colorado School of Mines’ Hennebach Program in the Humanities and the Scientific Freedom, Responsibility & Law Program of the American Association for the Advancement of Science. Several of the contributors to this special issue—Warren Burggren, Erik Fisher, Robert Frodeman, Kristen Intemann, and Jonathan Parker—were participants in the 2007 workshop. As we continued to pursue our research on BIC, we encountered a number of other researchers with their own distinctive approaches to making sense of broader impacts. Despite their varied disciplinary backgrounds, all of the contributors to this issue are united in taking seriously the aim of Social Epistemology to serve as a guide for directing contemporary knowledge enterprises. Featuring articles from scientists, philosophers, researchers in science and technology studies, informal science education and public outreach professionals, and instructors in responsible conduct of research, this volume, we hope, will become an essential reference for scientists, engineers, and funding agency staff, as well as for policy‐makers and citizens interested in the return on their investments in basic research.

Although the specifics of the process vary from one agency to another—and sometimes within a single agency—most public science and technology (S&T) agencies worldwide rely heavily on peer review to make specific funding decisions. This reliance on the process of peer review is due in part to a belief that peer review in some sense guarantees accountability by rendering funding decisions that are most likely to produce results. Nevertheless, although prima facie support exists for the use of peer review to make S&T funding decisions—who could be better qualified to judge proposed S&T research than S&T experts?—funding agencies are facing increasing demands, especially from S&T policy decision‐makers, for a demonstrable return on the investment of public funds in S&T research.

The increasing demand for demonstrable results has led to the widespread introduction of considerations of societal impact (SI) into the peer review process of public S&T funding agencies worldwide.Footnote 1 For example, in September 2008 the science minister of the Netherlands, Ronald Plasterk, shifted €100 million of his country’s funding for universities to the Netherlands Organisation for Scientific Research. This decision was designed to ensure that more money would be distributed based on peer review, rather than simply funneled to the most established investigators. In addition, beginning in 2009, all researchers applying for Netherlands Organisation for Scientific Research funding were afforded the option of addressing the “expected impact of their research on society”—an option that had previously been reserved only for the technical disciplines in the Netherlands, which, unlike the pure sciences, could reasonably be expected to produce technologies with economic impacts.

Since April 2009, applicants for funding from the Research Councils UK have been required to include both an “impact summary” and an “impact plan” to ensure that researchers have considered the impact on society of their proposed research, as well as how to maximize that (presumably positive) impact. A story from the Times Higher Education Supplement, 15 January 2009,Footnote 2 strikes a familiar tone, at least to anyone who has paid attention to the US context for the past 12 years. Researchers seeking funding are concerned that such impact criteria are likely to reduce opportunities for funding—especially for those researchers whose main interest is in pursuing basic research. Meanwhile, funding agency officials insist that the effect of impact criteria on what research is actually funded will be minimal.

As this special issue lays out, the introduction of such SI criteria into the proposal peer review process introduces a peculiar strain on the process, not least because expertise in particular areas of S&T research is very often irrelevant to addressing the SI of proposed research. Indeed, Barry Bozeman and Craig Boardman begin this issue with an argument that the idea of introducing SI criteria into the process of peer review is “fundamentally flawed”: “Retooling or refining the broader impacts criterion does not alter the fact that conventional peer review, based on specialized scientific and technical expertise, is not up to the task of ensuring adequate judgments about social impact”. Bozeman and Boardman also offer several alternative ways of addressing societal impacts.

Melanie Roberts, who worked within NSF for a year while serving as a Science and Technology Policy Fellow at the American Association for the Advancement of Science, presents the findings of an empirical study of NSF award abstracts. The aim of her contribution is to answer whether including potential societal benefits in the broader impacts criterion actually leads to benefits for society. She concludes that many potentially useful results may not be disseminated beyond the scientific community, despite the fact that BIC ought to encourage such broad dissemination of results.

Warren Burggren is a biologist who, although sympathetic to the idea of BIC in principle, questions the idea that all scientists and engineers ought to engage in broader impacts activities. He argues that asking scientists and engineers not trained in broader impacts activities to engage in such activities is inefficient at best, and at worst leads to incompetence. Burggren suggests alternative models that may increase both the efficiency of broader impacts activities and the quality of the research NSF funds.

Both a working scientist and Curator of Vertebrate Paleontology at the Florida Museum of Natural History, Bruce J. MacFadden agrees with Burggren that asking scientists and engineers not trained in broader impacts activities to engage in those activities as professionals is a mistake. However, he offers another alternative to those presented by Burggren: train them. MacFadden describes a course—to my knowledge, the first of its kind—designed to train graduate students in one aspect of broader impacts, working with a museum, and offers suggestions for teaching similar courses.

Carol Lynn Alpert of the Museum of Science, Boston, offers a model for interdisciplinary collaboration to address BIC. Identifying strongly with the pro‐BIC camp, she suggests that it is possible “to transform the perceived burden of the requirement into an opportunity to provide enhanced value to the constituencies of the partnering organizations, as well as to the larger community”—and then she describes how.

Kristen Intemann explores the potential of feminist philosophy of science to motivate S&T researchers to address BIC. Intemann distinguishes three different rationales for the importance of diversity in science and then turns to a discussion of BIC. Although there are obvious social justice issues involved in “increasing the participation of underrepresented groups”—as one element of BIC suggests—Intemann argues that there are also important epistemological reasons for doing so.

Although Erik Fisher and Michael Lightner argue that “engineering research laboratory directors have a responsibility to inform graduate engineering students who participate in their research projects of the potential broader social dimensions of those projects”, they note that while BIC “theoretically provides an opportunity” for researchers to address the societal impacts of their research, it does not mandate that they do so. Accordingly, BIC “pays no attention to the moral integrity of researchers beyond their responsibility to make a case for the ‘beneficial’ social relevance of the activities of the proposed research”. Fisher and Lightner recommend instead that “an ongoing discursive process” to address societal impacts be integrated with engineering research.

Simone van der Burg makes a similar argument with reference to two funding agencies in the Netherlands, SenterNovem and the Dutch Technology Foundation. She argues that addressing only intellectual merit and economic impact is insufficient, and that these agencies should “do more to enhance reflection during the research process on the ‘soft impacts’ of technologies, which refer to the alterations that technologies may bring about in the quality of human life”. Much like Fisher and Lightner, van der Burg suggests that agencies require researchers to “engage a specifically trained ethicist to monitor the research process and create a feedback loop from the ethicist to the funding institution”.

While Fisher, Lightner, and van der Burg urge doing more than current peer review processes require, Erich W. Schienke et al. address the potential of BIC as it exists for transforming instruction in the responsible conduct of research. Supplementing traditional responsible conduct of research instruction with instruction on BIC provides a fuller understanding of the ethics relevant to scientific research—encouraging scientists to think not only of responsible science, but also of science as responsive to society. This article also addresses NSF’s new research ethics requirement—a response to the America COMPETES Act—in relation to BIC.

In the concluding contribution, Robert Frodeman and Jonathan Parker seek “to reframe our thinking about the broader impacts of science”—and they use BIC as an occasion for doing so. In the end, Frodeman and Parker suggest that intellectual merit and broader impact ought to be seen not as two separate criteria in conflict with one another, but rather as different types of—narrower versus broader—impacts.

S&T funding agencies find themselves in a difficult position, since they must answer to different constituencies within the scientific and technical community as well as to different constituencies in the larger society. Oversimplifying to the extreme, scientists and engineers always want more money, while societal decision‐makers always want more evidence that the money is being well spent. Many S&T funding agencies have chosen to pass some of the responsibility for making the case for public investment in S&T research on to those scientists and engineers who actually receive the funding. Is this the correct decision for funding agencies to make? Although it does not present a consensus answer to this question, this volume sums up much of what we have learned from NSF’s 12‐year experience with institutionalizing SI criteria as part of peer review.

Acknowledgements

Thanks to all the participants of the August 2007 workshop in Golden, Colorado, USA (http://www.ndsciencehumanitiespolicy.org/workshop/index.php). The conversation begun then is continuing and widening in scope. In particular, however, I would like to single out the members of the organizing committee, Mark Frankel, Robert Frodeman, and Carl Mitcham, for their invaluable contributions to making the workshop a success. All three have also continued to foster the research program we began together in 2007. Mark Frankel, Director of the Scientific Freedom, Responsibility & Law Program of the American Association for the Advancement of Science, graciously agreed to co‐sponsor the workshop. The Hennebach Program in the Humanities in the Division of Liberal Arts and International Studies at the Colorado School of Mines provided significant financial and logistical support for the workshop. I thank Robert Frodeman, Steve Fuller, and Carl Mitcham for their contributions in making this special issue come to fruition. Special thanks are also extended to Steven Hrotic for his excellent work formatting the articles. I also thank NSF: in the required official language, this material is based upon work supported by NSF under Grant No. 064957. The opinions expressed in this special issue of Social Epistemology are solely those of the authors, however, and do not necessarily represent the view of NSF.

Notes

[1] The US National Institutes of Health recently changed the scoring system used in its peer review process to include an “impact score”, but this score is used to determine impact on science rather than societal impact. The US National Institutes of Health does consider societal impact, however, in the second tier of its review system with the involvement of members of the public on its Advisory Councils/Boards. Agencies worldwide have adopted various strategies for integrating SI considerations into their funding decision process, with many following NSF’s lead and integrating SI criteria into peer review. See http://www.csid.unt.edu/research/capr.html.

[2] Available at http://www.timeshighereducation.co.uk/story.asp?storycode=404997 (accessed 23 September 2009).
