The Comparative Assessment of Peer Review as Field Philosophy

CAPR was a natural follow-up to the workshop. We had discovered that funding agencies around the world had not only incorporated societal impacts criteria into their own peer review processes but had also experienced resistance to those criteria from proposers and reviewers. CAPR examined six different approaches to incorporating societal impacts considerations: those of three US federal agencies—the NSF, the National Institutes of Health (NIH), and the National Oceanic and Atmospheric Administration (NOAA)—and of three non-US agencies—the European Commission (EC) Framework Programmes, focusing on the seventh (FP7), the Natural Sciences and Engineering Research Council of Canada (NSERC), and the Dutch Technology Foundation (STW). In addition to speaking with officials at each of these agencies, we held two workshops—one in Washington, DC and the other in Brussels—to bring researchers who study peer review together with funding agency officials so that the two groups could engage each other.

Frodeman and Briggle (2016, p. 124) suggest that there are six “definitive characteristics” of the field philosopher:

  • Goal: help excavate, articulate, discuss, and assess the philosophical dimensions of real-world policy problems.
  • Approach: pursue case-based research at the meso-level that begins with problems as defined and contested by the stakeholders involved.
  • Audience: the primary audience consists of non-disciplinary stakeholders faced with a live problem. Knowledge is produced in the context of use.
  • Method: rather than a method, we speak of rules of thumb, a pluralistic and context-sensitive approach with a bottom-up orientation.
  • Evaluation: context-sensitive standards for rigour, and non-disciplinary metrics for assessing success, which in the first instance is defined by one’s audience.
  • Institutional placement: field philosophy resides on the margins of existing institutions, shuttling between the academy and the larger world; but also seeks to institutionalize itself both within academia and different communities of practice.

I will reserve the question of evaluation for the following section and return to the question of institutional placement at the end of this chapter. In the present section, I discuss how well CAPR’s goal, approach, audience, and method fit with Frodeman and Briggle’s description of field philosophy.

CAPR’s goal was to help policymakers in government and government agencies around the world address issues surrounding the incorporation of societal impacts criteria into the peer review of grant proposals.

It was Congress that had directed NSF to contract with NAPA to issue its 2001 report on the Broader Impacts Criterion (Holbrook 2012). After the Broader Impacts workshop in Golden, Frodeman, Mitcham, and I had a phone conversation with an NSF official, who wanted to know what “national needs” the Broader Impacts Criterion was best suited to meet. Just after the workshop, Frodeman and I had published an article in Professional Ethics Report (Holbrook and Frodeman 2007), which was edited by Frankel under the auspices of the AAAS. One passage from that piece is worth quoting here:

In a development that occurred independently of our research workshop planning, on August 9, 2007, the America COMPETES Act (H.R. 2272) was signed into law (Public Law 110-69). The America COMPETES Act (in Section 7022) requires the Director of NSF to issue a report to Congress “on the impact of the broader impacts grant criterion used by the Foundation” within one year of the date of enactment of the Act.... Although America COMPETES was not part of the original motivation for our workshop, workshop participants did discuss its impending passage and agreed that our research represents an important potential source of independently gathered information of value to NSF and the Director in answering this Congressional charge. Some participants suggested that our discussions might also be of interest to members of Congress.

(Holbrook and Frodeman 2007, p. 3)

Whether we had any information that could be helpful in writing this report to Congress was not discussed during the phone call. Instead, the NSF official proposed various candidate “national needs” for our comment. Although we allowed that some of the proposed national needs could conceivably be met by activities that would satisfy the Broader Impacts Criterion, our judgment at the time was that such a list would unnecessarily stifle proposers’ creativity in responding to national needs, and that national needs are not themselves predictable enough to be contained by a list. The list might be incomplete, or our needs might change. We suggested that the criterion as then written—asking, as it did, for potential benefits to society—was already well-suited to allowing for all sorts of activities that would meet the widest possible scope of national needs. The NSF official thanked us for our input and said goodbye. We moved on to working with other agencies that had indicated a need to address the issue in their own contexts.

CAPR’s approach was to treat each of the agencies we studied as its own case, although we later came to offer comparisons of the different cases as a way of bringing the issues faced by different agencies into greater relief. We engaged with the agencies we studied in various ways. NSF, it turns out, was trying to figure out how best to respond to the America COMPETES Act. In February 2010, the National Science Board (NSB), the governing board of NSF, created a Task Force on Merit Review to re-examine NSF’s approach to Broader Impacts, including the merit review criteria. Joanne Tornow, Executive Secretary of the NSB Task Force, contacted us to request 25 copies of the special issue of Social Epistemology that we had published in 2009 (Holbrook 2009) as a product of the Colorado workshop. Given the success of that previous grant, we held another workshop as part of CAPR, this time in Washington, DC, to allow for participation from NSF staff. Visiting Europe, we discovered that the EC was then making plans for its next Framework Programme (which later became Horizon 2020) and was interested in examining its own approach to societal impact. We also visited other European agencies interested in discussing the same issues and were asked by the EU to hold a workshop in Brussels as well. We invited Tornow to attend the Brussels workshop, which she did.

CAPR’s audience had become a mixed group of scholars working in STS and Science Policy circles, together with funding agency officials from around the world. The workshops in Washington, DC and Brussels brought these groups together so that academics could hear the concerns of funding agency officials. Not surprisingly, different agency officials had different concerns, many of which were quite specific to the contexts in which their own agencies operated.

CAPR’s method was not to focus on producing ‘generalizable knowledge’ designed to fit every possible situation. Instead, we attempted to tailor our approach to various contexts. Sometimes, comparisons were helpful, and often different agency officials learned as much from each other as they did from us. While our role was to provide the occasion for meetings to take place, having academics there altered the context—these were still scholarly workshops, not business meetings. We did discover some general rules, though. For instance, vague criteria work best when an agency wants to maximize the creativity and autonomy of proposers. When an agency wants to make sure that specific impacts are targeted by funded proposals, it must specify what impacts are expected. This sort of intervention enabled agencies to see that they were making decisions about what they valued, rather than simply about policies to meet ends determined by legislators (who may not be very well acquainted with different funding agency cultures). Overall, I would characterize our ‘method’ as engaging (with) people in thinking. In other words, I do not think what we did actually counts as a method, but rather as a manner.2

I will return to this distinction between method and manner in the final section of this chapter, since I think it has implications for attempts to institutionalize field philosophy. But, at this point, I think CAPR fits rather well with Frodeman and Briggle’s account of field philosophy. The goal was to address the philosophical aspects of—and, in particular, the values embedded in—different approaches to incorporating societal impacts considerations into the peer review of grant proposals. The approach was case-based and began with problems as defined by stakeholders in proposal peer review, with a focus on funding agency officials. The audience was a mix of academics and funding agency officials. And the method was context-sensitive and ‘bottom-up,’ in the sense that we took our lead from the stakeholders, rather than trying to fit them into some sort of preordained theory.
