Evaluating CAPR’s Broader Impacts

Frodeman and Briggle (2016, p. 124) suggest that field philosophy requires standards of assessment different from those of traditional philosophy: “Evaluation [of field philosophy requires] context-sensitive standards for rigor, and non-disciplinary metrics for assessing success, which in the first instance is defined by one’s audience.” As discussed in the previous section, CAPR had multiple audiences with different needs. In the interests of space, in this section I will suggest ways to evaluate CAPR only in terms of an audience made up of NSF stakeholders, rather than attempting to assess CAPR from the perspective of all the agencies we engaged with. It is worth noting in passing that, evaluated by more traditional criteria—such as the number of publications produced by the grant—CAPR also did pretty well: we produced ten publications, some of which are fairly highly cited.

This is not the place to go into detail about CAPR’s scholarly impact, however. The question, insofar as CAPR is a case of field philosophy, is whether it achieved its goal vis-à-vis its audience. Put slightly differently, did CAPR manage to have a broader impact on society, specifically in terms of NSF? If so, how, and what was it?

Recall that CAPR took place while NSF was in the midst of a review of its Merit Review process. The NSB Task Force on Merit Review had been constituted in February 2010, in response, in part, to the fact that Congress had called out NSF, and specifically the Broader Impacts Criterion, in the America COMPETES Act. The fact that the Task Force had requested copies of the special issue of Social Epistemology on the Broader Impacts Criterion suggested that NSF had some interest in our research.

Also in April 2010, we held a CAPR workshop in Washington, DC. Someone from NSF gave a presentation, and there were other representatives of NSF in the audience. It is worth stating explicitly that we held the workshop in Washington, DC to invite participation from interested parties. People from six different funding agencies were on the program, and many of those agencies sent other people to the workshop. There were also attendees from several other agencies in Washington, DC. On the afternoon of April 23, after the workshop had ended, Frodeman and I also met with John Veysey, Senior Legislative Assistant to Representative Daniel Lipinski (Democrat-Illinois). Lipinski, then Chair of the Research Subcommittee of the House Science Committee, was at that time working on the next version of the America COMPETES Act, which would also have a section on NSF’s Broader Impacts Criterion.

The original America COMPETES Act had asked NSF to provide a report to Congress that would, among other things, describe the national goals the Broader Impacts Criterion was best suited to promote. Our phone call with the NSF staffer after the Golden, Colorado workshop had focused on coming up with an answer to this question. H.R. 5116, which was the initial House version of the new America COMPETES Act, had been released on April 22. It contained the following directive to NSF:

Goals.—The Foundation shall apply a Broader Impacts Review Criterion to achieve the following goals:

  1. Increased economic competitiveness of the United States.
  2. Development of a globally competitive STEM workforce.
  3. Increased participation of women and underrepresented minorities in STEM.
  4. Increased partnerships between academia and industry.
  5. Improved pre-K-12 [pre-kindergarten through twelfth grade] STEM education and teacher development.
  6. Improved undergraduate STEM education.
  7. Increased public scientific literacy.
  8. Increased national security.

(America COMPETES Act 2007, §214)

In our discussion with the House staffer, Veysey, Frodeman, and I emphasized that a simple list of goals for broader impacts might be interpreted by proposers and reviewers as exhaustive, so that they would feel limited to proposing activities that were included on the list. Veysey indicated that it might not be possible to alter the language of the Bill to remove the list, but that the Committee Report language could make clear that the list was not meant to be taken as exhaustive.

The House Committee Report (111-478), which was published in May 2010, makes this point explicitly:

The specific list of goals in subsection (a) was included in a report to Congress by the Foundation in 2008, as requested in the 2007 America COMPETES Act. The Committee chose not to amend that list developed by the Foundation in 2008. However, the Committee understands that this list may and perhaps should evolve over time, and does not intend to preclude the National Science Board from launching a more in-depth, comprehensive review of either the goals or implementation of the Foundation’s merit review criteria.

(Report of the Committee on Science and Technology [111-478] 2010, p. 109)

The final text of the America COMPETES Reauthorization Act of 2010 (Public Law 111-358) contains the identical list (from H.R. 5116) in §526. By 2011, the NSB Task Force on Merit Review was faced with the question of how to handle the fact that this list was now written into law.

Joanne Tornow, Executive Secretary of the NSB Task Force, had presented at the second CAPR workshop held in December 2010 in Brussels, titled: “EU/US workshop on peer review: Assessing ‘broader impact’ in research grant applications.” There, Tornow revealed that the Task Force aimed to complete its review of the Merit Review Process by the fall of 2011. In June 2011, the NSB released proposed revisions to the Merit Review criteria (NSB-11-42) that provided a list of nine national goals—it added ‘enhancing infrastructure’ to the list from the America COMPETES Reauthorization Act of 2010—and, for the Broader Impacts Criterion, asked, “Which national goal (or goals) is (or are) addressed in this proposal?”

This approach, of course, was contrary to the one we had recommended based on CAPR’s research. Frodeman and I published a piece in Science Progress on June 27 (Frodeman and Holbrook 2011a) in which we argued against this approach:

Under the proposed new criteria, proposers and reviewers are limited to the list of national needs. Easier? Perhaps. But unless the list is made representative and nonexhaustive, proposers will be restricted to addressing only those national goals that have appeared on the list. This also restricts the list to current national needs, tying our hands to respond to new and future challenges.

On July 8, Frodeman and I published a letter in Science (Frodeman and Holbrook 2011b, p. 158), in which we expanded on the point:

The proposed changes in the merit review criteria move too far in the direction of accountability, at the cost of scientific creativity and autonomy. The set of principles (in terms of national goals) also suffers from excessive detail at the cost of flexibility.

Finally, in September 2011, Frodeman and I published an article in Research Evaluation that was an extended comparison between NSF’s and the EC’s approaches to impact (Holbrook and Frodeman 2011). There, we contrasted the EC’s top-down approach to specifying expected impacts with NSF’s bottom-up approach of allowing proposers to suggest the sorts of impacts their research could be expected to have. We also argued that worries about the vagueness of the criterion could be offset by thinking of it as more akin to NSF’s Intellectual Merit Criterion, which is also not overly prescriptive in terms of what research proposers may perform. We sent a copy to Tornow at NSF.

On December 13, 2011, the Nature News Blog quoted John Bruer, co-chair of the NSB Task Force on Merit Review, as follows:

A National Science Foundation (NSF) task force has finalized its recommendations for tweaking the agency’s two merit review criteria, ‘intellectual merit’ and ‘broader impacts’. And central to that effort was a non-prescriptive, big-tent definition of broader impacts, says task force co-chair John Bruer, who presented the report on Tuesday to the National Science Board in Washington, DC.

“We don’t dictate what type of activities are intellectual merit,” says Bruer, president of the James McDonnell Foundation in St. Louis. “By the same token, we shouldn’t be prescriptive about what constitutes broader impacts. We’re not being overly prescriptive for either of them.”

Since 1997, the NSF has required all grant proposers to justify their requests not just on intellectual merit, but also on this notion of broader impacts. Yet researchers have found the requirements distressingly vague. Legislation passed by Congress in 2010 confirmed the importance of broader impacts, and also tried to be more specific, listing some of the activities that would count as having societal benefit. But when the task force’s May 2011 draft report dutifully repeated some of these examples, some critics worried that the NSF’s criteria would end up being too specific. Bruer’s team has since removed the list. “It raised problems about why some things were on the list and others not,” says Bruer.

(Hand 2011)

Bingo! NSB’s review and revisions of the criteria (NSB/MR-11-22) did, in fact, ditch the list. The Intellectual Merit and Broader Impacts criteria were more closely linked. And reviewers were asked to focus on the proposer’s plan to achieve broader impacts. In short, quite a few of CAPR’s suggestions had been taken to heart by the Task Force.
