Review Service in DEB, Part 2: Missed Connections

In the prior post we described the roles of panelists and ad hoc reviewers in the DEB merit review process and how others have highlighted the value of taking part in this process. We left off with an observation that has popped up in several comment threads on those other discussions: “I volunteered but no one ever called.” This post addresses why that happens.

Foremost among the reasons we haven’t called you back is a possibly startling admission: we don’t have an actual “reviewer database”. We have a reviewer records system, but it’s an old architecture, built for a smaller scientific enterprise[i], and it lacks the substantive, topical content (not even research keywords) that it would need to function as a database for identifying appropriate reviewers. Even with such content, because the system is primarily for record-keeping and not discovery, it would only search reviewers we’ve used in the past and wouldn’t help us identify “new blood.” Yes, this is incredibly, woefully behind the times compared to what many journals use[ii]. It is an issue for which NSF is trying to find a solution.
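To make the record-keeping-versus-discovery distinction concrete, here’s a minimal, purely hypothetical sketch (in Python; none of these names or structures reflect the actual NSF system). A records store can answer “who has reviewed for us before?”, but matching reviewers to topics needs a keyword index the records system simply doesn’t contain:

```python
# Purely illustrative sketch -- NOT the actual NSF records system.
# A record-keeping store indexes people by their past service...
past_reviewers = {
    "r-0001": {"name": "A. Researcher", "last_served": "2012 panel"},
}

def find_past_reviewer(name):
    # The only question this store can answer: "who reviewed before?"
    return [r for r in past_reviewers.values() if r["name"] == name]

# Discovery needs a different index entirely (topic -> people),
# which presumes keyword content the records system doesn't hold.
keyword_index = {
    "disease ecology": ["A. Researcher", "B. Scholar"],
}

def find_by_topic(keyword):
    # Returns candidates whether or not they've served before.
    return keyword_index.get(keyword, [])

print(find_past_reviewer("A. Researcher"))
print(find_by_topic("disease ecology"))
```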

The next major roadblock for self-identifying reviewers is timing. DEB programs need to find large numbers of reviewers in narrow windows of time. If you’re just sending us emails whenever the thought strikes you, they are likely landing on our desks in between the windows when we’re looking for reviewers. You can check out the review process calendar we posted previously to see when this happens for various programs. The take-home point is that we’re often looking for panelists before proposals have even been submitted (and months before a panel meets), and we look for specialist reviewers during just the few weeks when we are sending out proposals for “ad hoc” reviews. Hitting those periods can help put you in the right place at the right time, but it is no guarantee, simply because…

Any email is one among the many offers passing through our inboxes daily and is likely to have slipped out of memory, supplanted by more recent offers, by the time we have a need. Suffice it to say, there’s a lot of low-information-value noise in our inboxes: CVs without context, boilerplate introductory letters where “review” appears as an afterthought, etc.

Even with a strategically timed, well-written introduction, however, a potential reviewer may not be the best match for a current proposal or panel and, ultimately, we don’t have anywhere to put the information and retrieve it efficiently. Any system we put in place for keeping tabs on volunteer offers is going to be 1) competing with the whole of the internet to quickly identify sufficient numbers of relevant experts and 2) filled with dead ends unless it is regularly updated. Your CV might get dropped in a shared folder and might turn up in a document search while you’re still at the same job, but it can just as easily get buried under the daily deluge of emails.

While advice around the academic web to “just send in your information” reflects individual experience, it is based on a perception of cause and effect when the reality is often just coincidence. You may get a call to review from someone who never saw your email if you come up in a search result. Around the time most people are first entering the potential PI/reviewer pool, they are also developing a professional web presence. And, as we said above, without a dedicated reviewer database, we put the whole internet to work to find reviewers. So, being searchable and showing up at the right moment can make sending us any sort of introduction moot. We’ll address this further in part 3.

 

[i] Even just 10 years ago we were dealing with only ~half the proposals we see today.

[ii] We would like to think we are pretty adept at working around this information system deficiency. Between panelists and individual (ad hoc) reviewers, DEB manages to obtain some ~10,000 separate evaluations of proposals each year. And, taking some pride in facing this adversity, we note that lots of folks never notice our lack of a proposal & reviewer matching database until they come here and we tell them.

Review Service in DEB, Part 1: Panelist vs Ad hoc Reviewer

It’s been said before elsewhere that serving on an NSF panel is an eye-opening experience, not just because you gain perspective on the work that goes into a panel, but also because, as a reviewer, you learn so much about grant-writing that you can apply to your own pursuit of funding. But panels aren’t the only review opportunity in DEB.

There are two distinct roles for reviewers in DEB and you gain different experience and perspective in each role:

A panelist reviews a relatively large number of proposals, rating each one, and then participates in a multi-day discussion of each proposal’s merits. A panel usually meets at or near NSF, although virtual options are also used. For each proposal in a DEB panel, at least two other panelists are assigned to provide reviews as well. DEB tends to organize larger-than-average panels in order to tackle the broad and shifting suite of specialties and diversity of projects in our programmatic area. These panels can seem downright unwieldy in comparison to programs with more narrowly defined boundaries. It’s not unusual for a DEB panel to be made up of 20 panelists (with 3-5 Program Officers and associated staff) to tackle more than 100 proposals over 3 days; with at least three panelist reviews per proposal, that works out to roughly 15 written reviews per panelist before the discussion even begins. Schedules, more so than interest, are a major hurdle in finding panelists.

An “ad hoc” reviewer is solicited to review just one proposal at a time (rarely two) and does not attend the panel. However, for a person in high demand for specialty knowledge, 1 or 2 requests from each of several programs could quickly pile up. Therefore, our practice in DEB is to check our records for recent review requests and avoid sending multiple requests to the same individual[i]. An ad hoc reviewer completes the same review form and rates proposals on the same criteria as panelists do, but may also be asked by the Program Officer managing the review to focus on a specific aspect of the proposal they are particularly well-suited to evaluate. The individual ratings from ad hoc reviewers are provided to the panelists after the panelists have submitted their own reviews and in time for the panel discussion. Finding sufficient numbers of ad hoc reviewers who match our needs for expertise and are willing to complete reviews on a deadline has historically been the bottleneck in the review process.
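To illustrate why footnote [i]’s caveat matters, here’s a toy sketch (purely hypothetical; not the actual records system). If the recent-request check is keyed to a reviewer profile rather than a person, a second FastLane profile slips right past the throttle:

```python
# Hypothetical sketch of the "don't over-ask" check; not the real system.
# Requests are keyed by reviewer profile ID, so the throttle works
# per profile, not per person.
recent_requests = {"profile-123": ["2014-02-01"]}

def ok_to_request(profile_id):
    # Skip anyone we've asked recently (the time window is simplified away).
    return profile_id not in recent_requests

print(ok_to_request("profile-123"))  # False: asked recently
# The same person under a second FastLane profile slips through:
print(ok_to_request("profile-456"))  # True
```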

These two roles are complementary. It’s a bit of a pet peeve here when we hear ad hocs referred to as “expert reviewers” as if to imply that panelists are not. Panelists are just as “expert” as any ad hoc reviewer (and there’s nary a panelist who hasn’t themselves provided ad hoc reviews). That said, the primary role of the ad hoc reviewer is to provide a specialist’s opinion on the quality of the specific proposal. The role of the panelist is to synthesize their own evaluation with those of the ad hoc and other panelist reviewers to arrive at a consensus evaluation of the work’s value in advancing the broader discipline. The justification for having a person come to a panel versus submit an ad hoc review hinges not only on the need to balance breadth and depth in the review process but, critically, on the reviewer’s skill in handling the larger, broader panelist workload and discussion dynamic.

Of course, describing review roles and reiterating what others have said about reviewing being important, beneficial, and valuable doesn’t address the issue of becoming a reviewer. Plenty of you reading this have at one time or another emailed a Program Officer or sent us a CV after an annual meeting to volunteer as a reviewer and heard… nothing. And now many of you are probably thinking, “but I keep hearing that NSF is in desperate need of reviewers” and wondering “what’s going on?” We’re going to try to address that in part 2.

 

[i] This doesn’t work if you have multiple reviewer profiles for different programs in FastLane.

Invited Commentary: The Education Dimension of Biodiversity Science

Fletcher Halliday, a Biology PhD student at the University of North Carolina, just returned from ScienceOnline Together 2014, where he networked BioDiverse Perspectives, a graduate-student-run science blog featuring what the next generation thinks is foundational and edgy in biodiversity science. In fact, BioDiverse Perspectives debuted at last year’s ScienceOnline conference, with nice shout-outs from Jeremy Fox (Dynamic Ecology) and Marc Cadotte (EEB&Flow). Is this science? Shouldn’t Fletcher be spending his grad school time working on peer-reviewed publications? Turns out he’s doing that too, having just co-authored a paper in Ecography evaluating the stress-dominance hypothesis, along with a suite of UNC students and faculty member Alan Hurlbert. Fletcher’s experience, and that of his graduate co-authors, is part of an experimental integration of research and education funded by the National Science Foundation Dimensions of Biodiversity program: how far can we scale up a distributed graduate seminar and still produce original, innovative work?

Distributed graduate seminars (DGS) have been around for a while. A blend of networking, meta-analysis, synthetic problem-solving, and team and leadership skill development, DGSs have a practical goal of student-led peer-reviewed publication and a larger goal of revamping graduate education to more closely match the way we do science: in fluid teams composed of experts who can collectively tackle the problems science faces today. The basic DGS structure includes multiple teams, each from a different university, composed of 3-15 graduate students drawn from several departments and 1-3 facilitating faculty.

The heart of a DGS is when representatives from all teams, students and faculty alike, come together in synthesis meetings to share knowledge and skill sets and allow cross-team projects to emerge from these interactions. In this model, students lead. Faculty take a supporting role: in our DGS, faculty made themselves available for an informal session on Bayesian analysis or Rao’s Q, stood ready to pitch in as a writing workshop lead, or were on track to review the conceptual model of a newly formed cross-team. Participating faculty are socially adept graduate student mentors naturally able to teach without lecturing. They are also available for synthesis meetings, collectively well-versed in the knowledge and skill sets students need to leap forward, and, honestly, just plain fun.

Invented at NCEAS, this model of graduate education has branched into several approaches, from a more traditional (albeit online) reading and discussion group to the relative chaos of allowing students to explore their own approach to synthesizing biodiversity science: welcome to the Dimensions of Biodiversity Distributed Graduate Seminar (DBDGS).

Imagine a single project with 117 graduate students and postdocs, 24 faculty, and 3 staff tasked with bounding and baselining the genetic, functional, and taxonomic/phylogenetic dimensions of biodiversity. A project stretching across 13 universities, spread across four continents and five languages. Throw in the microbe-macrobe divide, a field-through-theory focus, and a “healthy difference” of approach from basic (biodiversity as driver) to applied (biodiversity as response variable) science. Top that off with PIs coming from an academic institution and a non-governmental organization, respectively, and you have a recipe for truly creative synthesis.

In the two and a half years since we started, we’ve held 6 synthesis meetings collectively attended by more than 200 people, held 6 writing and blogging workshops, and sponsored more than 40 graduate student papers at Ecological Society of America, American Association for the Advancement of Science, Society for Conservation Biology, and Evolution meetings. Here are the good, in fact amazing, things that result from a DGS:

  • Graduate students really can take charge and plan-execute-publish synthetic meta-analyses. We’ve already had 6 papers published, and there are at least 10 in press, in review, or in (serious) prep.
  • Along the way of realizing this academic production, students have created a cross-university and interdisciplinary network, literally the future of biodiversity science. University of North Carolina students work with University of Connecticut, Universidade Federal do Rio Grande do Sul (Brazil), and Católica University (Chile) students on synthetic approaches to trait space conceptualization. Fisheries students from Oregon State University and marine ecology students from University of California Santa Barbara figure out how to work together to create the next generation of meaningful biodiversity indicators of fishery sustainability, even if our generation of academics is still duking it out. UCSB “terrestrial” students collaborated virtually with scientists from the Tropical Ecology, Assessment and Monitoring (TEAM) Network in the Republic of Congo, Uganda, Kenya, Ecuador, Italy, the Netherlands and Costa Rica to analyze data from 11 tropical forests spread across Africa, Asia and Latin America. The list of intersections and interactions is long.
  • Truly synthetic products will emerge from the efforts of the group, even if it’s impossible to predict what all of them will be, or who will be involved in producing them. BioDiverse Perspectives is a case in point. Faculty imagined this product as a “Foundations and Frontiers of Biodiversity” book. Students allowed as how books conveyed academic gravitas but were also an increasingly out-of-date medium in a world of fast electronic communication. But a blog, that’s a different story: 130 posts, 53,400 page views, and a year later, our blog, designed and delivered by graduate students, is helping define the edge of science communication within biodiversity. And it’s recruiting non-DBDGS students to contribute regularly: 35 bloggers from 5 countries and 14 academic institutions have posted so far on everything from what drives rodent diversity at the Portal site to Peter Kareiva’s thoughts on the future of biodiversity science.

Of course, not everything we tried worked, and not every synthesis project came to such rich fruition. Here’s what’s difficult to downright hard about running a DGS:

  • Giving graduate students the reins to a team-based project is not as easy as it sounds. Turns out that true teamwork takes both time and work. Students need to learn to be leaders, and the realization that leadership doesn’t mean telling people what to do can be startling, even off-putting. What we hear now, on the back side of successful publication, is that the opportunity to really engage in team-based synthesis was one of the best parts of their graduate experience. But that’s not necessarily what they, or their advisors, said at the time…
  • Confronting the cultural barriers in the conduct of science was a major challenge. Not surprisingly, how we practice our craft in the U.S., China, Brazil, Chile, and Kenya differs. We approached nearly 10 universities in China before we found a faculty leader excited about collaborating. Most found our model too contrary to their own, in which student research is very strongly directed by senior scientists.
  • The most difficult aspect of our model was the funding disparity between national and international teams. National teams had the benefit of NSF funding for an RA. International teams didn’t have this advantage, yet these are exactly the students who could really benefit from more financial support, since central resources were lacking in some participating countries. Whereas some international students soared, many were unable to accomplish group tasks in addition to their own, often much more structured, research work.

Putting a DGS together isn’t for everyone. It takes a huge time commitment, like any large project, but without the a priori reward of multiple authorship in the peer-reviewed literature. Being the PIs put us somewhere between über moms, motivational speakers, and crazy people. It’s definitely not for the early-career scientist or for those whose principal motivation is a focus on excellent primary science. On the other hand, as Chief Scientist of an international conservation NGO, the DBDGS enabled me to engage with some of the brightest young conservationists worldwide, and especially provided a mechanism to connect developing-country scientists working in remote tropical forests with an international science network. And as an Associate Dean for Academic Affairs and Diversity, it’s been a fabulous chance to have a hand in shaping one direction of graduate education. The era of the single PI creating and publishing solo work is largely over. We work in groups, both inside and outside of academe. Why not make that experience de rigueur, if not central, to graduate education?

Julia K. Parrish
Associate Dean, College of the Environment
University of Washington

Sandy Andelman
Chief Scientist
Conservation International

PIs – Dimensions of Biodiversity Distributed Graduate Seminar (DBDGS)

 

Program Announcement: Two Recent Dear Colleague Letters on the NSF BIO site

There are two new Dear Colleague Letters (DCLs) listed under “Special Announcements” on the NSF BIO homepage. Neither of these notices involves submissions to DEB; however, we think the subject matter of these letters may be of interest to some of our readers. Even if not directly applicable to you as a “DEB” PI, both DCLs overlap with topics we’ve seen in interdisciplinary collaborations, so we know you know people who would be interested in these. Pass it along.

Beyond the potential for funding, these are also worth mentioning in the context of previous posts about the diversity of funding opportunities at NSF.

 

A Dear Colleague Letter for BRAIN EAGERs to Enable Innovative Neurotechnologies to Reveal the Functional and Emergent Properties of Neural Circuits Underlying Behavior and Cognition has been posted (March 2014). This letter invites relevant submissions to IOS and DBI (the Division of Integrative Organismal Systems and the Division of Biological Infrastructure).

The BRAIN EAGERs Dear Colleague Letter is a fairly typical Dear Colleague Letter announcement. It does two things:

1. provides background on a topic that NSF wants to emphasize (in this case the President’s BRAIN Initiative)

2. points applicants to a pre-existing formal funding opportunity that can process the request (in this case the Early Concept Grants for Exploratory Research, EAGER, mechanism within the NSF Grant Proposal Guide).

This exemplifies how Dear Colleague Letters aren’t themselves opportunities for funding; rather, they are soft calls to advertise how an existing funding opportunity meets the needs of particularly relevant or timely areas of research.

 

Special guidelines for submitting collaborative proposals under the US NSF/BIO – UK BBSRC Lead Agency Pilot Opportunity have been posted (March 2014). This pilot opportunity between NSF BIO and the UK Biotechnology and Biological Sciences Research Council (BBSRC) invites relevant submissions to MCB (the Division of Molecular and Cellular Biosciences) and DBI.

The US NSF/BIO – UK BBSRC Lead Agency Pilot Dear Colleague Letter is interesting because it does contain something “new” but doesn’t actually diverge from established review processes.

This letter announces an agreement between NSF BIO (MCB and DBI) and the UK’s BBSRC to pilot an international collaborative opportunity in which the agencies recognize each other’s process for reviewing proposals. Right now, most international collaboration requires each side to obtain funding separately from the home country. This “Lead Agency” model pilot enables a team of US and UK researchers to submit a single proposal to just one agency (but with separate US and UK budgets). The agencies will share the proposal information and cooperate through a single review process (hosted by whichever side is the larger part of the effort) to avoid the “double jeopardy” of having each side run independent reviews. If successful, each agency would fund its own investigators. Paperwork is only submitted to the non-Lead agency after both agencies have agreed to fund the project.

How is this unlike a normal submission to either program? A few ways:

1. your team needs to get in touch with the program and provide some documentation before you submit a proposal so both countries can make sure that the project fits within their funding mandates and that the appropriate agency has been selected as “lead”.

2. your submission needs to fully describe both sides of the project in the single proposal, and this Dear Colleague Letter specifies how to do that.

3. your NSF proposal budget would cover project costs for the US investigators and a supplemental document would describe project costs for the UK investigators, and vice versa for BBSRC proposals.

How then is this not a different mechanism?

Everything else follows the rules for a normal submission to the Lead agency (US or UK): it’s due on the same deadline, follows the same proposal format, and goes through the same standard peer review process as something submitted to that program without international partners.

In this case, the Dear Colleague Letter exemplifies the use of this mechanism to pilot a new[i] idea without making major revisions to a formal funding opportunity.


[i] The actual merit review criteria and process for the agency handling review are unchanged. The only thing truly “new” is that both agencies hashed out a clear process and agreed to try promoting it.

Happy Taxonomist Appreciation Day!

It’s March 19th. A year ago, when we were just finding our bloggy footing, another newcomer to the biology blog scene made the radical proposal that on this day all biologists should take a moment to say thanks to their colleagues developing the taxonomic and systematic knowledge on which we all build our understanding of the diversity of life.

We think this is a wonderful idea!

So, thanks to Maureen, Simon, Joe, Judy, and David, our current DEB Systematics and Biodiversity Science Cluster, for serving as stewards of support to these fields.

Thanks to all those who have served, and who continue to serve, as Program Officers, Experts, and Administrative staff helping to manage our programs supporting taxonomy, systematics, species discovery, phylogenetics, and collections.

Thanks to the legions of reviewers who have contributed their expertise in identifying the best proposals in taxonomy and systematics for funding since the earliest days of NSF and to the PIs who are not just describing new species but pioneering new ways to do the work and share it with the scale and efficiency suited to the challenges faced by global biodiversity.

Here’s our minimally curated list of ways to honor your lumpers and splitters:

Visit your favorite species and share its naming history (#loveyourtaxonomist). (And if you don’t have a favorite species, take your cue from the inspiration for this day and learn about an ant.)

Check out the effort to create a unified index of all described species (>1.5 million species catalogued so far).

Share some taxonomy humor, courtesy of Buzz Hoot Roar.

Got some time on your hands? Notes from Nature is enlisting volunteers to transcribe physical specimen labels into digitally accessible information.

And, for those of you with no taxonomic background whatsoever, check out this plain-language overview of the field, its importance, and the global challenges to the development and preservation of taxonomic knowledge from the Convention on Biological Diversity, Global Taxonomic Initiative.

FastLane Review Score Options

Comments at a recent panel brought to our attention[i] one nugget of reviewer wisdom that we were surprised to learn isn’t widely known, so it makes sense to share it here with the many of you taking part in NSF reviews. Since review assignments for the latest round of preliminary proposals are making their way into the community, we thought it timely to post a quick explanation of the less-advertised options for rating the proposals you’ve been asked to review.

When you review an NSF proposal, you don’t need to give it a single letter score of E (excellent), V (very good), G (good), F (fair), or P (poor). In the reviewer system (through FastLane) you can check more than one box for “Overall Rating” to give a score between two of the ranks, like V/G or G/F, when the 5-point system feels too coarse. Whether you check one box or two, however, the purpose is to capture a single “Overall Rating”. In other words, we ask reviewers to synthesize their evaluations of intellectual merit and broader impacts into a single score. Therefore, if a reviewer provides a split score, we (and the PI) view it as indicating a score that is in between the two categories. A split score that spans more than two adjacent ratings, or is meant to reflect different scores for different aspects of the proposal, is not especially useful, since we don’t know how the reviewer rated the overall proposal on balance.

On the flip side, be careful if you check the wrong box while selecting a score: FastLane doesn’t automatically clear your first choice when you make another selection, creating the potential for unintended scores like “V/G/F/P”.
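To spell out the logic, here’s a hedged sketch (in Python) of one way to turn checked boxes into a single overall rating; the 5-to-1 numeric scale and the adjacency check are our illustrative assumptions, not an official NSF conversion or FastLane’s actual behavior:

```python
# Hedged illustration: mapping checked rating boxes to one overall score.
# The 5-to-1 scale is our assumption, not an official NSF conversion.
SCALE = {"E": 5, "V": 4, "G": 3, "F": 2, "P": 1}

def overall_rating(boxes):
    """Interpret checked rating boxes, e.g. ["V", "G"] -> 3.5."""
    values = sorted(SCALE[b] for b in boxes)
    if len(values) == 1:
        return float(values[0])
    if len(values) == 2 and values[1] - values[0] == 1:
        return sum(values) / 2  # adjacent split: read as the midpoint
    # Wider splits (e.g. "V/G/F/P") are ambiguous on balance.
    raise ValueError("Ambiguous split score: " + "/".join(boxes))

print(overall_rating(["V", "G"]))  # 3.5
```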

Generally, the written content of the review matters more than the rating score: we don’t have an average-score-based “funding line”. Nonetheless, scores aren’t ignored: they’re a concise indicator of a reviewer’s opinion and can be really helpful for interpreting the written content. Scores are incredibly useful in managing panel discussion because they allow us to compare general opinions and quickly see whether the reviewers are all starting from a similar place or whether there may be divergent views to work through. Being judicious in your assignment of scores can also help you, as a reviewer/panelist, differentiate among your many assignments and remember them through hours of discussion. On rare occasions, a reviewer may opt not to provide an overall rating at all and just provide the written comments. While acceptable, we discourage doing this on a regular basis and expect to see only a handful each year out of 10,000+ reviews.
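As a purely illustrative sketch of that “similar place versus divergent views” check (the numeric values and the threshold below are our assumptions, not a DEB rule), the question reduces to the spread of scores on each proposal:

```python
# Illustrative only: flag proposals whose reviewer scores diverge.
# Scores assume the same hypothetical 5-to-1 scale sketched above;
# the threshold of 2 is an arbitrary assumption, not a DEB rule.
def score_spread(scores):
    # Spread = distance between the highest and lowest rating.
    return max(scores) - min(scores)

panel_scores = {"prop-A": [5, 4.5, 4], "prop-B": [5, 3, 1.5]}
for prop, scores in panel_scores.items():
    tag = "divergent" if score_spread(scores) >= 2 else "similar"
    print(prop, tag)  # prop-A similar; prop-B divergent
```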

Keep these tips in mind next time you’re reviewing for us.

~~mentally insert your favorite Public Service Announcement catchphrase and jingle here~~


[i] From our side of the proposal and review process, we’re not always privy to how the agency-wide, externally facing systems like FastLane are displaying your options and instructions: this is why there’s a dedicated helpdesk for FastLane issues at 1-800-673-6188 and an extensive online help resource.

What we’re up to right now in DEB: February 2014

  • Synthesizing panel recommendations for Doctoral Dissertation Improvement Grants (491 proposals across the 4 clusters)
  • Reviewing and processing supplement requests (~200 REU, RET, ROA, and RAHSS requests)
  • Finalizing participation and assigning reviews for preliminary proposal panels (~5000 individual reviews distributed across 200+ panelists on 10 different panels; see the comparison below)
  • Waiting on final 2014 budget numbers to reach the program level (and providing information to support the 2015 budget request)
  • Managing winter/spring special program reviews in between winter storms (a huge thanks to the hardy panelists who worked on through a DC-paralyzing foot of snow last week)

Preliminary Proposals Sent for DEB Review:

FY2012: 1626
FY2013: 1629
FY2014: 1636[i]

[i] Pending any incomplete transfers, withdrawals, or returns without review.