Targeted, actionable and fair: Reviewer reports as feedback and its effect on ECR career choices
Previous studies of the use of peer review for the allocation of competitive funding have concentrated on questions of efficiency and how to make the 'best' decision, by ensuring that successful applicants are also the more productive or visible in the long term. This paper examines how the components of feedback received on an unsuccessful grant application are associated with motivating applicants' career decisions to persist (reapply for funding) or to switch (not reapply, or else leave academia). The study combined data from interviews with unsuccessful ECR applicants (n = 19) to the Wellcome Trust, 2009-19, with manual coding of the reviewer comments those applicants received (n = 81). All applicants received feedback on their application, and a large proportion of unsuccessful applicants reapplied for funding. Here, peer-review comments-as-feedback send signals to applicants that encourage them either to continue or not to continue even when the initial application has failed. Feedback that unsuccessful applicants identified as motivating their decision to resubmit had three characteristics: it was actionable, targeted, and fair. The results lead to the identification of feedback standards for funding agencies and peer reviewers to promote as part of the peer review process. The provision of quality reviewer reports as feedback to applicants ensures that peer review acts as a participatory research governance tool focused on supporting the development of individuals and their future research plans.
Describing the state of a research network: A mixed methods approach to network evaluation
The Diabetes Action Canada Strategy for Patient-Oriented Research (SPOR) Network in Chronic Disease was formed in 2016 and is funded primarily through the Canadian Institutes of Health Research (CIHR). We propose a novel mixed-methods approach to network evaluation that integrates the State of Network Evaluation framework and the Canadian Academy of Health Sciences (CAHS) preferred framework and indicators. We measure the key network themes of connectivity, health, and results, together with the impact and return on investment associated with health research networks. Our methods consist of a longitudinal cross-sectional network survey of members and social network analysis to examine Network Connectivity, assessing the frequency of interactions, the topics discussed during them, and how effectively the Network facilitates interactions and collaboration among members. Network Health will be evaluated through semistructured interviews, a membership survey inquiring about satisfaction and experience with the Network, and a review of documentary sources related to funding and infrastructure to evaluate Network Sustainability. Finally, we will examine Network Results and Impact using the CAHS preferred framework and indicators to measure returns on investment in health research across the five domains of the CAHS framework: advancing knowledge, capacity building, informing decision making, health impact, and economic and social impact. Indicators will be assessed with various methods, including bibliometric analyses, review of relevant documentary sources (annual reports), member activities informing health and research policy, and Patient Partner involvement. The Network Evaluation will provide members and stakeholders with information for planning, improvements, and funding of future Network endeavors.
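The connectivity component lends itself to a standard social network analysis. The sketch below assumes the survey responses have already been reduced to an edge list of member-to-member ties with frequency and topic attributes; the member names, attribute fields, and indicators chosen (density, degree centrality, components) are illustrative assumptions, not the Network's actual data or code.

```python
# A minimal sketch of the social-network-analysis step, assuming survey
# responses have been reduced to member-to-member interaction ties.
# All names and data below are hypothetical.
import networkx as nx

# Hypothetical edge list: (respondent, contact, {attributes from the survey})
ties = [
    ("member_A", "member_B", {"freq_per_month": 4, "topic": "patient engagement"}),
    ("member_A", "member_C", {"freq_per_month": 1, "topic": "data sharing"}),
    ("member_B", "member_D", {"freq_per_month": 2, "topic": "retinopathy screening"}),
]

G = nx.Graph()
G.add_edges_from(ties)

# Connectivity indicators commonly reported in network evaluations
print("density:", nx.density(G))
print("degree centrality:", nx.degree_centrality(G))
print("components:", nx.number_connected_components(G))
```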
Evaluating the Revised National Institutes of Health Clinical Trial Definition Impact on Recruitment Progress
The National Institutes of Health (NIH) announced a revised, expanded definition of "clinical trial" in 2014 to improve trial identification and administrative compliance. Some stakeholders voiced concerns that the policy added administrative burden, potentially slowing research progress.
Transdisciplinary research outcomes based on the Transdisciplinary Research on Energetics and Cancer II initiative experience
Intractable public health problems are influenced by interacting multi-level factors. Dynamic research approaches in which teams of scientists collaborate beyond traditional disciplinary, institutional, and geographic boundaries have emerged as promising strategies to address pressing public health priorities. However, little prior work has identified, defined, and characterized the outcomes of transdisciplinary (TD) research undertaken to address public health problems. Through a mixed methods approach, we identify, define, and characterize TD outcomes and their relevance to improving population health using the Transdisciplinary Research on Energetics and Cancer (TREC) II initiative as a case example. In Phase I, TREC II leadership (n = 10) identified nine initial TD outcomes. In Phase II (web-based survey; n = 23) and Phase III (interviews, n = 26; focus groups, n = 23), TREC members defined and characterized each outcome. The resulting nine outcomes are described. The nine complementary TD outcomes can be used as a framework to evaluate progress toward impact on complex public health problems. Strategic investment in infrastructure that supports team development and collaboration, such as a coordination center, cross-center working groups, annual funded developmental projects, and face-to-face meetings, may foster achievement of these outcomes. This exploratory work provides a basis for the future investigation and development of quantitative measurement tools to assess the achievement of TD outcomes that are relevant to solving multifactorial public health problems.
A framework for coordination center responsibilities and performance in a multi-site, transdisciplinary public health research initiative
Funding bodies in the USA and abroad are increasingly investing in transdisciplinary research, i.e. research conducted by investigators from different disciplines who work to create novel theoretical, methodological, and translational innovations to address a common problem. Transdisciplinary research presents additional logistical and administrative burdens, yet few models of successful coordination have been proposed or substantiated, nor have performance outcomes or indicators been established for transdisciplinary coordination. This work uses the NIH-funded Transdisciplinary Research on Energetics and Cancer (TREC) Centers Initiative as a case study to put forward a working framework of transdisciplinary research coordination center (CC) responsibilities and performance indicators. We developed the framework using a sequential mixed methods study design. TREC CC functions and performance indicators were identified through key-informant interviews with CC personnel and then refined through a survey of TREC research center and funding agency investigators and staff. The framework included 23 TREC CC responsibilities spanning five functional areas (leadership and administration; data and bioinformatics; developmental projects; education and training; and integration and self-evaluation), along with 10 performance outcomes and 26 corresponding performance indicators for transdisciplinary CCs. Findings revealed high levels of agreement about CC responsibilities and performance metrics across CC members and constituents. The success of multi-site, transdisciplinary research depends on effective research coordination. The functions identified in this study help clarify the essential responsibilities of transdisciplinary research CCs and the indicators of their success. Our framework adds new dimensions to the notion of identifying and assessing CC activities that may foster transdisciplinarity.
An evaluation of the National Institutes of Health Early Stage Investigator policy: Using existing data to evaluate federal policy
To assist new scientists in the transition to independent research careers, the National Institutes of Health (NIH) implemented an Early Stage Investigator (ESI) policy beginning with applications submitted in 2009. During the review process, the ESI designation segregates applications submitted by investigators who are within 10 years of completing their terminal degree or medical residency from applications submitted by more experienced investigators. Institutes/centers can then give special consideration to ESI applications when making funding decisions. One goal of this policy is to increase the probability of newly emergent investigators receiving research support. Using optimal matching to generate comparable groups pre- and post-policy implementation, generalized linear models were used to evaluate the ESI policy. Due to the lack of a control group, existing data from 2004 to 2008 were leveraged to infer causality of the ESI policy's effects on the probability of funding applications from 2011 to 2015. This article addresses the statistical necessities of public policy evaluation, finding that administrative data can serve as a control group when proper steps are taken to match the samples. Not only did the ESI policy stabilize the proportion of NIH-funded newly emergent investigators, but, in the absence of the ESI policy, 54% of newly emergent investigators would not have received funding. This manuscript is important as a demonstration of ways in which existing data can be modeled to evaluate new policy in the absence of a control group, forming a quasi-experimental design to infer causality when evaluating federal policy.
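To illustrate the quasi-experimental logic described here, the following sketch pairs simulated post-policy applications with pre-policy applications on observed covariates and fits a logistic GLM to funding outcomes. The column names, the synthetic data, and the use of simple nearest-neighbour matching (in place of optimal matching) are all assumptions for illustration, not the authors' pipeline.

```python
# A minimal sketch: match post-policy to pre-policy applications, then model
# funding probability with a logistic GLM. Data and columns are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 200
apps = pd.DataFrame({
    "post_policy": rng.integers(0, 2, n),        # 1 = submitted 2011-2015
    "esi": rng.integers(0, 2, n),                # 1 = early stage investigator
    "prior_pubs": rng.poisson(5, n),
    "priority_score": rng.normal(30, 8, n),
})
apps["funded"] = rng.binomial(1, 0.2 + 0.1 * apps["esi"] * apps["post_policy"])

covars = ["prior_pubs", "priority_score"]
pre, post = apps[apps.post_policy == 0], apps[apps.post_policy == 1]

# Match each post-policy application to its nearest pre-policy neighbour
nn = NearestNeighbors(n_neighbors=1).fit(pre[covars])
_, idx = nn.kneighbors(post[covars])
matched = pd.concat([post, pre.iloc[idx.ravel()]]).reset_index(drop=True)

# Logistic GLM: does the ESI-by-period interaction shift funding probability?
X = sm.add_constant(matched[["esi", "post_policy"]].assign(
    esi_post=matched["esi"] * matched["post_policy"]))
model = sm.GLM(matched["funded"], X, family=sm.families.Binomial()).fit()
print(model.summary())
```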
'Your comments are meaner than your score': score calibration talk influences intra- and inter-panel variability during scientific grant peer review
In scientific grant peer review, groups of expert scientists meet to engage in the collaborative decision-making task of evaluating and scoring grant applications. Prior research on grant peer review has established that inter-reviewer reliability is typically poor. In the current study, experienced reviewers for the National Institutes of Health (NIH) were recruited to participate in one of four constructed peer review panel meetings. Each panel discussed and scored the same pool of recently reviewed NIH grant applications. We examined the degree of intra-panel variability in panels' scores of the applications before versus after collaborative discussion, as well as the degree of inter-panel variability. We also analyzed videotapes of reviewers' interactions for instances of one particular form of discourse, Score Calibration Talk, as one factor influencing the variability we observed. Results suggest that although reviewers within a single panel agree more following collaborative discussion, different panels agree less after discussion, and that Score Calibration Talk plays a pivotal role in scoring variability during peer review. We discuss the implications of this variability for the scientific peer review process.
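To make the intra- versus inter-panel comparison concrete, here is a small sketch with entirely synthetic scores; the tidy data layout and the convergence behaviour built into the simulation are assumptions, not the study's data or analysis code.

```python
# A minimal sketch: compare reviewer spread within panels (intra-panel) and
# spread of panel means across panels (inter-panel), pre vs. post discussion.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
rows = []
for panel in range(4):
    for app in range(8):
        true_merit = rng.normal(5, 1)
        for reviewer in range(3):
            pre = true_merit + rng.normal(0, 1.0)
            # after discussion, reviewers converge toward a panel-specific view
            post = 0.5 * pre + 0.5 * (true_merit + rng.normal(panel * 0.3, 0.4))
            rows.append({"panel": panel, "application": app,
                         "pre_score": pre, "post_score": post})
scores = pd.DataFrame(rows)

# Intra-panel variability: average reviewer spread within a panel per application
intra = scores.groupby(["panel", "application"])[["pre_score", "post_score"]].std().mean()

# Inter-panel variability: spread of panel-level mean scores per application
panel_means = scores.groupby(["panel", "application"])[["pre_score", "post_score"]].mean()
inter = panel_means.groupby("application").std().mean()

print("mean intra-panel SD:\n", intra)
print("mean inter-panel SD of panel means:\n", inter)
```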
Greatest 'HITS': A new tool for tracking impacts at the National Institute of Environmental Health Sciences
Evaluators of scientific research programs have several tools to document and analyze products of scientific research, but few tools exist for exploring and capturing the impacts of such research. Understanding impacts is beneficial because it fosters a greater sense of accountability and stewardship for federal research dollars. This article presents the High Impacts Tracking System (HITS), a new approach to documenting research impacts that is in development at the National Institute of Environmental Health Sciences (NIEHS). HITS is designed to help identify scientific advances in the NIEHS research portfolio as they emerge, and provide a robust data structure to capture those advances. We have downloaded previously un-searchable data from the central NIH grants database and developed a robust coding schema to help us track research products (going beyond publication counts to the content of publications) as well as research impacts. We describe the coding schema and key system features as well as several development challenges, including data integration, development of a final data structure from three separate ontologies, and ways to develop consensus about codes among program staff.
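To make the idea of a coding schema concrete, here is a minimal sketch of how a tracked advance might be represented as a structured record; the fields, controlled vocabularies, and grant number are illustrative assumptions, not the actual HITS schema.

```python
# A minimal, hypothetical sketch of an impact-tracking record.
from dataclasses import dataclass, field

PRODUCT_CODES = {"publication", "method", "dataset", "patent"}
IMPACT_CODES = {"informed_regulation", "changed_clinical_practice",
                "media_coverage", "community_intervention"}

@dataclass
class TrackedAdvance:
    grant_number: str
    description: str
    product_codes: set = field(default_factory=set)
    impact_codes: set = field(default_factory=set)

    def add_code(self, code: str) -> None:
        """File the code under products or impacts, rejecting unknown terms."""
        if code in PRODUCT_CODES:
            self.product_codes.add(code)
        elif code in IMPACT_CODES:
            self.impact_codes.add(code)
        else:
            raise ValueError(f"unknown code: {code}")

advance = TrackedAdvance("R01ES000000", "Biomarker linked to arsenic exposure")
advance.add_code("publication")
advance.add_code("informed_regulation")
print(advance)
```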
Piloting an approach to rapid and automated assessment of a new research initiative: Application to the National Cancer Institute's Provocative Questions initiative
Funders of biomedical research are often challenged to understand how a new funding initiative fits within the agency's portfolio and the larger research community. While traditional assessment relies on retrospective review by subject matter experts, it is now feasible to design portfolio assessment and gap analysis tools leveraging administrative and grant application data that can be used for early and continued analysis. We piloted such methods on the National Cancer Institute's Provocative Questions (PQ) initiative to address key questions regarding diversity of applicants; whether applicants were proposing new avenues of research; and whether grant applications were filling portfolio gaps. For the latter two questions, we defined measurements called focus shift and relevance, respectively, based on text similarity scoring. We demonstrate that two types of applicants were attracted by the PQs at rates greater than or on par with the general National Cancer Institute applicant pool: those with clinical degrees and new investigators. Focus shift scores tended to be relatively low, with applicants not straying far from previous research, but the majority of applications were found to be relevant to the PQ the application was addressing. Sensitivity to comparison text and inability to distinguish subtle scientific nuances are the primary limitations of our automated approaches based on text similarity, potentially biasing relevance and focus shift measurements. We also discuss potential uses of the relevance and focus shift measures including the design of outcome evaluations, though further experimentation and refinement are needed for a fuller understanding of these measures before broad application.
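The abstract does not specify the text similarity method; as a hedged illustration only, the sketch below computes "relevance" as TF-IDF cosine similarity between an application and a PQ, and "focus shift" as one minus the similarity between the application and the applicant's prior work. The example texts and the TF-IDF choice are assumptions, not the Institute's implementation.

```python
# A minimal sketch of text-similarity scoring in the spirit of the
# "relevance" and "focus shift" measures. Texts and method are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

pq_text = "What molecular mechanisms link obesity to cancer risk?"               # the PQ
application = "We will test how adipose tissue inflammation drives tumor growth."
prior_work = "Our prior studies characterized adipose tissue inflammation in mice."

vec = TfidfVectorizer(stop_words="english")
tfidf = vec.fit_transform([pq_text, application, prior_work])

relevance = cosine_similarity(tfidf[1], tfidf[0])[0, 0]         # application vs. PQ
focus_shift = 1 - cosine_similarity(tfidf[1], tfidf[2])[0, 0]   # distance from prior work
print(f"relevance={relevance:.2f}, focus shift={focus_shift:.2f}")
```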
Measuring the evolution and output of cross-disciplinary collaborations within the NCI Physical Sciences-Oncology Centers Network
Development of effective quantitative indicators and methodologies to assess the outcomes of cross-disciplinary collaborative initiatives has the potential to improve scientific program management and scientific output. This article highlights an example of a prospective evaluation that has been developed to monitor and improve progress of the National Cancer Institute Physical Sciences-Oncology Centers (PS-OC) program. Study data, including collaboration information, was captured through progress reports and compiled using the web-based analytic database: Interdisciplinary Team Reporting, Analysis, and Query Resource. Analysis of collaborations was further supported by data from the Thomson Reuters Web of Science database, MEDLINE database, and a web-based survey. Integration of novel and standard data sources was augmented by the development of automated methods to mine investigator pre-award publications, assign investigator disciplines, and distinguish cross-disciplinary publication content. The results highlight increases in cross-disciplinary authorship collaborations from pre- to post-award years among the primary investigators and confirm that a majority of cross-disciplinary collaborations have resulted in publications with cross-disciplinary content that rank in the top third of their field. With these evaluation data, PS-OC Program officials have provided ongoing feedback to participating investigators to improve center productivity and thereby facilitate a more successful initiative. Future analysis will continue to expand these methods and metrics to adapt to new advances in research evaluation and changes in the program.
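One of the automated steps described, distinguishing cross-disciplinary authorship, can be illustrated by counting co-author pairs whose assigned disciplines differ, before versus after the award. The discipline labels and publication lists below are invented; this is not the PS-OC pipeline.

```python
# A minimal sketch: share of co-author pairs that cross disciplinary lines,
# pre- versus post-award. All names and records are hypothetical.
from itertools import combinations

discipline = {"Lee": "physics", "Park": "oncology", "Diaz": "physics", "Wu": "biostatistics"}

pubs = {
    "pre_award": [["Lee", "Diaz"], ["Park", "Wu"]],
    "post_award": [["Lee", "Park"], ["Lee", "Park", "Wu"], ["Diaz", "Wu"]],
}

for period, papers in pubs.items():
    cross = 0
    total = 0
    for authors in papers:
        for a, b in combinations(authors, 2):
            total += 1
            cross += discipline[a] != discipline[b]
    print(f"{period}: {cross}/{total} co-author pairs are cross-disciplinary")
```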
Modeling the dissemination and uptake of clinical trials results
A select set of highly cited publications from the National Institutes of Health (NIH) HIV/AIDS Clinical Trials Networks was used to illustrate the integration of time interval and citation data, modeling the progression, dissemination, and uptake of primary research findings. Following a process marker approach, the pace of initial utilization of this research was measured as the time from trial conceptualization, development and implementation, through results dissemination and uptake. Compared to earlier studies of clinical research, findings suggest that select HIV/AIDS trial results are disseminated and utilized relatively rapidly. Time-based modeling of publication results as they meet specific citation milestones enabled the observation of points at which study results were present in the literature summarizing the evidence in the field. Evaluating the pace of clinical research, results dissemination, and knowledge uptake in synthesized literature can help establish realistic expectations for the time course of clinical trials research and their relative impact toward influencing clinical practice.
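The process-marker approach reduces to computing elapsed time between dated milestones, including the date at which a publication reached a citation threshold. The sketch below uses hypothetical dates and a hypothetical 100-citation milestone purely for illustration.

```python
# A minimal sketch of elapsed time between trial, publication, and citation
# milestones. Dates and the 100-citation threshold are invented.
from datetime import date

milestones = {
    "protocol_approved": date(2006, 3, 1),
    "enrollment_complete": date(2008, 9, 15),
    "primary_results_published": date(2009, 6, 1),
    "reached_100_citations": date(2011, 2, 1),
}

ordered = list(milestones.items())
for (name_a, d_a), (name_b, d_b) in zip(ordered, ordered[1:]):
    months = (d_b - d_a).days / 30.44   # average days per month
    print(f"{name_a} -> {name_b}: {months:.1f} months")
```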
Mapping a research agenda for the science of team science
An increase in cross-disciplinary, collaborative team science initiatives over the last few decades has spurred interest by multiple stakeholder groups in empirical research on scientific teams, giving rise to an emergent field referred to as the science of team science (SciTS). This study employed a collaborative team science concept-mapping evaluation methodology to develop a comprehensive research agenda for the SciTS field. Its integrative mixed-methods approach combined group process with statistical analysis to derive a conceptual framework that identifies research areas of team science and their relative importance to the emerging SciTS field. The findings from this concept-mapping project constitute a lever for moving SciTS forward at theoretical, empirical, and translational levels.
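The statistical core of concept mapping is typically multidimensional scaling of a dissimilarity matrix derived from participants' card sorts, followed by hierarchical clustering. The sketch below illustrates that generic pipeline on fabricated statements and sort data; it is not the SciTS project's analysis.

```python
# A minimal sketch of concept-mapping statistics: co-sort matrix -> MDS -> clusters.
# Statements and sort data are fabricated.
import numpy as np
from sklearn.manifold import MDS
from scipy.cluster.hierarchy import linkage, fcluster

statements = ["team training", "shared measures", "funding models",
              "data infrastructure", "mentoring", "evaluation metrics"]

# Each participant sorts statements into piles (lists of statement indices)
sorts = [
    [[0, 4], [1, 3, 5], [2]],
    [[0, 4, 1], [3, 5], [2]],
]
n = len(statements)
co = np.zeros((n, n))
for piles in sorts:
    for pile in piles:
        for i in pile:
            for j in pile:
                co[i, j] += 1
dissimilarity = 1 - co / len(sorts)

coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dissimilarity)
clusters = fcluster(linkage(coords, method="ward"), t=3, criterion="maxclust")
for s, c in zip(statements, clusters):
    print(c, s)
```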
Integrating utilization-focused evaluation with business process modeling for clinical research improvement
New discoveries in basic science are creating extraordinary opportunities to design novel biomedical preventions and therapeutics for human disease. But the clinical evaluation of these new interventions is, in many instances, being hindered by a variety of legal, regulatory, policy and operational factors, few of which enhance research quality, the safety of study participants or research ethics. With the goal of helping increase the efficiency and effectiveness of clinical research, we have examined how the integration of utilization-focused evaluation with elements of business process modeling can reveal opportunities for systematic improvements in clinical research. Using data from the NIH global HIV/AIDS clinical trials networks, we analyzed the absolute and relative times required to traverse defined phases associated with specific activities within the clinical protocol lifecycle. Using simple median duration and Kaplan-Meier survival analysis, we show how such time-based analyses can provide a rationale for the prioritization of research process analysis and re-engineering, as well as a means for statistically assessing the impact of policy modifications, resource utilization, re-engineered processes and best practices. Successfully applied, this approach can help researchers be more efficient in capitalizing on new science to speed the development of improved interventions for human disease.
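As an illustration of the time-to-event view of protocol lifecycle phases, a minimal sketch using the lifelines library is shown below; the phase durations and censoring indicators are fabricated and are not the networks' data.

```python
# A minimal sketch of Kaplan-Meier estimation for a protocol lifecycle phase
# (e.g. days from protocol submission to activation). Values are invented.
from lifelines import KaplanMeierFitter

durations = [210, 340, 180, 400, 290, 510, 260]   # days spent in the phase
completed = [1, 1, 1, 0, 1, 0, 1]                 # 0 = still open (censored)

kmf = KaplanMeierFitter()
kmf.fit(durations, event_observed=completed, label="protocol activation")
print("median days to activation:", kmf.median_survival_time_)
print(kmf.survival_function_.head())
```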
Scientific and Public Health Impacts of the NIEHS Extramural Asthma Research Program - Insights from Primary Data
A conceptual model was developed to guide evaluation of the long-term impacts of research grant programs at the National Institutes of Health, National Institute of Environmental Health Sciences. The model was then applied to the extramural asthma research portfolio in two stages: the first used extant data sources; the second involved primary data collection with asthma researchers and with individuals in positions to use asthma research in the development of programs, policies, and practices. Reporting on the second stage, this article describes how we sought to broaden the perspectives included in the assessment and to obtain a more nuanced picture of research impacts by engaging those involved in conducting or using the research.
Comparing modeling approaches for assessing priorities in international agricultural research
This article examines how the estimated impacts of crop technologies vary with alternate methods and assumptions, and also discusses the implications of these differences for the design of studies to inform research prioritization. Drawing on international potato research, we show how foresight scenarios, realized by a multi-period global multi-commodity equilibrium model, can affect the estimated magnitudes of welfare impacts and the ranking of different potato research options, as opposed to the static, single-commodity, and single-country assumptions of the economic surplus model commonly used in priority setting studies. Our results suggest that the ranking of technologies is driven by the data used for their specification and is not affected by the foresight scenario examined. However, net benefits vary significantly in each scenario and are greatly overestimated when impacts on non-target countries are ignored. We also argue that the validity of the single-commodity assumption underpinning the economic surplus model is case-specific and depends on the interventions examined and on the objectives and criteria included in a priority setting study.
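For readers unfamiliar with the benchmark being contrasted here, the following sketch implements the standard closed-economy, parallel-shift economic surplus formulas (in the spirit of Alston, Norton, and Pardey); the parameter values are purely illustrative and the sketch is not the model used in the article.

```python
# A minimal sketch of the static, single-commodity economic surplus model
# (closed economy, parallel supply shift). All parameter values are illustrative.
def economic_surplus(p0, q0, k, eps_supply, eta_demand):
    """Change in consumer, producer, and total surplus from a k-proportional
    research-induced reduction in unit cost."""
    z = k * eps_supply / (eps_supply + eta_demand)   # relative price fall
    scale = p0 * q0 * (1 + 0.5 * z * eta_demand)
    d_cs = z * scale
    d_ps = (k - z) * scale
    return d_cs, d_ps, d_cs + d_ps

# e.g. potato price 200 $/t, 1.2 Mt produced, 5% unit-cost reduction
d_cs, d_ps, d_ts = economic_surplus(p0=200, q0=1_200_000, k=0.05,
                                    eps_supply=0.8, eta_demand=0.5)
print(f"consumer: {d_cs:,.0f}  producer: {d_ps:,.0f}  total: {d_ts:,.0f}")
```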