Date: Feb. 17, 1999
Contacts: Barbara J. Rice, Deputy Director
David Schneier, Media Relations Assistant
(202) 334-2138; e-mail <email@example.com>

Soundness of Long-Term Federal Research Programs Can Be Assessed Annually
WASHINGTON -- The effectiveness of federally funded research programs -- both basic and applied -- can be assessed meaningfully on an annual basis as required by law, says a new report
from the Committee on Science, Engineering, and Public Policy, a joint committee of the National Academies of Sciences and Engineering and the Institute of Medicine. But different criteria should be used for different types of research to ensure that assessments fairly gauge progress.
"Measuring the performance of basic research is particularly challenging because major breakthroughs can be unpredictable and difficult to assess in the short term," said committee chair Phillip A. Griffiths, director, Institute for Advanced Study, Princeton, N.J. "Federal agencies should use a method we call 'expert review' to assess the quality of research they support, the relevance of that research to their mission, and the leadership of the research. This will ensure that funds spent on the research will generate the kinds of knowledge that in the past have brought great practical benefits."
During the course of its study, the committee heard two conflicting viewpoints on approaches to measuring basic research. One held that it is possible to measure research annually and provide quantitative measures of the useful outcomes of both basic and applied research. The other held that, given the long-range nature of basic research, no sensible way exists to meet the annual measurement requirement; as a result, some agencies may resort to measures that appear to satisfy federal law -- such as a list of the agency's top 100 discoveries of the preceding year -- but are actually meaningless.
"We concluded that both basic and applied research can be evaluated meaningfully on a regular basis," said Griffiths. "But it is important that agencies evaluate their research programs using measurements that match the character of the research." Differences in character will lead to differences in the appropriate time scale for measurement; in what is measurable and what is not; and in the expertise needed by those who contribute to the measurement process, the report says.
All federal agencies are mandated by law, under the Government Performance and Results Act, to set goals and use performance measures to encourage greater efficiency, effectiveness, and accountability. The Act requires that agencies write strategic plans with annual performance targets and produce an annual report that demonstrates whether these targets are met. The first performance reports are due in March 2000.

Use of Expert Review
The most effective means of evaluating federally funded research programs, the committee said, is expert review, which should be used to assess both basic and applied research. The committee outlined three forms of expert review and their applications: quality review, relevance review, and benchmarking.
To assess quality, peer review should be used, the committee said. Peer reviewers should include scientists at agency, university, and industrial laboratories who have participated in and who best understand federally funded research programs. However, peer review as it currently takes place in most federal agencies varies greatly and should be analyzed and modified as necessary. Relevance review and international benchmarking also are needed.
Relevance review draws on the views not only of experts in the field but also of potential users and experts in related fields, to evaluate the relevance of the research to an agency's mission. Benchmarking reviews use panels of experts from the United States and elsewhere to judge the international leadership status of the United States in a program.

Basic vs. Applied Research
Basic research involves theoretical or experimental investigation to advance scientific knowledge, without immediate practical application as a direct objective. For example, basic research by physicists on atomic structure more than 60 years ago led to today's Global Positioning System. Scientists could not know at the time that their work would help create, decades later, a network of satellites that could send signals to a receiver costing a few hundred dollars and instantly pinpoint someone's location on the planet to within 100 feet.
Because many years can pass before an advancement is achieved, the outcomes or results of basic research cannot be measured on an annual basis but only in retrospect, the committee concluded. However, agencies can regularly assess the progress of basic research in terms of quality and relevance to agency goals and intended users. Another proposed measure is leadership -- that is, whether the research is being performed at the forefront of scientific and technological knowledge, and leads the world in that particular field.
Applied research uses knowledge gained through theoretical or experimental investigation to make things or create situations that will serve a practical purpose. Programs in applied research usually include a series of milestones to be reached by particular times, and a description of the intended outcomes as well as their significance to society. Progress toward these milestones can be measured annually, the committee found. For example, if the U.S. Department of Energy adopted the goal of producing cheaper solar energy, it could measure the results of research designed to decrease the costs.
To produce and benefit from advances in science and technology, the nation also must have a continuing supply of well-educated and highly trained scientists and engineers, the committee said. Strategic and performance plans should focus more attention on the goal of developing and maintaining human talent in fields critical to the agencies' missions. To ensure this, agencies should require and evaluate education and training components in research programs.
The science and engineering communities also should play an important role in the implementation of the Government Performance and Results Act. As a first step, they should become familiar with agency strategic and performance plans, which are available on agency web sites. Many researchers contribute much time and effort to review papers submitted for publication, grant applications, and program proposals that are supported by federal funds, but few of them are aware of the Act, the committee said.
In conducting its study, the committee reviewed and assessed the strategic and performance plans of 10 federal agencies -- the U.S. departments of Agriculture, Defense, Energy, and Transportation as well as the National Institutes of Health (NIH), the National Science Foundation (NSF), NASA, the Environmental Protection Agency, the National Institute of Standards and Technology, and the National Oceanic and Atmospheric Administration. Over the course of the 12-month study, workshops were held to gather information, including industry methods to evaluate research performance.
The study was sponsored by the National Research Council, NSF, NIH, NASA, and the U.S. departments of Agriculture, Transportation, and Defense. The National Academy of Sciences, National Academy of Engineering, and Institute of Medicine are private, non-profit institutions that provide science, technology, and health policy advice under a congressional charter. A committee roster follows.
Copies of Evaluating Federal Research Programs: Research and the Government Performance and Results Act are available from the National Academies Press on the Internet at www.nap.edu or by calling 202-334-3313 or 1-800-624-6242. Reporters may obtain a pre-publication copy from the Office of News and Public Information at the letterhead address (contacts listed above).

NATIONAL ACADEMY OF SCIENCES
NATIONAL ACADEMY OF ENGINEERING
INSTITUTE OF MEDICINE
Policy Division
Committee on Science, Engineering, and Public Policy

Phillip A. Griffiths (chair)
Director
Institute for Advanced Study
Princeton, N.J.

Bruce M. Alberts
National Academy of Sciences
Washington, D.C.

Peter Diamond
Professor of Economics
Massachusetts Institute of Technology

Science and Technology
Honeywell Inc. (retired)
Edina, Minn.

Mildred S. Dresselhaus
Institute Professor of Electrical Engineering and Physics
Massachusetts Institute of Technology
Cambridge

James J. Duderstadt
President Emeritus and
University Professor of Science and Engineering
University of Michigan
Ann Arbor

Marye Anne Fox
North Carolina State University
Raleigh

Ralph E. Gomory
Alfred P. Sloan Foundation
New York City

Ruby P. Hearn
Robert Wood Johnson Foundation
Princeton, N.J.

Philip W. Majerus
Professor of Medicine, Biochemistry, and Molecular Biophysics and
Director, Division of Hematology-Oncology
Washington University School of Medicine
St. Louis

June E. Osborn
Josiah Macy Jr. Foundation
New York City

Kenneth I. Shine
Institute of Medicine
Washington, D.C.

Morris Tanenbaum
Vice Chairman and Chief Financial Officer
Short Hills, N.J.

William Julius Wilson
Malcolm Wiener Professor
Center for Social Policy
John F. Kennedy School of Government
Cambridge, Mass.

William A. Wulf
National Academy of Engineering

Executive Director, Policy Division

Deborah Stine
Study Director

Anne-Marie Mazza
Senior Program Officer

(1) Member, National Academy of Sciences
(2) Member, National Academy of Engineering
(3) Member, Institute of Medicine