By Michael Lucibella
A committee of The National Academies is preparing a report that will take a tougher stance on defining scientific misconduct, and focus on attacking the institutional environment that often leads to it.
“Misbehavior in science has typically been seen as a failing of the individual,” said Brian Martinson of the HealthPartners Research Foundation. “We believe that it is not simply a failing of the individual; scientists simply don’t behave in a void.”
At the American Association for the Advancement of Science meeting in San Jose, California, committee members outlined how they planned to update the 1992 National Academies report, Responsible Science — Ensuring the Integrity of the Research Process, Volume I, which helped codify what qualifies as research misconduct.
The report in part defined research misconduct as “fabrication, falsification, or plagiarism,” a definition that was broadly adopted by the federal government in 2000. It also highlighted other “questionable research practices” that didn’t amount to outright fraud but skirted the line of impropriety. These include authorship abuses, exploiting research assistants, misleading statistical analyses, and withholding data, all of which fall short of falsification and fabrication.
“We suggest that they be renamed ‘detrimental [research practices],’ that we don’t equivocate on that issue, and don’t suggest that by ‘questionable’ they might be ok,” said Paul Wolpe of Emory University. “We want to take a stand and say no, let’s call them ‘detrimental research practices’ because we don’t want there to be any question about how we consider them and the damage that they do to science as an enterprise.”
Committee members are hoping to release the report this coming summer and plan to include a list of best practices that research institutions can adopt. In the works for years, the report comes after a number of recent high-profile retractions over misconduct, most notably the stem cell controversy coming out of teams from Harvard and RIKEN.
“There’s really only been, relatively speaking, a few cases,” said Robert Nerem of the Georgia Institute of Technology, chair of the committee. “Even so, the media attention very much weakens the public faith in the reliability of scientific research.”
The extent of scientific misconduct is difficult to pin down precisely. In 2013, the most recent year for which numbers are available, about 500 research articles were retracted out of the more than 1 million published across all scientific disciplines. “I don’t think the statistics begin to capture the amount of scientific misconduct,” Wolpe said.
He pointed to surveys of scientists conducted by several investigators, including Martinson, which indicate between 5 and 33 percent of scientists admitted to knowing of someone who falsified their work in some way. Nearly 2 percent admitted to doing it themselves at some point during their career.
“That is implying a much higher rate of scientific misconduct than we normally appreciate,” Wolpe said.
In addition, the number and rate of retractions have been rising over the last two decades. “It might be heartening because what it might mean is not more misconduct, but more vigilance, and let’s hope that is in fact what we are seeing,” Wolpe said.
The committee is in part drawing on current social psychology research that looks at the motivations for improper behavior. Their approach puts a new emphasis on the influence that an institutional environment can have on a person’s actions.
“As we learn more, all the time, about the cognitive biases, the fallacies, the pressures, the incentives, and in particular the environments in which we operate, it means that we have to think a little differently about how we protect ourselves against the errors to which we are all prone,” said C. Kristina Gunsalus of the National Center for Professional and Research Ethics.
She added that individuals tend to give in to temptation when they and their peer groups are overly ambitious, promote a sense of entitlement, or work in opaque systems with inefficient rules.
“The amount of cheating which humans are willing to engage in depends on the structure of our daily environment,” Gunsalus said. “It is always possible to rationalize something scummy you want to do.”
The committee hopes that by highlighting these root causes and laying out best practices, it will begin an effort at research institutions to identify and address problems in their working atmosphere.
“Either the scientific community [and] the research community address these problems, or the government will,” Nerem said. “Government intervention in my opinion would not be desirable, and I suspect that’s true of everybody in this room.”
©1995 - 2020, AMERICAN PHYSICAL SOCIETY
APS encourages the redistribution of the materials included in this newspaper provided that attribution to the source is noted and the materials are not truncated or changed.