Perhaps nothing frustrates us more than studying something only to produce no significant results or results that don't support the hypotheses. You may recall that Thomas Edison allegedly said, “I have not failed. I've just found 10,000 ways that won't work” (Goodreads, n.d.). Unfortunately for those of us in the scientific community, successes are the things that make headlines, not the failures. You may think that is peculiar, so let me explain.
An editor has considerable freedom to determine what will or will not be published. I have found that editors generally fall into one of two camps. One camp, the more common in the scientific community, is devoted to publishing only manuscripts that report statistically significant or positive findings. The other camp, of which I am a member, is devoted to also publishing studies with nonsignificant or negative results so that the next study can build on them, or avoid repeating their problems. A related issue is the ongoing debate about the value of original research versus replication research.
Imagine my delight when I saw THE article in The New York Times, titled “Congratulations. Your Study Went Nowhere” (Carroll, 2018). As Carroll pointed out, when we report only positive results, we create publication bias. Carroll reported that among 105 studies of antidepressants registered with the U.S. Food and Drug Administration, positive and negative results were split roughly 50–50. What differed dramatically, however, was what got published: 98% of the studies with positive results were published, compared with only 48% of those with negative results. That is clear evidence that editors tend not to publish studies that produce negative or statistically nonsignificant results.
Carroll provided several other examples of outcome reporting bias, citation bias, and spin. One common spin technique, which we occasionally find in nursing publications, is to discuss nonsignificant results as trends; in other words, the results are presented in a manner that helps readers think of them as “almost there.” (This has often been referred to as massaging the data.) When such articles are subsequently cited, the potential misrepresentation spreads.
When we are biased toward original research, we fail to test the prior original studies upon which many subsequent researchers base their work. When editors ask for details about the steps in the processes leading to a study's conclusion, we are attempting to provide sufficient information to readers who may wish to test the study by replicating it. When we publish studies with no significant findings or negative results, we are serving the broader community in an attempt to help all of us produce the best science possible.
A commitment to advancing science should allow for replication studies, publication of nonsignificant findings, and publication of findings that didn't support the hypotheses. In essence, publishing unremarkable results must be remarkable.
Patricia S. Yoder-Wise, RN, EdD, NEA-BC, ANEF, FAAN