Research in Gerontological Nursing

Editorial

Common Mistakes to Avoid When Reporting Quantitative Analyses and Results

Christine R. Kovach, PhD, RN, FAAN, FGSA

Reporting statistical data analysis is fundamental to most manuscripts published in Research in Gerontological Nursing. Statistics can bring order and meaning to complex sets of data, and a well-constructed table or figure can concentrate a wealth of important information in a coherent form. Given the central value of statistics, it behooves us to report findings in a value-added manner. Fortunately, it is easy to find textbooks and online reports from statisticians on the most common analytical mistakes health care scientists make. In this editorial, I briefly describe some of the fundamental and often easily fixed mistakes and problems I see when reading research results in manuscripts.

Description of Analyses

Serious errors can result if incorrect statistical tests are used. Readers should receive enough information to evaluate the quality of the analytic approach. Writers should describe how missing data were managed and explicitly state that assumptions were checked for the relevant statistics. This description can be a simple statement such as, “Data had no severe skew, relationships were linear, and multicollinearity was not a problem.” If the data set did not meet assumptions, readers need to be informed of how these problems were managed. The alpha level used as the criterion for determining statistical significance should also be provided.
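To make the assumption statement above concrete, the following is a minimal sketch in Python (pandas, scipy, statsmodels) of how such screening might be scripted; the data frame and variable names are hypothetical, and the thresholds noted in the comments are common rules of thumb rather than fixed standards.

```python
# Sketch: screening data before analysis (hypothetical variables).
import pandas as pd
from scipy import stats
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.tools.tools import add_constant

df = pd.DataFrame({
    "age": [72, 75, 81, 68, 77, 84, 79, 73],
    "mobility": [22, 25, 18, 30, 21, 15, 19, 24],
    "pain": [3, 2, 6, 1, 4, 7, 5, 2],
})

# Missing data: report how much is missing and how it was managed.
print(df.isna().sum())

# Severe skew: |skewness| well above ~2 is a common warning sign.
print(df.apply(stats.skew))

# Multicollinearity: variance inflation factors above ~10 are a red flag.
X = add_constant(df)
vif = pd.Series(
    [variance_inflation_factor(X.values, i) for i in range(X.shape[1])],
    index=X.columns,
)
print(vif)
```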

Inconsistency Between Purpose and Results

The questions asked should be those that are answered. Adding extraneous findings to a manuscript confuses readers and disrupts the logical flow. In addition, the time points need to be consistent between the design and analysis. The sample should not be too small to fulfill the study's purpose. If a pilot study is underpowered, emphasis in the manuscript should be on descriptive results; relying on inferential statistics in an underpowered study can lead to erroneous conclusions.
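Where sample size is a concern, an a priori power calculation makes the issue explicit. The following is a minimal sketch using the power routines in statsmodels; the effect size, alpha, and pilot size are assumptions chosen only for illustration.

```python
# Sketch: power calculations for a two-group t test (assumed inputs).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Assumed: medium effect (Cohen's d = 0.5), alpha = 0.05, power = 0.80.
n_needed = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(f"Participants needed per group: {n_needed:.0f}")

# Power actually achieved by a small pilot with 15 participants per group.
achieved = analysis.solve_power(effect_size=0.5, alpha=0.05, nobs1=15)
print(f"Power with 15 per group: {achieved:.2f}")
```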

Frequencies and Measures of Central Tendency

Percentages should not be presented without providing numbers (i.e., n/N values). If frequencies are small, a table presenting the frequency distribution should use logical groupings to categorize results. A mean should not be presented without a standard deviation. Readers cannot interpret means without knowing the possible range of scores. If a mean is 19, it indicates something very different if the possible range is 0 to 20 versus 0 to 100. If nonparametric statistics are used, the median should be reported as the measure of central tendency, rather than the mean.
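These reporting conventions can be illustrated with a short sketch in Python (numpy); the scores and counts are invented for the example, and an instrument with a possible range of 0 to 20 is assumed.

```python
# Sketch: descriptive reporting conventions (invented data).
import numpy as np

scores = np.array([12, 15, 19, 14, 18, 11, 20, 16, 13, 17])
n_female, n_total = 7, 10

# Pair every percentage with its n/N.
print(f"Female: {n_female}/{n_total} ({100 * n_female / n_total:.0f}%)")

# Pair every mean with a standard deviation and the possible range.
print(f"M = {scores.mean():.1f}, SD = {scores.std(ddof=1):.1f} "
      f"(possible range 0-20)")

# For nonparametric analyses, report the median (here with the IQR).
q1, med, q3 = np.percentile(scores, [25, 50, 75])
print(f"Mdn = {med:.1f} (IQR {q1:.1f}-{q3:.1f})")
```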

p Values

By far the most common error is reporting that p = 0.000. Obviously, a value with a zero probability of occurrence is, by definition, an impossible value. Some statistical programs display p values of 0.000 in their output, but this is an artifact of automatic rounding or truncation to a preset number of digits after the decimal point. Therefore, “p = 0.000” should be replaced with “p < 0.001,” as the latter acknowledges a small but nonzero probability of Type I error and does not alter the importance of the p value reported.
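One way to enforce this rule is a small formatting helper; the sketch below is merely illustrative, with 0.001 as the floor described above.

```python
# Sketch: never report the impossible p = 0.000.
def format_p(p: float, floor: float = 0.001) -> str:
    """Format a p value, replacing values below the floor with 'p < floor'."""
    return f"p < {floor}" if p < floor else f"p = {p:.3f}"

print(format_p(0.0000431))  # p < 0.001
print(format_p(0.046))      # p = 0.046
```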

Too Much Claimed by Statistical Significance

Statistically significant results may be spurious or caused by a confounding variable or variables. Only studies using experimental designs should claim cause–effect relationships. Results of observational studies are more accurately described as associations or differences.

Results that are statistically significant may not be clinically meaningful. With large samples (e.g., ≥100 participants per group), even trivially small differences can reach statistical significance, which is why it is important to present some indication of the effect size of the findings. Effect sizes emphasize the magnitude of the difference rather than confounding the result with sample size. Cohen's d, eta-squared, and partial eta-squared are all measures of effect size. Two other ways to convey the meaningfulness of findings are confidence intervals and minimum clinically important differences (MCIDs). Confidence intervals provide a range of plausible values for a population parameter and convey the precision of an estimate. The MCID is the smallest change in an outcome that is identified as important from the patient's perspective or by another justified metric. MCIDs are not commonly reported, probably because establishing thresholds for clinically important change often requires prior research.
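As a hedged illustration of two of these quantities, the sketch below computes Cohen's d (using the pooled standard deviation, one common definition) and a 95% confidence interval for a mean difference; the group scores are invented.

```python
# Sketch: Cohen's d and a 95% CI for a two-group mean difference.
import numpy as np
from scipy import stats

group_a = np.array([24, 27, 22, 30, 26, 25, 28, 23])
group_b = np.array([20, 22, 19, 24, 21, 18, 23, 20])
n1, n2 = len(group_a), len(group_b)
diff = group_a.mean() - group_b.mean()

# Cohen's d with the pooled standard deviation.
pooled_sd = np.sqrt(((n1 - 1) * group_a.var(ddof=1) +
                     (n2 - 1) * group_b.var(ddof=1)) / (n1 + n2 - 2))
print(f"Cohen's d = {diff / pooled_sd:.2f}")

# 95% CI for the mean difference (equal-variance t interval).
se = pooled_sd * np.sqrt(1 / n1 + 1 / n2)
t_crit = stats.t.ppf(0.975, df=n1 + n2 - 2)
print(f"Difference = {diff:.1f}, 95% CI [{diff - t_crit * se:.1f}, "
      f"{diff + t_crit * se:.1f}]")
```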

Tables and Figures

Overly large, cluttered, and unclear tables and figures are the bane of an editor's existence. Tables and figures use a lot of valuable space, so authors should thoughtfully decide which results can be clearly and cogently presented in the text and which are better conveyed through a table or figure. If a table is unusually short, the results may be best presented in the text; if there are many numbers to report, they are often better conveyed in a table or figure. The same data should not be presented in both a graph and a table. Tables should be formatted according to American Psychological Association guidelines rather than cut and pasted from statistical software output files.

Readers should be able to read and understand a table or figure without referring to the text. The titles of tables and figures should not be word-for-word descriptions of, for example, all rows and columns in a table; rather, the title should convey the criterion variable or key purpose and the data manipulations performed. The temptation to use multiple graphics-program features to develop “fancy” figures should be resisted. Instead, a simple, uncluttered presentation of findings that is free of borders, extra lines, and unneeded text should be created. Figures that contain too much information are difficult to read and interpret. Resources that provide guidance in creating statistical tables and graphs should be consulted.
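As one sketch of this advice in matplotlib, the figure below drops the borders and extra lines the editorial warns against; the group means are invented.

```python
# Sketch: a simple, uncluttered figure (invented data).
import matplotlib.pyplot as plt

groups = ["Control", "Intervention"]
means = [14.2, 18.6]

fig, ax = plt.subplots(figsize=(4, 3))
ax.bar(groups, means, color="0.6")

# Remove borders and extra lines that add clutter without information.
for side in ("top", "right"):
    ax.spines[side].set_visible(False)

ax.set_ylabel("Mean mobility score")
fig.tight_layout()
plt.show()
```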

Avoid Jargon When Describing Results

Results should be written in clear, straightforward English. Credibility is not gained by using overly complex language or including unnecessary statistical jargon. It is often better to plainly state the finding and then provide the statistical evidence. For example, rather than saying “A t test (t = 3.29) revealed that the groups were significantly different (p = 0.046),” state: “Women scored higher than men in social networking (t = 3.29, p = 0.046).”

Use Your Brain, Not the Computer's

The computational power at our fingertips from statistical software packages makes it easy to let the “computer do the thinking” and report nonsensical, flawed, trivial, or meaningless results. For example, nominal-level variables should not be analyzed as ordinal or interval. When many bivariate analyses are reported, chance plays a larger role in creating error; at an alpha of 0.05, approximately 5 of every 100 tests will be statistically significant by chance alone.
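The role of chance can be made vivid with a short simulation: when two groups are drawn from the same distribution, roughly 5 of 100 t tests will still be “significant” at an alpha of 0.05. The sketch below is illustrative only.

```python
# Sketch: false positives from 100 tests of pure noise at alpha = 0.05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
false_positives = 0
for _ in range(100):
    a = rng.normal(size=30)  # both groups from the SAME distribution
    b = rng.normal(size=30)
    _, p = stats.ttest_ind(a, b)
    false_positives += p < 0.05

print(f"'Significant' results with no true effect: {false_positives}/100")
```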

Fishing for significant results by presenting descriptive and inferential results and reporting only results that are statistically significant is highly problematic. Readers are left to wonder about the point of the study. Theory, previous research, and careful thought should delineate the central premise of the study and hypotheses. Consistency between the problem, purpose, theory, measures, and analyses should be easily identifiable.

In almost all instances, hierarchical regression is preferred over stepwise regression. Stepwise regression is atheoretical because the order of entry of variables is based solely on the statistical significance of semi-partial correlations, rather than theory and a priori thought.
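A hedged sketch of the hierarchical approach with statsmodels follows: blocks of predictors enter in a theory-driven order, and the change in R² is examined. The variables and data are invented for illustration.

```python
# Sketch: hierarchical (blockwise) regression with a theory-driven order.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 120
df = pd.DataFrame({
    "age": rng.normal(75, 6, n),
    "comorbidity": rng.normal(3, 1, n),
    "social_support": rng.normal(20, 5, n),
})
df["wellbeing"] = (40 + 0.8 * df["social_support"]
                   - 1.5 * df["comorbidity"] + rng.normal(0, 5, n))

# Block 1: covariates entered first on theoretical grounds.
X1 = sm.add_constant(df[["age", "comorbidity"]])
block1 = sm.OLS(df["wellbeing"], X1).fit()

# Block 2: the focal predictor is added; examine the change in R-squared.
X2 = sm.add_constant(df[["age", "comorbidity", "social_support"]])
block2 = sm.OLS(df["wellbeing"], X2).fit()

print(f"Block 1 R^2 = {block1.rsquared:.3f}")
print(f"Block 2 R^2 = {block2.rsquared:.3f} "
      f"(Delta R^2 = {block2.rsquared - block1.rsquared:.3f})")
```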

Conclusion

The results section of the manuscript provides answers to the specific aims or purpose of the paper. In some ways, reporting the analyses and results is formulaic; however, it is exceedingly easy to make mistakes. Seeking the advice of a trusted statistical expert is wise. Although it is better to include too much information than too little, the audience and what information readers need and would find useful should be considered. Having a lay reader review your manuscript for clarity prior to submission may also be helpful.

Christine R. Kovach, PhD, RN, FAAN, FGSA

Editor


The author has disclosed no potential conflicts of interest, financial or otherwise.

10.3928/19404921-20180226-01
