Spin and bias are prevalent in published reports of randomized clinical trials of breast cancer treatments, according to research published in Annals of Oncology.
Randomized phase 3 studies are designed to detect or rule out clinically important differences between experimental and control groups. Bias in reporting of clinical trials can create false perceptions of an experimental drug’s efficacy and safety.
In the current study, researchers hypothesized that — despite guidelines created to diminish spin and bias in reporting of phase 3 clinical trials — both would remain common in published studies.
Francisco Emilio Vera-Badillo, MSc, clinical research fellow in the division of medical oncology and hematology at Princess Margaret Hospital and University of Toronto, and colleagues identified all 164 randomized controlled phase 3 studies related to breast cancer published between January 1995 and August 2011.
Vera-Badillo and colleagues defined bias as “inappropriate reporting of the primary endpoint and toxicity, with emphasis on reporting of these outcomes in the abstract.”
They defined spin as “the use of words in the concluding statement of the abstract to suggest that a trial with a negative primary endpoint was positive based on some apparent benefit shown in one or more secondary endpoints.”
The prevalence of spin and bias in reporting of the primary endpoint served as the study’s primary analysis.
Study results showed that 54 trials (32.9%) were reported as positive even though the researchers involved did not find a statistically significant difference in the primary endpoint.
“These reports were biased and used spin in attempts to conceal that bias,” Vera-Badillo and colleagues wrote.
When researchers examined only the trials with no significant difference in the primary endpoint between study arms (n=92), the incidence of bias increased to 59%.
“Better and more accurate reporting are urgently needed,” Ian F. Tannock, MD, PhD, DSc, FRCPC, medical oncologist in the division of medical oncology and hematology at the Princess Margaret Hospital and University of Toronto, said in a press release. “Journal editors and reviewers, who give their expertise on the topic, are very important in ensuring this happens. However, readers also need to critically appraise reports in order to detect potential bias. We believe guidelines are necessary to improve the reporting of both efficacy and toxicity.”
A total of 110 papers (67%) met the researchers’ definition of biased reporting of toxicity.
Researchers observed a significant association between biased reporting of toxicity and a statistically significant difference between the study arms for the primary endpoint (P=.044).
A trial’s side effects were likely under-reported if the drug showed a benefit, the researchers wrote.
“A possible explanation for this could be that investigators, sponsors or both prefer to focus on the efficacy of the experimental treatment and downplay toxicity to make the results look more attractive,” Vera-Badillo said in a press release.
Of the studies included in the analysis, 103 reported funding from industry partners, 32 reported funding from academic or government grants, and 29 did not report a source of funding.
The source of funding was not associated with bias or spin, according to study results.
“Bias in the reporting of efficacy and toxicity remains prevalent,” Vera-Badillo and colleagues concluded. “Clinicians, reviewers, journal editors and regulators should apply a critical eye to trial reports and be wary of the possibility of biased reporting. Guidelines are necessary to improve the reporting of both efficacy and toxicity.”
Disclosure: Vera-Badillo reports no relevant financial disclosures.