Commentary

Science is an iterative process

In his editorial in this issue, Derek Raghavan, MD, PhD, expresses exasperation with the field of cancer epidemiology — noting, in particular, its large number of seemingly duplicative studies and the “litany of papers reporting weak associations.”

He is not alone in his frustration, nor is he the first to express such concerns. Yet, although the field of cancer epidemiology is the subject of his editorial, it appears that the root of his frustration lies with the scientific process itself.

Science — including epidemiology — is an iterative process. Truly “paradigm-shifting studies” are rare, and even those studies stand on the shoulders of prior research.

Amanda I. Phipps, MPH, PhD

On the surface, this iterative process may be misconstrued as duplicative. However, repeated studies are necessary to confirm — or refute — hypotheses and to refine our understanding of the truth.

In publishing the results of epidemiologic studies, investigators have an obligation not only to articulate their methods and findings, but also to highlight the limitations of their research — thus enabling future studies to improve upon past work and move the field forward.

To draw on an example of our own research, as cited by Dr. Raghavan, we have published on the relationship between alcohol consumption and colorectal cancer survival three times over the past 5 years, each time trying to overcome past limitations.

Our first investigation indicated no association between overall alcohol consumption and colorectal cancer survival. However, although there were reasons to suspect that survival could be influenced by consumption of only certain forms of alcohol, limitations in sample size prevented us from examining this issue with any precision. Thus, we sought out opportunities to extend our analysis in other pre-existing study populations — first in a population-based extension of our initial study, and then in the setting of a cancer treatment trial.

These two most recent investigations have yielded similar results, suggesting a slightly more favorable prognosis associated with prediagnostic wine consumption. The fact that we observed complementary results in different study settings lends credibility to our findings and reinforces the notion that studies of overall alcohol consumption may obscure associations with specific alcohol types. Our example is not a unique one, but rather reflects the process of building context within the scientific literature.

Although we have no immediate plans for further study on this topic, the case is hardly closed. Among other questions, it remains unclear whether changes in alcohol consumption patterns following cancer diagnosis may play a role in treatment response and cancer outcomes.

The purpose of scientific literature, within and beyond epidemiology, is to create a dynamic record that represents our state of knowledge. Most contributions to that record, as bemoaned by Dr. Raghavan, tend to reflect weak associations.

However, the strength of an association generated by a scientific study must not be taken as an indication of its value. Scientists have a responsibility to publish the results of rigorous scientific investigations regardless of whether their findings are strong, weak or completely null. Although scientific journals — and the media and general public — may take greater interest in strong or sensational results, the credibility of the science we consume hinges on the representativeness of the scientific literature.

Moreover, associations that appear to be modest on the relative scale can have a large impact on the population level, depending on the prevalence of the exposure and underlying risk for the outcome under study.

To quote Sir Richard Doll, one of the “luminaries” who deservedly earned the respect of Dr. Raghavan and the broader scientific community, “weak associations, so far from being the Achilles heel of epidemiology that critics ... would like to make out, are among its most important contributions; they may be socially of great importance and can often be revealed only by epidemiologic investigation. They are certainly difficult to establish and to interpret, but their establishment and correct interpretation is a challenge that modern epidemiologists should willingly accept.”

Although there are undoubtedly flawed cancer epidemiology studies, I suggest that we take the weak associations and duplicative studies that can be found in the cancer epidemiology literature as a reflection of scientific progress.

References:

Doll R. J Epidemiol. 1996;6:S11-20.

Feinstein AR. Science. 1988;242:1257-63.

Phipps AI, et al. Cancer. 2011;doi:10.1002/cncr.26114.

Phipps AI, et al. Cancer. 2016;doi:10.1002/cncr.30446.

Phipps AI, et al. Int J Cancer. 2016;doi:10.1002/ijc.30135.

For more information:

Amanda I. Phipps, MPH, PhD, is assistant professor of epidemiology at University of Washington and assistant member in the public health sciences division at Fred Hutchinson Cancer Research Center. She can be reached at aphipps@fredhutch.org.

Disclosure: Phipps reports no relevant financial disclosures.
