In the Journals Perspective

ASES computerized adaptive testing system reliable for assessing outcomes

Published results showed the American Shoulder and Elbow Surgeons (ASES) computerized adaptive testing system is reliable for assessing outcomes in patients undergoing shoulder surgery and can be used interchangeably with the full ASES instrument.

Researchers applied the ASES computerized adaptive testing system to the responses of 2,763 patients who underwent shoulder evaluation and treatment and had answered all questions on the full ASES instrument. To assess the accuracy of the computerized adaptive testing score in replicating the full-form ASES score, researchers analyzed the mean and standard deviation of both sets of scores; the frequency distributions of the two sets of scores and of the score differences; Pearson and intraclass correlation coefficients; and a Bland-Altman assessment of patterns in the score differences.
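This type of agreement analysis can be made concrete in code. The following is a minimal Python sketch, not taken from the study, assuming two paired NumPy arrays of scores on the 0-to-100 ASES scale (here called `full` and `cat`); it computes the mean and standard deviation of each set of scores, the Pearson correlation, an ICC(2,1)-style intraclass correlation and the Bland-Altman bias with 95% limits of agreement.

    import numpy as np

    def agreement_stats(full, cat):
        """Agreement between paired full-form and CAT scores (illustrative)."""
        full = np.asarray(full, dtype=float)
        cat = np.asarray(cat, dtype=float)
        diff = cat - full

        # Mean and standard deviation of each set of scores
        summary = {"full": (full.mean(), full.std(ddof=1)),
                   "cat": (cat.mean(), cat.std(ddof=1))}

        # Pearson correlation coefficient
        pearson_r = np.corrcoef(full, cat)[0, 1]

        # ICC(2,1): two-way random effects, absolute agreement, single measures
        data = np.column_stack([full, cat])        # n subjects x k=2 forms
        n, k = data.shape
        grand = data.mean()
        ms_rows = k * ((data.mean(axis=1) - grand) ** 2).sum() / (n - 1)
        ms_cols = n * ((data.mean(axis=0) - grand) ** 2).sum() / (k - 1)
        resid = (data - data.mean(axis=1, keepdims=True)
                      - data.mean(axis=0, keepdims=True) + grand)
        ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))
        icc = (ms_rows - ms_err) / (
            ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

        # Bland-Altman: bias (mean difference) and 95% limits of agreement
        bias = diff.mean()
        loa = (bias - 1.96 * diff.std(ddof=1),
               bias + 1.96 * diff.std(ddof=1))

        return {"summary": summary, "pearson_r": pearson_r,
                "icc_2_1": icc, "bias": bias, "limits_of_agreement": loa}

In the study's terms, a bias near zero with 95% of differences inside five points corresponds to narrow Bland-Altman limits of agreement clustered around zero.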

Results showed the computerized adaptive testing system reduced question burden by 40% by tailoring questions according to prior responses. Researchers found a mean difference of –0.14 between the computerized adaptive testing and full ASES scores. In 95% of cases, the two scores were within five points of each other, and the differences were clustered around zero, according to results.
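The article does not describe the internal algorithm of the ASES computerized adaptive testing system, which is proprietary. As a rough illustration of how adaptive tailoring generally works, the hypothetical Python sketch below assumes an item bank calibrated under a two-parameter logistic item response model (each item a discrimination/difficulty pair) and dichotomous responses; real CAT engines, including the one studied, may differ substantially.

    import numpy as np

    def p_endorse(theta, a, b):
        # Two-parameter logistic item response function
        return 1.0 / (1.0 + np.exp(-a * (theta - b)))

    def fisher_info(theta, a, b):
        # Information an item provides about the trait estimate theta
        p = p_endorse(theta, a, b)
        return a ** 2 * p * (1.0 - p)

    def estimate_theta(items, asked, responses):
        # Grid-search maximum likelihood estimate of theta
        grid = np.linspace(-4.0, 4.0, 161)
        loglik = np.zeros_like(grid)
        for i, r in zip(asked, responses):
            p = p_endorse(grid, *items[i])
            loglik += r * np.log(p) + (1 - r) * np.log(1.0 - p)
        return grid[np.argmax(loglik)]

    def run_cat(items, answer_fn, se_target=0.3, max_items=10):
        """Ask the most informative remaining item, re-estimate the
        trait, and stop once the estimate is precise enough -- the
        mechanism by which a CAT cuts question burden vs. a full form."""
        theta, asked, responses = 0.0, [], []
        remaining = list(range(len(items)))
        while remaining and len(asked) < max_items:
            nxt = max(remaining, key=lambda i: fisher_info(theta, *items[i]))
            remaining.remove(nxt)
            asked.append(nxt)
            responses.append(answer_fn(nxt))   # 0/1 response from patient
            theta = estimate_theta(items, asked, responses)
            se = 1.0 / np.sqrt(sum(fisher_info(theta, *items[i])
                                   for i in asked))
            if se <= se_target:
                break
        return theta, asked

Because each answer sharpens the running estimate, the loop typically stops well before the item bank is exhausted, which is the general mechanism behind the reported reduction in question burden.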

Researchers noted the frequency distributions were nearly identical between the computerized adaptive testing and full ASES scores, and the scores had a correlation coefficient of 0.99. The differences between scores were independent of the overall score, according to the Bland-Altman plot. Researchers also found no significant bias for computerized adaptive testing scores in either a positive or negative direction.

“Our surgeons and those in clinical practice who obtain [patient-reported outcome measures] PROMs often note that form completion, whether paper or digital format, drops off with repeated assignments, precluding the accumulation of valuable longitudinal outcome data,” the authors wrote. “The fatigue factor will decrease with the abbreviated and more personalized [computerized adaptive testing] CAT format, thus allowing improved engagement with the ASES outcome score. The goal of 80% retention of respondents over the long term then becomes more achievable.” – by Casey Tingle

Disclosures: Plummer reports he is chief scientific officer of Universal Research Solutions — OBERD. Please see the full study for a list of all other authors’ relevant financial disclosures.

    Perspective

    Grant Garcia

    With recent health care changes, patient-reported outcomes (PROs) are becoming more integral to the assessment of orthopedic surgeons and their surgeries. These PROs are not only demanded in research; there is also a larger push to integrate them into surgeon reimbursement algorithms.

    As with most patient questionnaires, user fatigue can occur, leading to lower follow-up and inaccurate responses. I applaud Otho R. Plummer, PhD, and colleagues for evaluating this computerized adaptive testing system for the ASES questionnaire. Their finding of a 40% reduction in ASES question burden while maintaining an intraclass correlation coefficient of 0.99 compared with the full assessment is impressive. In addition, their learning system was found reliable and accurate across numerous shoulder conditions.

    With further emphasis on PROMs and many patients enduring more than one survey postoperatively, a computerized learning system such as this is vital. If we demand higher follow-up for research publications and, eventually, physician reimbursement, we must focus on better ways to obtain these data and eliminate redundancies. Furthermore, this “survey efficiency” is not only important for our own research endeavors, but will likely lead to improved patient satisfaction with the academic process.


    • Grant Garcia, MD
    • Orthopedic Specialists of Seattle
      Seattle

    Disclosures: Garcia reports he has no relevant financial disclosures.