Orthopedics


Guest Editorial 

Randomized-Controlled Trials for Surgical Implants: Are Registries an Alternative?

Markus Melloh, MD, MPH; Lukas P. Staub, MD; Thomas Zweig, MD; Thomas Barz, MD; Jean-Claude Theis, MD

Due to their success in pharmaceutical drug evaluation, randomized-controlled trials have been accepted as the method providing the highest level of evidence for the comparison of treatments.1,2 To date, the randomized-controlled trial is regarded as the most reliable method of conducting clinical research.3 Randomization is considered almost indispensable for establishing the superiority or noninferiority of new therapeutic approaches over standard ones. The fundamental feature of randomization is that it provides comparability of the treatment groups with respect to all known and unknown factors, thus permitting an unbiased comparison between groups. From a theoretical point of view, the advantages of randomized-controlled trials are undisputed, and their value has been proven in thousands of clinical trials. No alternative design excludes all confounders while measuring treatment effects exclusively.

The Consolidated Standards of Reporting Trials (CONSORT) guidelines have standardized the design, conduct, and reporting of randomized-controlled trials.4 However, specific settings may complicate or even prevent the use of randomized-controlled trials in surgery.5,6 Despite their success in assessing pharmaceutical products, randomized-controlled trials are seldom used to report the outcomes of surgical implants.7

Clinical Testing of New Surgical Implants

In the market introduction of a new surgical implant, 2 clinical phases can be identified: pilot studies, in which the surgical technique is optimized, and clinical, often multicenter, studies to demonstrate the safety and efficacy of the implant.

When clinical testing is started, the implant is usually available in its final version following in vitro testing and, in some cases, animal and cadaver testing. No changes, or only minor ones, are expected in the implant before definitive market introduction.

In most cases, pilot studies are done by 1 or 2 surgeons who will define the indications for the implant and develop the surgical technique and instrumentation to improve the surgical procedure. The technique will be taught by the designer surgeon(s) to other surgeons before the implant is launched on the market, and this is often followed by a multicenter clinical trial.

After Food and Drug Administration or comparable regulatory approval, surgeons will start to implant the new device following appropriate training, and each will go through a learning phase. This is one of the reasons why outcomes may vary significantly from 1 surgeon to another.8-13 However, even after surgeons have mastered the new procedure, the surgical technique will continue to change as they refine it to improve outcomes as they perceive them.

Even in routine procedures like total hip arthroplasty (THA), considerable variations in surgical technique have been described, and each surgeon adapts his or her technique according to patient outcomes.

The implantation of a surgical device is a complex procedure, and failure can occur at different stages after surgery. Implant failure is multifactorial and is often caused by suboptimal surgical technique or external factors; only in rare cases is failure due to the implant itself.14

Registries as a Potential Alternative to Randomized-Controlled Trials

Randomized-controlled trials are considered a reference standard for assessing medical interventions such as surgical implants. However, randomized-controlled trials may entail a number of limitations that affect their performance,15 such as ethical concerns arising from the process of randomization.16-18 They may not be necessary for interventions with dramatic effects that are only minimally influenced by confounders.5,6 Randomized-controlled trials may be inappropriate if they neutralize the effectiveness of an intervention that depends on active participation conditioned by the patient’s beliefs and preferences.19 Demonstrating potential effect modifiers of outcomes (eg, by incorporating a stratified analysis or controlling for interaction) may be impossible in randomized-controlled trials that do not show an effect in their primary endpoint.19 Finally, randomized-controlled trials may be inadequate when they show low external validity, eg, if health care professionals or settings are unrepresentative or patients are atypical.20,21

A valid alternative for assessing surgical procedures is the observational study, especially the collection of cohort data in a registry.22-25 Some registries cover up to 99% of all cases with a specified diagnosis or specific therapy in a given geographic region.26

In contrast to the randomized-controlled trial, the registry operates with a full dataset but does not use any randomization. In a joint registry, an undefined number of different diagnoses, implants, types of surgical application, and therapeutic procedures are mixed,27 and hence various potential comparator groups are an inherent part of the cohort. The Table compares the characteristics of randomized-controlled trials and registries for surgical implants.

Table: Characteristics of Randomized-Controlled Trials Versus Registries for Surgical Implants

A randomized-controlled trial would be appropriate to determine whether patellar resurfacing is necessary during total knee arthroplasty. Such a randomized-controlled trial would compare 2 groups, 1 with and 1 without patellar resurfacing, looking at differences in outcome. The question of whether patellar resurfacing should be used could not be answered by a joint registry comparing surgeons who resurface with those who do not, because in a registry the 2 groups of surgeons and patients may be too different to exclude selection bias.

A registry would be the method of choice for the comparison of various types of implants in THA. Implant performance can be monitored over time to reveal which implants perform best. The Swedish Hip Register has shown that serious complications and revision rates have declined significantly by making information from the registry available to the entire community of surgeons in Sweden.28 A randomized-controlled trial would not have been feasible, taking into consideration the multitude of implants and the necessary length of observation time.

The ability to set up benchmarks and assess implant safety and effectiveness is the principal advantage of a registry.29 Benchmarks enable each surgeon, hospital, or subregion to compare its own results with the pool of all other registry participants. Pooled registry data are advantageous for providing evidence of safety and suitability for everyday use in the clinical setting, described by Archie Cochrane as “efficiency.”30,31 This may result in a potentially higher external validity of registry results. However, registries with limited participation may have low external validity because they are unrepresentative.

When a new implant is introduced, registries can be used for benchmarking: the outcome of the new implant is compared with the existing benchmarks of commonly used implants. Where a registry covers the majority of procedures performed on a national or international basis, it might be even more reliable than a randomized-controlled trial in assessing implant outcomes. Where no existing registry provides the benchmark, establishing a new registry to prove the effectiveness of a new therapeutic measure might fail, because the alternative therapy is often already established and it is more difficult to impose a documentation burden on an established therapy than on the new one.

Establishing a new registry is challenging.32 To cope with administrative issues and legal requirements, a centralized documentation system and data anonymization are essential.26 Setting up such a system requires a network connecting participants, as well as commonly accepted measurement tools and quality indicators. Crucial issues include surgeon participation, data ownership, data handling, implementation, statistical analyses, and funding.

References

  1. Sackett DL, Straus SE, Richardson WS, Rosenberg W, Haynes RB. Evidence-Based Medicine: How to Practice and Teach EBM. 2nd ed. London, England: Churchill Livingstone; 2000.
  2. Doll R. Summation of the conference. Doing more good than harm: the evaluation of health care interventions. Ann NY Acad Sci. 1993; (703):310-313.
  3. Brox JI. The contribution of RCTs to quality management and their feasibility in practice [published online ahead of print May 1, 2009]. Eur Spine J. 2009; (18 Suppl 3):279-293.
  4. Moher D, Schulz KF, Altman DG. The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomised trials. Lancet. 2001; 357(9263):1191-1194.
  5. Rothwell PM. External validity of randomised controlled trials: “to whom do the results of this trial apply?” Lancet. 2005; 365(9453):82-93.
  6. Rothwell PM. Factors that can affect the external validity of randomised controlled trials. PLoS Clin Trials. 2006; 1(1):e9.
  7. Abraham NS. Will the dilemma of evidence-based surgery ever be resolved? ANZ J Surg. 2006; 76(9):855-860.
  8. Renzulli P, Lowy A, Maibach R, Egeli RA, Metzger U, Laffer UT. The influence of the surgeon’s and the hospital’s caseload on survival and local recurrence after colorectal cancer surgery. Surgery. 2006; 139(3):296-304.
  9. Renzulli P, Laffer UT. Learning curve: the surgeon as a prognostic factor in colorectal cancer surgery. Recent Results Cancer Res. 2005; (165):86-104.
  10. Meyer HJ. The influence of case load and the extent of resection on the quality of treatment outcome in gastric cancer. Eur J Surg Oncol. 2005; 31(6):595-604.
  11. Konety BR, Dhawan V, Allareddy V, Joslyn SA. Impact of hospital and surgeon volume on in-hospital mortality from radical cystectomy: data from the health care utilization project. J Urol. 2005; 173(5):1695-1700.
  12. Kauvar DS, Braswell A, Brown BD, Harnisch M. Influence of resident and attending surgeon seniority on operative performance in laparoscopic cholecystectomy [published online ahead of print January 18, 2006]. J Surg Res. 2006; 132(2):159-163.
  13. Birkmeyer JD, Siewers AE, Finlayson EV, et al. Hospital volume and surgical mortality in the United States. N Engl J Med. 2002; 346(15):1128-1137.
  14. Herberts P, Malchau H. How outcome studies have changed total hip arthroplasty practices in Sweden. Clin Orthop Relat Res. 1997; (344):44-60.
  15. McCulloch P, Taylor I, Sasako M, Lovett B, Griffin D. Randomised trials in surgery: problems and possible solutions. BMJ. 2002; 324(7351):1448-1451.
  16. Lilford RJ, Jackson J. Equipoise and the ethics of randomization. J R Soc Med. 1995; 88(10):552-559.
  17. Schulz KF. Subverting randomization in controlled trials. JAMA. 1995; 274(18):1456-1458.
  18. Harbour R, Miller J. A new system for grading recommendations in evidence based guidelines. BMJ. 2001; 323(7308):334-336.
  19. Röder C, Müller U, Aebi M. The rationale for a spine registry [published online ahead of print November 16, 2005]. Eur Spine J. 2006; (15 Suppl 1):S52-56.
  20. Stiller CA. Centralised treatment, entry to trials and survival. Br J Cancer. 1994; 70(2):352-362.
  21. Ward LC, Fielding JW, Dunn JA, Kelly KA. The selection of cases for randomised trials: a registry survey of concurrent trial and non-trial patients. The British Stomach Cancer Group. Br J Cancer. 1992; 66(5):943-950.
  22. Audigé L, Hanson B, Kopjar B. Issues in the planning and conduct of non-randomised studies [published online ahead of print February 17, 2006]. Injury. 2006; 37(4):340-348.
  23. Röder C, EL-Kerdi A, Grob D, Aebi M. A European spine registry. Eur Spine J. 2002; 11(4):303-307.
  24. Röder C, El-Kerdi A, Frigg A, et al. The Swiss Orthopaedic Registry. Bull Hosp Jt Dis. 2005; 63(1-2):15-19.
  25. Zweig T, Mannion AF, Grob D, et al. How to Tango: a manual for implementing Spine Tango [published online ahead of print June 28, 2009]. Eur Spine J. 2009; (18 Suppl 3):312-320.
  26. Melloh M, Staub L, Aghayev E, et al. The international spine registry SPINE TANGO: status quo and first results [published online ahead of print April 30, 2008]. Eur Spine J. 2008; 17(9):1201-1209.
  27. Röder C, El-Kerdi A, Eggli S, Aebi M. A centralized total joint replacement registry using web-based technologies. J Bone Joint Surg Am. 2004; 86(9):2077-2079.
  28. Herberts P, Malchau H. Long-term registration has improved the quality of hip replacement: a review of the Swedish THR Register comparing 160,000 cases. Acta Orthop Scand. 2000; 71(2):111-121.
  29. Röder C, Staub L, Dietrich D, Zweig T, Melloh M, Aebi M. Benchmarking with Spine Tango: potentials and pitfalls [published online ahead of print April 1, 2009]. Eur Spine J. 2009; (18 Suppl 3):305-311.
  30. Schluessmann E, Diel P, Aghayev E, et al. SWISSspine: a nationwide registry for health technology assessment of lumbar disc prostheses [published online ahead of print March 20, 2009]. Eur Spine J. 2009; 18(6):851-861.
  31. Cochrane AL. Archie Cochrane in his own words. Selections arranged from his 1972 introduction to “Effectiveness and Efficiency: Random Reflections on the Health Services” 1972. Control Clin Trials. 1989; 10(4):428-433.
  32. Fritzell P, Strömqvist B, Hägg O. A practical approach to spine registers in Europe: the Swedish experience [published online ahead of print November 23, 2005]. Eur Spine J. 2006; 15(Suppl 1):S57-63.

Authors

Drs Melloh and Theis are from the Department of Orthopedic Surgery, Dunedin School of Medicine, University of Otago, Dunedin, New Zealand; Drs Melloh, Röder, Staub, Zweig, and Müller are from the Institute for Evaluative Research in Medicine, Maurice-E.-Müller Research Center, University of Berne, Berne, Switzerland; Dr Barz is from the Department of Orthopedic Surgery, Asklepios Klinikum Uckermark, Schwedt/Oder, Germany.

Drs Melloh, Röder, Staub, Zweig, Barz, Theis, and Müller have no relevant financial relationships to disclose.

Correspondence should be addressed to: Markus Melloh, MD, MPH, Department of Orthopedic Surgery, Dunedin School of Medicine, University of Otago, Private Bag 1921, Dunedin 9054, New Zealand (markus.melloh@otago.ac.nz).

doi: 10.3928/01477447-20110124-03