Ophthalmic Surgery, Lasers and Imaging Retina

Clinical Science 

Variability in Optical Coherence Tomography Angiography Interpretation in a Cohort of Retina Specialists

Sofia Prenner; Howard F. Fine, MD, MHSc; Leonard Feiner, MD, PhD

Abstract

BACKGROUND AND OBJECTIVE:

To investigate the variability in optical coherence tomography angiography (OCTA) image interpretation in a cohort of retina specialists.

PATIENTS AND METHODS:

A survey consisting of a study set of images from 12 eyes examined by OCTA was created. Eight multiple-choice answers were provided as response options for each case. The survey was sent to 100 retina specialists, with instructions to complete the survey only if they had facility with the interpretation of OCTA images. Thirty-eight physicians completed the survey. Data generated were subsequently analyzed and interpreted.

RESULTS:

Krippendorff's alpha coefficients of agreement and their associated 95% confidence intervals (CIs) were utilized for statistical analyses. For the overall data, the estimated alpha coefficient was 0.366 (95% CI, 0.31–0.47). Although this coefficient is statistically significant (the CI excludes zero), the level of agreement it reflects is low, as 0.366 is far from unity. Therefore, although statistically significant, the overall data did not demonstrate either high reliability or agreement in interpretation. Additional analyses evaluating the influence of years of practice, location of practice, and frequency of OCTA use did not demonstrate a significant effect on reliability measures.

CONCLUSIONS:

Significant variability exists in the interpretation of OCTA images in this cohort of retina specialists. The overall data did not demonstrate high reliability or agreement in interpretation of images, suggesting the need for additional study of this nascent technology.

[Ophthalmic Surg Lasers Imaging Retina. 2019;50:344–353.]

Introduction

Optical coherence tomography (OCT) has become a critical imaging tool for the evaluation and management of retinal disease.1 The technology allows for noninvasive, real-time image acquisition, providing highly resolved cross-sectional images of the retina and choroid. However, although the structural anatomy of the retina is well resolved by OCT, imaging of the microvasculature can be achieved only with advanced software algorithms.

OCT angiography (OCTA) has recently become commercially available, allowing for the detection of flow in the retinal microvasculature based on split-spectrum amplitude decorrelation angiography (SSADA) and other advanced algorithms.2 Like OCT, OCTA is noninvasive, but allows for imaging and analysis of retinal and choroidal vasculature without the need for injectable angiographic dye.3

Multiple clinical studies demonstrate the ability of OCTA to detect retinal vascular pathology.4–6 Recent studies also suggest that using OCTA to measure vascular pathology is reproducible.7,8 However, the process of adopting a new technology into clinical practice is often an iterative one, as end users develop expertise over time. Physicians adopting imaging technologies may be particularly exposed to this learning curve, as visual images may be subject to differences in interpretation between observers. This study sought to determine if agreement in OCTA image interpretation existed in a cohort of retina specialists facile with this new technology.

Patients and Methods

A set of study images was created for use in this investigation. Representative images were chosen from case files of 12 eyes of 12 patients who were imaged by OCTA with the Angiovue imaging system (Optovue, Fremont, CA). All cases were either normal or carried diagnoses that were determined by clinical history, examination, OCT, and fluorescein angiography. Each case was then assigned a selection of eight potential diagnoses for retina specialists to choose from, in multiple-choice format. Of note, the eight potential diagnoses were identical for each case example and were listed in the same order (Figure 1).

Figure 1.
Image No. 3 presented to physicians depicts an optical coherence tomography angiogram of choroidal neovascularization.

A sample of 100 retina specialists listed in the American Society of Retina Specialists membership database, the most complete listing of retina specialists in the United States, was selected. The survey group was evenly balanced by geographic location to account for potential regional differences in practice patterns. Each potential study participant was sent an invitation to participate along with the study set. Physicians were asked to participate only if they deemed themselves facile with interpreting OCTA images. Completed survey responses were collected through software provided by Survey Monkey (San Mateo, CA). Participants were not aware that other retina specialists were completing the same survey.

Data were extracted from Survey Monkey software for analysis. In addition to completing the multiple-choice questions concerning the set of study images, respondents provided answers concerning their location of practice, years of practice, and utilization of OCTA. Statistical analyses were performed primarily to determine the reliability of image interpretation between graders. A secondary objective was to assess if the reliability and variability of interpretation was influenced by other factors such as the frequency of use of OCTA or by practice location.

The data were nominal, with eight possible diagnoses for each image. To measure the degree of agreement in diagnosis between physicians, inter-rater agreement was assessed and tested across factors such as experience, frequency of use of OCTA, and location of practice. Krippendorff's alpha measure was used to test agreement between participants in the study. The test statistic (reliability) was assessed by estimating the alpha coefficient. The 95% confidence intervals (CIs) for the alpha coefficient were also reported and were used to compare the reliability measure across years of practice, frequency of use of OCTA, and location of practice. Because the theoretical sampling distribution of the alpha coefficient is unknown, bootstrapping was used to construct the 95% CIs, implemented via the SAS and SPSS macros of Hayes and Krippendorff.9 The inter-rater reliability analysis was then repeated after grouping the interpretations of each image into correct or incorrect diagnoses.9–11
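To make the reliability measure concrete, the following is a minimal sketch (not the authors' code) of Krippendorff's alpha for nominal data, together with a simple unit-level percentile bootstrap for the CI. Note that the study itself used the Hayes and Krippendorff SAS/SPSS macros, whose bootstrap algorithm differs from this simplified resampling scheme; all function names here are illustrative.

```python
import random
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(units):
    """Krippendorff's alpha for nominal data.

    `units` is a list of lists: the codes (diagnoses) assigned to one
    image by its raters. Units with fewer than two ratings contribute
    no pairable values and are skipped.
    """
    o = Counter()                      # coincidence matrix o[(c, k)]
    for ratings in units:
        m = len(ratings)
        if m < 2:
            continue
        for c, k in permutations(ratings, 2):   # ordered pairs of distinct slots
            o[(c, k)] += 1.0 / (m - 1)
    n_c = Counter()
    for (c, _k), v in o.items():
        n_c[c] += v
    n = sum(n_c.values())              # total number of pairable values
    d_o = sum(v for (c, k), v in o.items() if c != k)          # observed disagreement
    d_e = sum(n_c[c] * n_c[k] for c in n_c for k in n_c if c != k) / (n - 1)
    if d_e == 0:
        return 1.0                     # all ratings identical; no expected disagreement
    return 1.0 - d_o / d_e

def bootstrap_ci(units, reps=2000, level=0.95, seed=0):
    """Percentile bootstrap CI for alpha, resampling units (images) with
    replacement. NOTE: this is a plain unit-level bootstrap, not the
    Hayes-Krippendorff (2007) algorithm used in the study."""
    rng = random.Random(seed)
    stats = sorted(
        krippendorff_alpha_nominal([rng.choice(units) for _ in units])
        for _ in range(reps))
    lo = stats[int((1 - level) / 2 * reps)]
    hi = stats[int((1 + level) / 2 * reps) - 1]
    return lo, hi
```

Perfect agreement yields alpha = 1, and agreement no better than chance (given the marginal frequencies) yields alpha near 0, which is why coefficients such as the study's 0.366 are read as low despite statistical significance.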

Three analyses of variance (ANOVA) were also conducted to test the variability of responses between physicians with different levels of experience, frequency of use of OCTA, and location of practice. The dependent variable for the ANOVAs was the total number of correct diagnoses for the 12 images. ANOVA requires that the assumption of equality of variances is met, which was tested using Levene's test. Normality of the data was assessed using a histogram plot.

A further set of 12 analyses was conducted to test the significance of the association between experience and correct/incorrect interpretation of each image. These analyses were carried out using Chi-square tests of independence and Fisher's exact tests. All statistical tests were performed at the 0.05 level of significance. Data analysis was performed using SPSS version 20.0 software (IBM, Armonk, NY).
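For the 2 × 2 tables analyzed here (correct/incorrect interpretation by experience group), both test statistics can be computed with elementary formulas. The sketch below is illustrative only (the study used SPSS); as a check, it reproduces the chi-square value reported for image No. 2 in Table 7.

```python
from math import comb

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic (df = 1, no continuity correction)
    for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    observed = [a, b, c, d]
    expected = [(a + b) * (a + c) / n, (a + b) * (b + d) / n,
                (c + d) * (a + c) / n, (c + d) * (b + d) / n]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact P value: sum the hypergeometric
    probabilities of every table with the same margins that is no more
    probable than the observed table."""
    r1, r2, c1 = a + b, c + d, a + c
    denom = comb(r1 + r2, c1)
    def prob(x):
        return comb(r1, x) * comb(r2, c1 - x) / denom
    p_obs = prob(a)
    xs = range(max(0, c1 - r2), min(r1, c1) + 1)
    return sum(prob(x) for x in xs if prob(x) <= p_obs * (1 + 1e-9))

# Image No. 2 counts from Table 7: rows not correct / correct,
# columns 0 to 10 years / 10 or more years of experience
print(round(chi_square_2x2(6, 15, 13, 4), 3))  # 8.622
```

The chi-square value matches the 8.622 reported for image No. 2, and the corresponding two-sided Fisher's exact P value falls below 0.05, consistent with the study's conclusion for that image.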

Results

Table 1 presents the frequency of responses to each of the 12 OCTA images by the 38 physicians. Only image No. 3 had perfect agreement, with all 38 physicians correctly diagnosing neovascular age-related macular degeneration. Image No. 8 also had a very high degree of agreement, although 97.4% of raters selected the incorrect diagnosis of diabetic retinopathy rather than the correct diagnosis of branch retinal vein occlusion. Image No. 11 likewise had a high level of agreement, with 76.4% of raters giving the correct diagnosis of diabetic retinopathy.

Table 1:
Interpretation of OCTA Images of Retinal Vasculature by Different Physicians

Table 2 presents Krippendorff's alpha coefficients of agreement and their associated 95% CIs. For the overall data, the estimated alpha coefficient was 0.366 (95% CI, 0.31–0.47). Although the coefficient was statistically significant, it was far from unity. For inter-rater reliability to be considered high, Krippendorff's alpha coefficient of agreement should exceed 0.8. Therefore, for the overall data, inter-rater reliability in interpretation of images, although statistically significant, was not high.

Table 2:
Krippendorff's Alpha Coefficients of Agreement

Similar analyses were repeated to see whether years of practice, self-reported facility with OCTA, or geographic location of practice were correlated with the ability to correctly interpret the OCTA images (Table 3). Again, the coefficients were statistically significant, but had overlapping CIs and were all less than 0.8. Years of practice, frequency of use of OCTA, and location of practice showed no significant impact upon inter-rater reliability.

Table 3:
Krippendorff's Alpha Coefficients of Agreement Using Correct/Incorrect Data

A further reliability analysis was conducted after scoring the diagnosis (interpretation) of each image as either correct or incorrect. Reliability statistics and the associated 95% CIs were computed again using Krippendorff's measure for the newly scored data. The alpha reliability coefficient was 0.466 (95% CI, 0.281–0.652). This result, based on the correct/incorrect classification of interpretations, again demonstrated statistical significance (as the CI does not contain zero) but was less than 0.8, so inter-rater reliability was not high. The 95% CIs for different levels of years of experience, frequency of use of OCTA, and location of practice were also overlapping in this analysis. Again, we concluded that there was no significant difference in alpha coefficient between categories of years of experience, frequency of use of OCTA, or location of practice.

The first ANOVA was used to test the variability of responses by level of experience. Figure 2 is a histogram of the distribution of the number of correct interpretations by physicians. The histogram shows that the data can be considered normally distributed. A box plot of the distribution of the number of correct interpretations across categories of experience is presented in Figure 3. Table 4 also presents descriptive statistics of the number of correct diagnoses by level of experience of the observer. The overall mean number of correct interpretations was M = 4.263 (standard deviation [SD] = 1.638). The mean number of correct interpretations was highest for physicians with zero to 10 years of experience (M = 4.789, SD = 1.583). More experienced observers, with 10 to 20 years of experience, had a mean of 3.750 (SD = 1.388). The most experienced physicians, with 20 or more years of experience, had the lowest mean number of correct interpretations (M = 3.727, SD = 1.737).

Figure 2.
Histogram of the distribution of the number of correct interpretations.

Figure 3.
Box plot of the distribution of the number of correct interpretations across experience.

Table 4:
Descriptive Statistics and ANOVA Results for Correct Number of Interpretations of 12 Images by Level of Experience of the Observer

Results of Levene's test indicated that the equal variance assumption was satisfied (Levene's statistic [2, 35] = 0.418; P = .661), so ANOVA analysis was appropriate. The results of the ANOVA test indicated that the null hypothesis of no significant difference in mean number of correct diagnoses across the three levels of experience of the observer cannot be rejected at the 0.05 level of significance (F [2, 35] = 2.074; P = .141). In other words, experience had no significant effect on the mean number of correct interpretations of the 12 images.
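Because a one-way ANOVA F statistic depends only on group sizes, means, and standard deviations, the reported value can be checked directly from the summary statistics in Table 4. The sketch below is illustrative only (not the authors' code; the study used SPSS):

```python
def anova_from_summary(groups):
    """One-way ANOVA F statistic computed from per-group summaries.

    `groups` is a list of (n, mean, sd) tuples, one per group.
    The between-groups and within-groups sums of squares require
    only these summaries, not the raw data.
    """
    total_n = sum(n for n, _, _ in groups)
    k = len(groups)
    grand_mean = sum(n * m for n, m, _ in groups) / total_n
    ssb = sum(n * (m - grand_mean) ** 2 for n, m, _ in groups)   # between groups
    ssw = sum((n - 1) * sd ** 2 for n, _, sd in groups)          # within groups
    return (ssb / (k - 1)) / (ssw / (total_n - k))

# (n, mean, SD) by years of experience, as reported in Table 4
f = anova_from_summary([(19, 4.789, 1.583), (8, 3.750, 1.388), (11, 3.727, 1.737)])
print(round(f, 2))  # 2.07, matching the reported F(2, 35) = 2.074
```

Because the table reports rounded means and SDs, the recomputed F agrees with the published value only to the precision those summaries allow.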

The second ANOVA was carried out to test the variability of responses by frequency of use of OCTA. A box plot of the distribution of the number of correct interpretations across different frequency levels of use of OCTA is shown in Figure 4. Since only one physician reported use of OCTA for most patients, this category was grouped with use of OCTA for some patients. Descriptive statistics and ANOVA results are presented in Table 5. The mean number of correct interpretations for physicians who use OCTA for some or most patients was M = 4.500 (SD = 1.446), whereas that for physicians who use OCTA rarely was M = 4.153 (SD = 1.736). Results of Levene's test indicated that the equal variance assumption was satisfied (F = 0.779; P = .613). Results of the ANOVA test indicated that the null hypothesis of no significant difference in mean number of correct interpretations by frequency of use of OCTA cannot be rejected at the 0.05 level of significance (F [1, 36] = 0.36; P = .552). It can be concluded that frequency of use of OCTA had no significant effect on the mean number of correct interpretations of images.

Figure 4.
Box plot of the distribution of the number of correct interpretations across the use of optical coherence tomography angiography (OCTA).

Table 5:
Descriptive Statistics and ANOVA Results for Correct Number of Interpretations of 12 Images by Frequency of Use of OCTA

The third ANOVA was performed to test the possible effect of location of practice on the mean number of correct interpretations. Table 6 presents descriptive statistics of the correct number of interpretations across location of practice and the associated results of the ANOVA test. Results of Levene's test indicated that the equal variance assumption was satisfied (Levene's statistic [2, 35] = 0.418; P = .661). Results of the ANOVA test indicated that the null hypothesis of no significant difference in mean number of correct interpretations by location of practice cannot be rejected at the 0.05 level of significance (F [4, 33] = 0.87; P = .49). It can be concluded that location of practice had no significant effect on the mean number of correct interpretations of images.

Table 6:
Descriptive Statistics and ANOVA Results for Correct Number of Interpretations of 12 Images by Location of Practice

Chi-square and Fisher's exact tests were conducted to test possible associations between correct/incorrect interpretation of each image and level of experience. Table 7 presents a cross table of experience and correctness of interpretation, along with the results of the Chi-square and Fisher's exact tests. Because the sample size in each group was relatively small, the categories of 10 to 20 years and 20 or more years of experience were combined into a single category of 10 or more years of experience, to help satisfy the assumption of a minimum expected cell frequency of 5.0 required for reliable inferential analysis. Only for image No. 2 (normal) was a statistically significant association between experience and correctness of interpretation found (χ2 [1] = 8.622; P = .003; Fisher's exact P = .008). Specifically, a significantly higher percentage of physicians with less than 10 years of experience gave a correct interpretation compared with those with 10 or more years of experience. Some of the analyses were not possible because the data were completely unbalanced (ie, all physicians gave either correct or incorrect interpretations of the image, leaving no variation to test).

Table 7:
Cross Table of Association of Experience and Correctness of Interpretation of the 12 Images

Discussion

The adoption of a new medical technology is inevitably tied to a learning curve. As clinicians become increasingly facile with that technology, expertise develops, and the value of the new approach increases. This learning curve may be amplified when adopting imaging technologies, which generally produce images rather than diagnoses; the images are not self-explanatory and must be interpreted. The deciphering process requires observers to review an image, analyze the pattern present, and then associate that pattern with the diagnosis it best represents. The unique challenge is that the association of a discrete image pattern with a particular disease state is not initially defined. Over time, the patterns seen with the new technology may become tightly aligned with specific disease states, but that requires careful study, triangulation with other diagnostic modalities, education of end users, and experience.

Since its commercial release, OCTA has become an important imaging modality, allowing for the detection of flow in the retinal and choroidal microvasculature. Disease states imaged by OCTA generate reproducible patterns, and as data accumulate over time, the ability to correlate the images produced with particular disease states is actively being refined. However, individual clinicians may take time to become facile with OCTA, reflecting both their own learning curve and that of the field in general, as development and understanding evolve in the medical literature and in clinical practice.

This study sought to determine whether, at the present time, physicians agree in their interpretation of OCTA images when considered as an independent test. Multiple statistical approaches were used to test that premise. Krippendorff's alpha coefficient was used to analyze inter-rater agreement. This method is highly flexible with respect to the measurement scale of the data and the number of raters, and, unlike other measures, it can handle missing values. That is, Krippendorff's alpha coefficient can be used with two or more raters and categories, and can be applied not only to nominal data but to any measurement scale, including continuous metric data. ANOVA, Chi-square tests of independence, and Fisher's exact tests were also performed to analyze the levels of agreement and the influence that additional factors might have on OCTA interpretation. These statistical treatments led to the conclusion that the overall data did not demonstrate either high agreement or reliability in interpretation between physicians.

This study had several limitations that should be considered. The study sample size was limited to 38 retina specialists who identified themselves as facile with OCTA image interpretation, and therefore the data generated may not apply to more or less facile groups or larger study cohorts. Images were provided only from the Angiovue imaging system, and other systems might provide images that could be more reliably interpreted. Finally, OCTA cases were not associated with clinical histories, but rather as images in isolation, and this may have limited the levels of agreement seen.

Perhaps surprisingly, despite significant study of OCTA in the medical literature and in clinical practice, significant variability existed in the interpretation of OCTA images in this cohort of retina specialists. The overall data did not demonstrate high reliability or agreement in interpretation of images when considered with several statistical treatments. This suggests that despite the obvious appeal and what may be an important clinical role in the future, the need for additional study of this nascent technology exists prior to its utilization as an imaging tool in isolation.

References

  1. Adhi M, Duker JS. Optical coherence tomography – current and future applications. Curr Opin Ophthalmol. 2013;24(3):213–221. doi:10.1097/ICU.0b013e32835f8bf8 [CrossRef]
  2. Jia Y, Tan O, Tokayer J, et al. Split-spectrum amplitude-decorrelation angiography with optical coherence tomography. Opt Express. 2012;20(4):4710–4725. doi:10.1364/OE.20.004710 [CrossRef]
  3. Schneider EW, Fowler SC. Optical coherence tomography angiography in the management of age-related macular degeneration. Curr Opin Ophthalmol. 2018;29(3):217–225. doi:10.1097/ICU.0000000000000469 [CrossRef]
  4. Kuehlewein L, Bansal M, Lenis TL, et al. Optical coherence tomography angiography of type 1 neovascularization in age-related macular degeneration. Am J Ophthalmol. 2015;160(4):739–748. doi:10.1016/j.ajo.2015.06.030 [CrossRef]
  5. Tanaka K, Mori R, Kawamura A, Nakashizuka H, Wakatsuki Y, Yuzawa M. Comparison of OCT angiography and indocyanine green angiographic findings with subtypes of polypoidal choroidal vasculopathy. Br J Ophthalmol. 2017;101(1):51–55. doi:10.1136/bjophthalmol-2016-309264 [CrossRef]
  6. Miller AR, Roisman L, Zhang Q, et al. Comparison between spectral-domain and swept-source optical coherence tomography angiographic imaging of choroidal neovascularization. Invest Ophthalmol Vis Sci. 2017;58(3):1499–1505. doi:10.1167/iovs.16-20969 [CrossRef]
  7. Costanzo E, Miere A, Querques G, Capuano V, Jung C, Souied EH. Type 1 Choroidal neovascularization lesion size: Indocyanine green angiography versus optical coherence tomography angiography. Invest Ophthalmol Vis Sci. 2016;57(9):307–313. doi:10.1167/iovs.15-18830 [CrossRef]
  8. Gao SS, Liu L, Bailey ST, et al. Quantification of choroidal neovascularization vessel length using optical coherence tomography angiography. J Biomed Opt. 2016;21(7):76010. doi:10.1117/1.JBO.21.7.076010 [CrossRef]
  9. Hayes AF, Krippendorff K. Answering the call for a standard reliability measure for coding data. Commun Methods Meas. 2007;1(1):77–89. doi:10.1080/19312450709336664 [CrossRef]
  10. Feinstein AR, Cicchetti DV. High agreement but low kappa: I. The problems of two paradoxes. J Clin Epidemiol. 1990;43(6):543–549. doi:10.1016/0895-4356(90)90158-L [CrossRef]
  11. Thompson WD, Walter SD. A reappraisal of the kappa coefficient. J Clin Epidemiol. 1988;41(10):949–958. doi:10.1016/0895-4356(88)90031-5 [CrossRef]

Interpretation of OCTA Images of Retinal Vasculature by Different Physicians

Image No.: 1 2 3 4 5 6 7 8 9 10 11 12
Correct diagnosis: Normal Normal wAMD MacTel CRVO DR BRVO BRVO CSR/CNV dAMD DR wAMD
Values are n (%) of the 38 respondents.
Normal 20 (52.6) 17 (44.7) 0 (0.0) 0 (0.0) 0 (0.0) 0 (0.0) 0 (0.0) 0 (0.0) 0 (0.0) 6 (15.8) 0 (0.0) 2 (5.3)
Neovascular AMD 1 (2.6) 0 (0.0) 38 (100) 1 (2.6) 23 (60.5) 0 (0.0) 0 (0.0) 0 (0.0) 21 (55.3) 2 (5.3) 0 (0.0) 22 (57.9)
Juxtafoveal telangiectasia 4 (10.5) 7 (18.4) 0 (0.0) 21 (55.3) 0 (0.0) 7 (18.4) 0 (0.0) 0 (0.0) 1 (2.6) 9 (23.7) 2 (5.3) 4 (10.5)
CRVO 0 (0.0) 0 (0.0) 0 (0.0) 1 (2.6) 0 (0.0) 5 (13.2) 7 (18.4) 1 (2.6) 4 (5.2) 0 (0.0) 3 (7.9) 1 (2.6)
Diabetic retinopathy 7 (18.4) 1 (2.6) 0 (0.0) 7 (18.4) 14 (36.8) 22 (67.9) 31 (71.6) 37 (97.4) 0 (0.0) 3 (7.9) 29 (76.4) 6 (15.8)
BRVO 4 (10.5) 8 (21.1) 0 (0.0) 1 (2.6) 0 (0.0) 1 (2.6) 0 (0.0) 0 (0.0) 14 (36.8) 6 (15.8) 0 (0.0) 0 (0.0)
Chronic central serous retinopathy 2 (5.3) 5 (13.2) 0 (0.0) 7 (18.4) 1 (2.6) 2 (5.4) 0 (0.0) 0 (0.0) 0 (0.0) 12 (31.6) 4 (10.5) 3 (7.9)
Non-neovascular AMD 0 (0) 0 (0) 0 (0) 0 (0) 0 (0) 0 (0) 0 (0) 0 (0) 0 (0) 0 (0) 0 (0) 0 (0)

Krippendorff's Alpha Coefficients of Agreement

Data Category Alpha 95% CI

Overall 0.366 (0.31–0.47)

Years of practice 0 to 10 0.359 (0.33–0.38)
10 to 20 0.396 (0.34–0.46)
20 or more 0.350 (0.31–0.39)

Use of OCTA Most patients (n = 1; alpha not estimable within a single-rater stratum)
Some patients 0.354 (0.315–0.396)
Rarely 0.375 (0.348–0.403)

Location of practice Midwest 0.363 (0.298–0.425)
Northeast 0.443 (0.404–0.486)
Southeast 0.359 (0.277–0.442)
Southwest 0.492 (0.302–0.683)
West coast 0.313 (0.261–0.366)

Descriptive Statistics and ANOVA Results for Correct Number of Interpretations of 12 Images by Level of Experience of the Observer

N Mean SD 95% CI for Mean F P Value
0 to 10 Years 19 4.789 1.583 4.026 5.552 2.074 .141
10 to 20 Years 8 3.750 1.388 2.589 4.911
20 Years or More 11 3.727 1.737 2.560 4.894
Total 38 4.263 1.638 3.724 4.801

Descriptive Statistics and ANOVA Results for Correct Number of Interpretations of 12 Images by Frequency of Use of OCTA

Use of OCTA n Mean SD F P Value
For some or most patients 12 4.500 1.446 0.36 .552
Rarely 26 4.153 1.736

Descriptive Statistics and ANOVA Results for Correct Number of Interpretations of 12 Images by Location of Practice

Location n Mean SD F P Value
Midwest 8 4.625 1.767 .609 .659
Northeast 12 4.000 1.414
Southeast 6 5.000 1.549
Southwest 3 3.666 3.055
West Coast 9 4.000 1.500

Cross Table of Association of Experience and Correctness of Interpretation of the 12 Images

Experience

Image 0 to 10 Years 10 Years or More χ2 (P Value) Fisher's Exact P Value

1 Not correct 8 (44.4) 10 (55.6) .422 (.516) .746
Correct 11 (55.0) 9 (45.0)

2 Not correct 6 (28.6) 15 (71.4) 8.622 (.003) .008
Correct 13 (76.5) 4 (23.5)

3 Not correct 0 (0.0) 0 (0.0)
Correct 19 (50.0) 19 (50.0)

4 Not correct 9 (52.9) 8 (47.1) .106 (.744) .999
Correct 10 (47.6) 11 (52.4)

5 Not correct 6 (40.0) 9 (60.0) .991 (.319) .508
Correct 13 (56.5) 10 (43.5)

6 Not correct 8 (36.4) 14 (63.6) 3.886 (.049) .099
Correct 11 (68.8) 5 (31.2)

7 Not correct 19 (50.0) 19 (50.0)
Correct 0 (0.0) 0 (0.0)

8 Not correct 19 (50.0) 19 (50.0)
Correct 0 (0.0) 0 (0.0)

9 Not correct 19 (50.0) 19 (50.0)
Correct 0 (0.0) 0 (0.0)

10 Not correct 19 (50.0) 19 (50.0)
Correct 0 (0.0) 0 (0.0)

11 Not correct 5 (45.5) 6 (54.5) .128 (.721) .999
Correct 14 (51.9) 13 (48.1)

12 Not correct -- --
Correct

Krippendorff's Alpha Coefficients of Agreement Using Correct/Incorrect Data

Data Category Alpha 95% CI

Overall 0.466 (0.281–0.652)

Years of practice 0 to 10 0.502 (0.315–0.668)
10 to 20 0.465 (0.263–0.655)
20 or more 0.452 (0.259–0.629)

Use of OCTA Most patients (n = 1; alpha not estimable within a single-rater stratum)
Some patients 0.457 (0.262–0.642)
Rarely 0.459 (0.273–0.648)

Location of practice Midwest 0.459 (0.268–0.624)
Northeast 0.497 (0.329–0.665)
Southeast 0.469 (0.274–0.647)
Southwest 0.236 (−0.146 to 0.554)
West Coast 0.401 (0.197–0.577)
Authors

From the Department of Ophthalmology, Rutgers — Robert Wood Johnson Medical School, New Brunswick, New Jersey.

The authors report no relevant financial disclosures.

Address correspondence to Howard F. Fine, MD, MHSc, Department of Ophthalmology, Rutgers — Robert Wood Johnson Medical School, 10 Plum Street, Suite 600, New Brunswick, NJ 08901; email: hfine@njretina.com.

Received: July 31, 2018
Accepted: January 09, 2019

10.3928/23258160-20190605-02
