Journal of Nursing Education

Major Article | Open Access

Relationship of Multiple Attempts on an Admissions Examination to Early Program Performance

Michelle Dunham, PhD; Joshua MacInnes, PhD

  • Journal of Nursing Education. 2018;57(10):578-583
  • https://doi.org/10.3928/01484834-20180921-02
  • Posted October 3, 2018

Abstract

Background:

When a student makes multiple attempts at an admissions assessment, an institution must decide how to incorporate the resulting information into its admissions decision. However, little research exists to guide this decision within a nursing admissions context.

Method:

This article examines patterns in retesting scores from a nursing admissions assessment and the correlations of four retesting score treatments with scores on an early program assessment.

Results:

Although test scores do increase with each subsequent attempt, the average of all attempt scores is more highly correlated, in almost all instances, with future performance in a nursing program than any one test score.

Conclusion:

These results indicate that nursing programs presented with multiple scores from an examinee would be best served by using an average of all available scores when making admissions decisions. [J Nurs Educ. 2018;57(10):578–583.]


As nursing education programs establish criteria to admit qualified students, many use scores from standardized admissions examinations as a source of information about student preparedness. Realizing the importance of these scores to the admissions decision, prospective students may choose to take an examination multiple times, either to maximize their score to better their chances or to meet a program's required minimum score. When a student presents multiple scores from an examination, programs may have difficulty in determining how to use all available information in making a decision about the student. Which is the best indicator of a student's readiness to succeed in the program: The first test score? The most recent score? The highest score? The average of all available scores?

Although many educators have anecdotal impressions of how to address this practical problem, the question bears empirical investigation. This article examines several common treatments of multiple test scores and their relationship to early performance in a nursing program, with the goal of providing practical recommendations for programs making admissions decisions. For the purpose of this analysis, four treatments of scores from a nationally available nursing entrance examination, the Test of Essential Academic Skills (TEAS®) from Assessment Technologies Institute, LLC (ATI®), are investigated: first attempt, most recent attempt, highest attempt, and average of all attempts. Early program performance is operationalized as the score on a nationally available standardized assessment typically given during the first semester of a nursing program: ATI's Fundamentals of Nursing examination. The research questions are as follows:

  1. How do the first, most recent (last), highest (maximum), and average of all attempts (mean) scores of individuals with multiple TEAS attempts compare with scores of individuals with a single attempt?

    1. How do score patterns vary by number of TEAS attempts?

    2. How do score patterns vary for Associate's (ADN) and Bachelor's (BSN) degree program types?

  2. What is the correlation of each of the score types (first, last, maximum, and mean) with early program performance on the Fundamentals of Nursing assessment?

    1. How does the pattern of correlations vary by number of TEAS attempts?

    2. How does the pattern of correlations vary by program type (ADN and BSN)?

Literature Review

Admissions personnel are tasked with selecting prospective students who are likely to be successful in nursing programs. Because no single criterion is a perfect predictor of success, programs often require multiple selection criteria for admission. The use of standardized test scores has grown in recent years. Standardized tests have been shown to predict nursing program success when considered alongside other factors, such as grade point averages, achievement in science courses, and admissions essays (Cunningham, Manier, Anderson, & Sarnosky, 2014; Schmidt & MacWilliams, 2011).

However, approximately half of students retake standardized tests, which complicates the admissions decision (Patterson, Mattern, & Swerdzewski, 2012; Roszkowski & Spreat, 2016). Why would a student retake a standardized test? Although there are many possible reasons, the one definitive rationale is to raise a score (Lane & Feitz, 1976; Roszkowski & Spreat, 2016; Wightman, 1990; Wolkowitz, 2011a). If students are attempting to raise their scores, what does that tell us about them, and how should admissions officers treat multiple scores on an examination?

An applicant's decision to retake an examination may tell us something about motivation. For instance, Roszkowski and Spreat (2016) apply Simon's (1955) concepts of satisficing and optimizing to retaking the SAT college admissions test. Examinees who are optimizers are trying to make their score the best it can be, whereas satisficers are trying to reach some minimal threshold. Other researchers consider retaking an examination to be an indicator of grit or determination (Roszkowski & Spreat, 2016; Zhang & Patterson, 2010). Zhang and Patterson (2010) examined the persistence of those taking the General Educational Development (GED) test and found that motivation played a role: there was a positive relationship between retesting and passing the GED when an examinee's goal was to gain entrance into a 2-year college after earning a GED. Regardless of examinees' motivation, the end goal of raising scores is the same.

        To evaluate validity concerns related to admissions policies, many studies have described the nature of taking an admissions examination multiple times. With repeated attempts, scores typically increase and retesters tend to have lower initial scores, compared with students who test only once (Roszkowski & Spreat, 2016; Wolkowitz, 2011a). Villado, Randall, and Zimmer (2016) examined this issue and found that even though scores tend to increase over multiple attempts, single and repeat attempt scores show similar levels of correlation with the criterion. Repeat testers also tend to take their final attempt closer to program entrance, compared with those who only test once (Patterson, Mattern, & Swerdzewski, 2012). These concerns must be considered when comparing scores for students who take entrance examinations multiple times.

        Consider policies that admissions offices could implement when prospective students have attempted an examination multiple times. They might choose to use the first attempt, most recent attempt, highest single attempt, average of all attempts, or some combination of the highest scores across multiple sections of an examination (Patterson et al., 2012; Roszkowski & Spreat, 2016). Patterson et al. (2012) examined this issue in relation to the SAT and found that none of the methods undermined the predictive validity of the test. Currently, no consensus exists in the literature for the proper treatment of multiple scores, although using the average of all attempts is often recommended (Dalessandro & McLeod, 1999; Roszkowski & Spreat, 2016; Wightman, 1990; Zhao, Oppler, Dunleavy, & Kroopnick, 2010).

Specialty programs, such as nursing, attract a considerably more homogeneous group of examinees than general college admissions tests. However, little exists in the nursing literature related to this topic. Therefore, policies from other specialty programs, such as medical and law schools, are useful for comparison. These programs incorporate various guidelines for multiple attempts of the MCAT and LSAT, which are consistent with the aforementioned policies used for general college admission (Dalessandro & McLeod, 1999; Wightman, 1990; Zhao et al., 2010).

The College Board (2015) also provides some insight into how nursing programs handle multiple attempts of the SAT by disclosing the practices of institutions that have chosen to share their policies. A search for institutions with “nursing” in the title yielded 13 institutions that consider the highest section scores for each examinee, six that consider all submitted examinee scores, two that ask examinees to contact the institution for the policy, and one that considers the highest single sitting of the SAT.

Many nursing programs use more specialized admissions tests, such as the TEAS V developed by ATI, in program entrance decisions. TEAS V results come with a recommended set of performance levels for institutions to consider when evaluating students: Developmental, Basic, Proficient, Advanced, and Exemplary (ATI, 2010; Wolkowitz, 2010, 2011b). Many nursing programs implement these recommended thresholds, with Proficient often being the lowest acceptable level of performance for admission. It is therefore reasonable to conclude that many nursing students retaking the TEAS V fall into the satisficing category, in that they are trying to reach the particular level required for entrance.

Other entrance examinations, such as the SAT, ACT, MCAT, and LSAT, do not have official cut scores. For those examinations, the process of devising acceptable thresholds for admission is left up to institutions (Albanese, Farrell, & Dottl, 2005; Briggs, 2009; Kreiter, 2007). A study by Briggs (2009) surveyed institutions to assess their use of entrance examination scores and how small differences in scores increase the likelihood of acceptance. The study found that 20% to 25% of institutions set internal cut scores for the SAT and ACT, and those institutions were more likely to agree that small increases, such as 10 to 20 points on the SAT, would increase a student's likelihood of admission. Therefore, institutions are using examination scores similarly, regardless of the presence or absence of official cut scores.

As this literature review shows, recent research addresses the issue of multiple attempts at college entrance examinations, but nursing program entrance has received much less scholarly attention. Other than the study by Wolkowitz (2011a), which examined score increases with respect to the form taken, studies have not examined the influence of retake policy on early nursing program success. This study applies methodology similar to that used with the SAT (Briggs, 2009; Roszkowski & Spreat, 2016), ACT (Briggs, 2009), MCAT (Zhao et al., 2010), and LSAT (Dalessandro & McLeod, 1999; Wightman, 1990) to examine admissions policies for multiple attempts of the TEAS V examination. A goal of this study is to provide guidance to nursing program admissions offices.

        Method

        Sample

        Data for use in the analysis were available in the ATI database and deidentified before analysis. Scores from the first six attempts for each student were queried for all students taking the TEAS between January 1, 2013, and December 14, 2016. Using a numeric variable randomly assigned to all students, students' sets of TEAS scores were matched with their RN Fundamentals of Nursing 2013 scores where available. Table 1 shows the number of students by program type (ADN and BSN) having each number of TEAS attempt scores available.

Table 1: Number of Examinees by Program Type and Number of Test of Essential Academic Skills (TEAS®) Examination Attempts

        Table 1 shows that although the majority of students (approximately 75% in each program type in this data set) take the TEAS only once, the remaining students (n = 11,076 ADN; n = 7,796 BSN) take the assessment multiple times. The large number of examinees with multiple attempts underscores the utility of this article's focus. It should be noted that there are students who have taken the TEAS more than six times; however, due to small sample sizes in these groups, their scores are not reported in this article.

        Instrumentation

        Admissions Test. The admissions test of interest was the TEAS V. The TEAS V assesses objectives in the content areas of Reading, Math, Science, and English & Language Usage and reports subscores for each of these areas, as well as a composite score. Because of equating adjustments made to account for differences in difficulty across multiple forms, the composite score is referred to on the TEAS score report as an Adjusted Composite Score, which is reported as a percentage correct. Reliability estimates for composite scores from the TEAS V forms A and B are reported at .93 and .92, respectively (ATI, 2011). Because ATI recommends that only the Adjusted Composite Score be used for admissions decisions, this is the only score used for these analyses.

Outcome Measure. The RN Fundamentals of Nursing assessment by ATI is typically given in the first semester of a nursing program to measure students' mastery of fundamental nursing concepts. This 70-item standardized multiple-choice assessment contains 60 scored items and 10 unscored pilot items. The reliability of scores from the RN Fundamentals of Nursing 2013 examination is reported at .670 (ATI, 2015). The Fundamentals examination was selected over other potential measures of early program performance, such as grade point average or instructor rating, because it is standardized, objective, and comparable across institutions.

        Analysis

        To determine how the first, last, maximum, and mean scores of individuals with multiple TEAS attempts compare with scores of individuals with a single attempt, the four score types were calculated for all examinees. From these scores, the arithmetic mean of each type was then calculated for each group, defined by number of TEAS attempts and program type.
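The four score treatments described above can be sketched in code. This is an illustrative sketch only, not the authors' actual analysis pipeline; the function name and example scores are invented.

```python
# Sketch of the four score treatments: first, last, maximum, and mean
# of an examinee's chronologically ordered attempt scores.
from statistics import mean

def score_treatments(attempts):
    """Return the four treatments of a chronological list of scores."""
    return {
        "first": attempts[0],
        "last": attempts[-1],
        "maximum": max(attempts),
        "mean": round(mean(attempts), 1),
    }

# A hypothetical examinee with three TEAS attempts:
print(score_treatments([61.3, 66.0, 70.0]))
# → {'first': 61.3, 'last': 70.0, 'maximum': 70.0, 'mean': 65.8}
```

For a single-attempt examinee, all four treatments collapse to the same value, which is why the tables below report only one number for one-attempt groups.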

        To find the relationships of each of the score types (first, last, maximum, and mean) with early program performance on the Fundamentals of Nursing assessment, the correlation of each score type with Fundamentals score was calculated separately for each group of examinees, defined by number of TEAS attempts and program type.
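The correlation step might look like the following sketch. The Pearson product-moment formula is standard, but the variable names and data values here are invented for illustration, not taken from the study.

```python
# Correlating one score treatment with the outcome for a single group
# (same number of TEAS attempts, same program type).
import math

def pearson(x, y):
    """Pearson product-moment correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(var_x * var_y)

# Hypothetical group: mean-of-attempts TEAS scores and Fundamentals scores.
teas_mean = [64.7, 67.4, 61.2, 70.1, 58.9, 66.3]
fundamentals = [68.0, 71.5, 63.2, 74.0, 60.1, 69.8]

r = pearson(teas_mean, fundamentals)
print(round(r, 3))
```

Repeating this calculation for each score type within each group yields a grid of correlations like those in Tables 4 and 5.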

        Results

        Table 2 provides the mean first, last, maximum, and mean scores for ADN students with multiple TEAS attempts, separated by the number of TEAS attempts. The mean TEAS score for students with a single attempt is shown for comparison.

Table 2: Means of Test of Essential Academic Skills (TEAS®) Examination Score Types by Number of Examinee Attempts, Associate's Degree Program Type

        For ADN students, the mean first attempt score for students with only one attempt is 72.3% correct and decreases for groups with additional attempts. The mean first attempt score for students taking the TEAS six times is 55.3% correct, which is a mean difference of −17 points. However, a comparison of six-time-testers' last and maximum scores to single-testers' first-time scores reveals dramatically reduced mean differences of −5.2 and −3.1 points, respectively.

        Table 3 provides the mean first, last, maximum, and mean scores for BSN students with multiple TEAS attempts, separated by the number of TEAS attempts. The mean TEAS score for students with a single attempt is shown for comparison.

Table 3: Means of Test of Essential Academic Skills (TEAS®) Examination Score Types by Number of Examinee Attempts, Bachelor's Degree Program Type

The pattern for BSN students is similar to that for ADN students; the mean first attempt score for students with only one attempt is 75.5% correct and decreases for groups with additional attempts. The mean first attempt score for students taking the TEAS six times is 59.2% correct, a mean difference of −16.3 points, comparable to that in the ADN sample. Again, comparing six-time testers' last and maximum scores with single testers' first-time scores reveals dramatically reduced mean differences of −3.9 and −2.4 points, respectively. As might be expected, the mean-of-all-scores option produces score differences that are less extreme than the first attempt option, though more extreme than the last and maximum options, for both ADN and BSN students.

To answer research question two, the correlation between each score type (first, last, maximum, and mean) and Fundamentals score was calculated for groups of examinees with each number of TEAS attempts. Tables 4 and 5 present these correlations for ADN and BSN students, respectively.

Table 4: Correlation of Test of Essential Academic Skills (TEAS®) Examination Score Types With Fundamentals, Associate's Degree Program Type

Table 5: Correlation of Test of Essential Academic Skills (TEAS®) Examination Score Types With Fundamentals, Bachelor's Degree Program Type

        With one exception for six-time testers applying to ADN programs, the score type most highly correlated with Fundamentals scores is the average of all TEAS attempts (mean).

        Discussion

        The analyses presented here lead to several conclusions of practical import for institutional decision makers, especially as they seek to make decisions about examinees presenting scores from multiple attempts on an admissions assessment.

Question one explored the pattern of mean scores by number of TEAS attempts for first, last, maximum, and average scores. Results revealed that for both ADN and BSN students, the first attempt score is highest for examinees who take the TEAS only once and decreases successively for groups with additional TEAS attempts. In the extreme case, individuals who ultimately take the TEAS six times have a mean first attempt score 17 points (ADN) or 16.3 points (BSN) below that of their single-attempt peers.

        However, by the time they complete all TEAS testing, individuals with six TEAS attempts appear much more like single-testers than they did at first attempt. This finding is consistent with prior research indicating that, on average, students' admissions test scores increase with each additional attempt (Wolkowitz, 2011a). For both ADN and BSN students, the gap between single-attempt scores and multiple-attempt last and maximum scores is meaningfully less than for first-attempt scores. Because of this, an institution might make different admissions decisions based on its approach to multiple test scores. Understanding which score treatment is most correlated with future performance is a matter of practical significance to institutional decision makers.

Question two asked which approach to multiple test scores was most highly correlated with future performance early in a nursing program, as measured by a nationally available standardized test of nursing fundamentals knowledge. The data in Tables 4 and 5 show that although the correlations between all TEAS score treatments and Nursing Fundamentals scores are significant for most groups, the correlation is generally greatest for the mean test score approach. This finding is consistent with literature from other fields recommending that programs use the average of all of an examinee's test scores in making decisions (Dalessandro & McLeod, 1999; Roszkowski & Spreat, 2016; Wightman, 1990; Zhao et al., 2010). Conceptually, use of the average of examination scores seems to strike a happy medium; it provides an opportunity for examinees to remediate and improve while taking into account the full history of their examination attempts. Taken as a whole, the data presented here suggest that the score treatment most strongly related to future performance in a nursing program is the average of all an examinee's test attempts.

        Conclusion and Limitations

The analyses here are presented with two noted limitations. The first is that the data reflect a restriction of range; only students with both TEAS and Fundamentals scores were included, meaning they had to have been admitted to a nursing program and completed at least the first semester (to the point of administration of the Fundamentals assessment). Presumably, students with very low TEAS scores would not have met these criteria. As a result, the correlations presented between TEAS and Fundamentals scores likely underestimate the true relationship between the two tests. However, the emphasis of this article is on the relative magnitude of the correlations between each score treatment and Fundamentals, which would all be equally affected by the restriction of range.
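The attenuation caused by range restriction can be illustrated with a small synthetic simulation; all numbers below are invented and unrelated to the study's data. Truncating the predictor at an admissions cutoff reduces its variance, which lowers the observed correlation with the outcome.

```python
# Synthetic demonstration of range restriction attenuating a correlation.
import math
import random

def pearson(x, y):
    """Pearson product-moment correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(var_x * var_y)

random.seed(42)

# Population: admissions scores centered at 70 (SD 10); the outcome equals
# the admissions score plus independent noise (true r ≈ 0.71).
teas = [random.gauss(70, 10) for _ in range(5000)]
fundamentals = [t + random.gauss(0, 10) for t in teas]

r_full = pearson(teas, fundamentals)

# Keep only "admitted" students at or above a cutoff of 70; the correlation
# observed in this restricted sample is noticeably smaller.
admitted = [(t, f) for t, f in zip(teas, fundamentals) if t >= 70]
r_restricted = pearson([t for t, _ in admitted], [f for _, f in admitted])

print(round(r_full, 2), round(r_restricted, 2))
```

Because every score treatment is restricted by the same admissions process, the attenuation affects all four correlations similarly, leaving their relative ordering interpretable.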

The second limitation is that these analyses do not consider the admissions policies of the schools to which students applied, specifically whether an institution has a minimum threshold score for acceptance. Prior research has discussed the distinction between satisficers and optimizers. Without knowing the admissions policies of the institutions to which individuals applied, it is not possible to examine any resulting differences in the relationship between entrance examination score and future performance. Future research should incorporate such institution-level admissions policy information.

The question of how to incorporate scores from multiple attempts into admissions decisions is a practical and important one for nursing education programs. Indeed, because examinee scores increase, on average, with each attempt, the approach an institution takes to handling multiple test scores has a real impact on applicants' futures. This article used data from a nursing admissions examination to examine this question and found that, on balance, the score treatment most highly correlated with future performance early in a nursing program is the average of scores from all examination attempts. This finding held for both BSN and ADN program types and regardless of the number of test attempts, with the exception of six-time testers in ADN programs. These conclusions are of practical import for institutional decision makers and should provide evidence-based guidance for using scores from multiple attempts at an admissions assessment.

        References

        • Albanese, M.A., Farrell, P. & Dottl, S. (2005). Statistical criteria for setting thresholds in medical school admissions. Advances in Health Sciences Education, 10, 89–103. doi:10.1007/s10459-004-1122-6 [CrossRef]
        • Assessment Technologies Institute. (2010). TEAS V National Standard Setting Study 2010 Executive Summary. Leawood, KS: Author.
• Assessment Technologies Institute. (2011). Technical manual for the Test of Essential Academic Skills – Version V. Leawood, KS: Author.
        • Assessment Technologies Institute. (2015). RN content mastery series (CMS) 2013 technical manual. Leawood, KS: Author.
        • Briggs, D.C. (2009). Preparation for college admission exams: 2009 NACAC discussion paper. National Association for College Admission Counseling.
        • College Board. (2015). SAT® score-use practices by participating institution. New York, NY: Author.
        • Cunningham, C.J., Manier, A., Anderson, A. & Sarnosky, K. (2014). Rational versus empirical prediction of nursing student success. Journal of Professional Nursing, 30, 486–492. doi:10.1016/j.profnurs.2014.03.006 [CrossRef]
• Dalessandro, S.P. & McLeod, L.D. (1999). The validity of law school admission test scores for repeaters: A replication (Report No. 98-05). Newtown, PA: Law School Admission Services.
        • Kreiter, C.D. (2007). A commentary on the use of cut-scores to increase the emphasis of non-cognitive variables in medical school admissions. Advances in Health Sciences Education, 12, 315–319. doi:10.1007/s10459-006-9003-9 [CrossRef]
• Lane, M.S. & Feitz, R.H. (1976). Cross-year comparison of test-retest MCAT performance. Academic Medicine, 51, 582–586. doi:10.1097/00001888-197607000-00010 [CrossRef]
• Patterson, B.F., Mattern, K.D. & Swerdzewski, P. (2012). Are the best scores the best scores for predicting college success? Journal of College Admission, 217, 34–45.
        • Roszkowski, M.J. & Spreat, S. (2016). Retaking the SAT may boost scores but this doesn't hurt validity. Journal of the National College Testing Association, 2, 1–16.
        • Schmidt, B. & MacWilliams, B. (2011). Admission criteria for undergraduate nursing programs: A systematic review. Nurse Educator, 36, 171–174. doi:10.1097/NNE.0b013e31821fdb9d [CrossRef]
        • Simon, H.A. (1955). A behavioral model of rational choice. The Quarterly Journal of Economics, 69, 99–118. doi:10.2307/1884852 [CrossRef]
        • Villado, A.J., Randall, J.G. & Zimmer, C.U. (2016). The effect of method characteristics on retest score gains and criterion-related validity. Journal of Business and Psychology, 31, 233–248. doi:10.1007/s10869-015-9408-7 [CrossRef]
        • Wolkowitz, A.A. (2010). TEAS® V national standard setting study. Leawood, KS: Author.
        • Wolkowitz, A.A. (2011a). Multiple attempts on a nursing admissions examination: Effects on the total score. Journal of Nursing Education, 50, 493–501. doi:10.3928/01484834-20110517-07 [CrossRef]
        • Wolkowitz, A.A. (2011b). Technical manual for the test of essential academic skills: Version V, forms A and B. Leawood, KS: Author.
• Wightman, L.F. (1990). The validity of law school admission test scores for repeaters: A replication (Report No. 90-02). Newtown, PA: Law School Admission Services.
• Zhang, J. & Patterson, M.B. (2010). Repeat GED® test examinees: Who persists and who passes? (GED Testing Service Research Studies, 2010-2). Washington, DC: American Council on Education.
        • Zhao, X., Oppler, S., Dunleavy, D. & Kroopnick, M. (2010). Validity of four approaches of using repeaters' MCAT scores in medical school admissions to predict USMLE Step 1 total scores [Supplemental material]. Academic Medicine, 85, S64–S67. doi:10.1097/ACM.0b013e3181ed38fc [CrossRef]

Table 1: Number of Examinees by Program Type and Number of Test of Essential Academic Skills (TEAS®) Examination Attempts

                        Associate's Degree Program    Bachelor's Degree Program
No. of TEAS Attempts    n         %                   n         %
1                       32,667    74.68               24,956    76.20
2                       8,503     19.44               5,871     17.93
3                       1,828     4.18                1,396     4.26
4                       523       1.20                354       1.08
5                       163       0.37                123       0.38
6                       59        0.13                52        0.16
Total                   43,743    100                 32,752    100

Table 2: Means of Test of Essential Academic Skills (TEAS®) Examination Score Types by Number of Examinee Attempts, Associate's Degree Program Type

                                 First           Last            Maximum         Mean
No. of TEAS Attempts    n        Mean    SD      Mean    SD      Mean    SD      Mean    SD
1                       32,667   72.3    9.3     —       —       —       —       —       —
2                       8,503    64.9    8.8     69.9    8.8     70.7    8.2     67.4    8.1
3                       1,828    61.3    8.5     68.7    9.2     70.0    8.1     64.7    7.9
4                       523      58.2    8.0     67.3    9.2     69.2    7.5     62.4    7.2
5                       163      56.4    7.4     67.4    8.0     69.0    7.4     61.7    6.7
6                       59       55.3    7.6     67.1    8.2     69.2    6.7     61.2    6.5

Table 3: Means of Test of Essential Academic Skills (TEAS®) Examination Score Types by Number of Examinee Attempts, Bachelor's Degree Program Type

                                 First           Last            Maximum         Mean
No. of TEAS Attempts    n        Mean    SD      Mean    SD      Mean    SD      Mean    SD
1                       24,956   75.5    9.8     —       —       —       —       —       —
2                       5,871    68.8    9.8     73.6    9.7     74.4    9.2     71.2    9.2
3                       1,396    66.1    9.7     73.7    9.5     74.6    8.8     69.6    9.0
4                       354      64.0    9.9     73.3    10.3    74.6    9.3     68.4    9.3
5                       123      62.6    9.4     73.7    9.3     75.1    8.9     68.1    8.8
6                       52       59.2    9.9     71.6    11.1    73.1    10.1    65.3    10.1

Table 4: Correlation of Test of Essential Academic Skills (TEAS®) Examination Score Types With Fundamentals, Associate's Degree Program Type

No. of TEAS Attempts    n        First     Last      Maximum   Mean
1                       32,667   0.364     —         —         —
2                       8,503    0.307*    0.286*    0.300*    0.321*
3                       1,828    0.285*    0.239*    0.251*    0.297*
4                       523      0.335*    0.186*    0.218*    0.337*
5                       163      0.331*    0.213*    0.243*    0.366*
6                       59       0.222     0.236     0.196     0.209

Table 5: Correlation of Test of Essential Academic Skills (TEAS®) Examination Score Types With Fundamentals, Bachelor's Degree Program Type

No. of TEAS Attempts    n        First     Last      Maximum   Mean
1                       24,956   0.399     —         —         —
2                       5,871    0.386*    0.383*    0.387*    0.407*
3                       1,396    0.437*    0.424*    0.438*    0.470*
4                       354      0.318*    0.301*    0.309*    0.351*
5                       123      0.426*    0.455*    0.464*    0.477*
6                       52       0.336*    0.267     0.272     0.368*
      Authors

Dr. Dunham is Senior Research Scientist, Research and Applied Psychometrics, Ascend Learning, LLC, Leawood, Kansas; and Dr. MacInnes is Psychometrician, Castle Worldwide, Inc., Morrisville, North Carolina.

      Dr. Dunham is employed by Ascend Learning, whose subsidiary (Assessment Technologies Institute) produces the assessments discussed in this article. The author's compensation is not dependent on research findings. Research design, analysis, and results reporting are conducted independently from the business unit.

      The authors have disclosed no other potential conflicts of interest, financial or otherwise.

Address correspondence to Michelle Dunham, PhD, Senior Research Scientist, Research and Applied Psychometrics, Ascend Learning, LLC, 11161 Overbrook Road, Leawood, KS 66211; e-mail: michelle.dunham@ascendlearning.com.

      This is an Open Access article distributed under the terms of the Creative Commons Attribution-Noncommercial 4.0 International (https://creativecommons.org/licenses/by-nc/4.0). This license allows users to copy and distribute, to remix, transform, and build upon the article noncommercially, provided the author is attributed and the new work is noncommercial.
      Received: December 19, 2017
      Accepted: April 12, 2018

