Journal of Nursing Education

Major Article 

The Lasater Clinical Judgment Rubric: Implications for Evaluating Teaching Effectiveness

Kristin C. Lee, PhD, RN, CNE

Abstract

Background:

Concern with patient safety necessitates valid and reliable measures to evaluate clinical judgment. The purpose of this article is to describe how the Lasater Clinical Judgment Rubric (LCJR) has been used to evaluate the effectiveness of educational interventions to promote clinical judgment, and to describe the rubric's psychometric properties.

Method:

Search terms included nurse, student, clinical judgment, and Lasater Clinical Judgment Rubric in Scopus, ERIC, and CINAHL with EBSCOhost databases. The final review included 20 studies.

Results:

Researchers reported Cronbach's alphas of .80 to .97 for total scales, .89 to .93 for subscales, and .81 to .82 for student self-scored rubrics. Themes were: Individual Versus Group Evaluations, Clinical Judgment Scenarios, and Adaptation for Nonobservation Activities.

Conclusion:

Results of this review indicate that the LCJR can be used to evaluate clinical judgment, but educators need to consider inter- and intrarater reliability, individual versus group evaluation, clinical judgment scenarios, and adapting the rubric for nondirect observation activities. [J Nurs Educ. 2021;60(2):67–73.]


Increases in the complexity of the health care environment, coupled with increasing patient acuity, require new graduates to be more job ready than at any time in the history of health care (Tanner, 2010). However, the preparation–practice gap has been well established (Hickerson et al., 2016). Health care employers expect new graduates to possess not only psychomotor skills to deliver safe care, but also cognitive skills, such as clinical judgment, to guide the delivery of safe care. Yet only 10% of nurse executives think new graduates are ready for practice (Berkow et al., 2009), and only 23% of newly graduated nurses meet skilled clinical judgment competencies (Kavanagh & Szweda, 2017), a decrease of 25% to 35% from the previous decade (del Bueno, 2005). Calls for nursing education reform to address patient safety concerns by strengthening educational interventions (Benner et al., 2009) require valid and reliable methods that evaluate the development of clinical judgment skills in nursing students.

Many assessment tools to measure critical thinking and decision-making skills are available. However, many are paper-and-pencil instruments, designed to be completed by participants, and do not evaluate clinical judgment. The Creighton Simulation Evaluation Instrument (C-SEI™; Todd et al., 2008) evaluates clinical judgment, assessment, patient safety, and communication skills during simulation. However, the C-SEI was developed to evaluate group performance, not individual performance. Seacrist and Noell (2016) developed a clinical judgment tool for chart reviews of patient medical records to assess the nurses' role in patient care. Although this tool can assist in evaluating individual clinical judgment abilities in nursing practice, it is designed to assess nurses' charting in medical records, which may be incomplete. The Lasater Clinical Judgment Rubric (LCJR; Lasater, 2007a) is emerging as an effective tool to directly measure clinical judgment based on a standardized language specific to nursing practice. Furthermore, the LCJR measures clinical judgment through observation of individual student performance during patient care. Yet, Victor-Chmil and Larew (2013) reported that at the time of their review, reliability and validity of the LCJR had been reported exclusively in research conducted in the simulation environment and in group simulation settings. Furthermore, no review has been conducted with regard to using the rubric to evaluate the effects of educational interventions on clinical judgment. To improve clinical judgment skills in nursing students, researchers and educators need valid and reliable tools. However, designing educational interventions to improve clinical judgment also requires understanding implementation considerations of the LCJR that could influence study results. The purpose of this review is to describe how the LCJR has been used to evaluate the effectiveness of educational interventions to promote clinical judgment, and to describe the rubric's psychometric properties. Furthermore, implementation recommendations are discussed.

Background

Clinical reasoning involves information processes from intuitive and deductive methods and is considered critical thinking in the clinical environment. Clinical judgment is the conclusion drawn from the reasoning process and results in a planned response to the clinical situation (Tanner, 2006). Therefore, clinical reasoning is the process, whereas clinical judgment is the outcome. Clinical judgment is an essential skill to provide safe patient care (Manetti, 2019). To develop clinical judgment, students must have opportunities to directly practice what they have learned. Experiences that connect theory to practice have emerged as a key factor in the development of clinical judgment (Ashley & Stamp, 2014; Cappelletti et al., 2014; Tanner, 2006). Ensuring that students are developing skills for safe practice requires valid and reliable assessment tools to evaluate clinical judgment development.

In practicing nurses, clinical reasoning and judgment are influenced by previous experience, knowing the patient, situational context, reasoning patterns, and reflection (Cappelletti et al., 2014; Tanner, 2006). However, practicing nurses have experience, which students and new graduates lack. Ashley and Stamp (2014) examined clinical reasoning and judgment, comparing outcomes between sophomore and junior students. They found differences in clinical judgment skills between novice and more experienced students, suggesting the importance of experience in patient care situations. Because experience is required to master clinical judgment skills (Tanner, 2006), it is understandable that nursing students, with their lack of clinical experience, are still developing this skill. However, nursing students are expected to enter the workforce prepared with adequately skilled clinical judgment for safe practice.

Tanner's Clinical Judgment Model

Curricular infusion of teaching pedagogies that promote development of clinical judgment should be theoretically based. Tanner's (2006) clinical judgment model provides a framework to facilitate learning and evaluation of clinical judgment skills in nursing. The model includes four processes inherent in clinical judgment: Noticing, Interpreting, Responding, and Reflecting. Noticing requires nurses to obtain a “perceptual grasp of the situation at hand” (Tanner, 2006, p. 208). For example, nurses experienced in caring for postoperative patients can anticipate physiological and emotional needs. This expectation draws on textbook knowledge, knowledge of the patient, clinical or practice knowledge gained from previous patients, and experience. Other factors include nurses' values and perceptions of quality of practice, culture and patterns of care on the unit, and complexity of the environment.

Once nurses notice and grasp the situation, Interpreting and Responding, the second and third phases, require one or more reasoning patterns. Interpreting requires nurses to give meaning to the data (Tanner, 2006, p. 208). This step may result in the generation of hypotheses or the intuition to know how to respond. Additional assessments may need to be performed to intervene appropriately. In either situation, the acts of interpreting and responding are part of the clinical reasoning process.

Finally, Reflection includes both on-action and in-action components (Tanner, 2006, p. 209). Reflection-in-action refers to nurses' ability to assess how patients are responding to the intervention and adjust accordingly. Reflection-on-action describes the knowledge gained from the experience through a reflective process. Both reflective processes develop clinical judgment skills to be applied in future situations, which, in turn, enhance nurses' ability to apply Noticing in future patient encounters.

LCJR

The LCJR (Lasater, 2007a) operationalizes Tanner's (2006) model of clinical judgment specifically for nursing practice (Table A; available in the online version of this article). Lasater's rubric describes performance criteria for Tanner's four processes (Noticing, Interpreting, Responding, and Reflecting), resulting in 11 separate dimensions. Within each dimension, performance is scored on a 4-point scale ranging from 1 (beginning) to 4 (exemplary). Content validity of the initial version was judged by a panel of experts and rated as good to very good, but expanding the rubric to include more dimensions has been suggested (Victor-Chmil & Larew, 2013). Interrater reliability has been reported as .89 using intraclass correlation; percent agreement has been reported as 92% to 96% and 57% to 100% (Adamson et al., 2012). Cronbach's alpha was reported as .95 for the overall tool, with domain estimates ranging from .86 to .88 (Jensen, 2010).
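
To make the rubric's structure concrete, the following minimal sketch (Python; an illustration, not code from any reviewed study) encodes the 11 dimensions from Table A under Tanner's four phases and computes a total score, assuming, as many of the reviewed studies do, that dimension ratings are summed into a total ranging from 11 to 44.

    # Dimension names follow Table A (Lasater, 2007a); grouping follows
    # Tanner's (2006) four processes.
    LCJR_DIMENSIONS = {
        "Noticing": [
            "focused observation",
            "recognizing deviations from expected patterns",
            "information seeking",
        ],
        "Interpreting": ["prioritizing data", "making sense of data"],
        "Responding": [
            "calm, confident manner",
            "clear communication",
            "well-planned intervention/flexibility",
            "being skillful",
        ],
        "Reflecting": ["evaluation/self-analysis", "commitment to improvement"],
    }

    # Scale anchors from Lasater (2007a): 1 = beginning ... 4 = exemplary.
    LEVELS = {1: "beginning", 2: "developing", 3: "accomplished", 4: "exemplary"}

    def total_score(ratings):
        """Sum the 11 dimension ratings (each 1-4), giving a total of 11-44."""
        dimensions = [d for group in LCJR_DIMENSIONS.values() for d in group]
        if set(ratings) != set(dimensions):
            raise ValueError("rate each of the 11 dimensions exactly once")
        if any(score not in LEVELS for score in ratings.values()):
            raise ValueError("each rating must be an integer from 1 to 4")
        return sum(ratings.values())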


Table A: Lasater Clinical Judgment Rubric

Method

Using review methods described by Russell (2005), both quantitative and qualitative studies, including dissertations and Doctor of Nursing Practice projects, were included in the literature search. Program evaluation reports were included if they were original research. The search began with a keyword search of the Scopus®, ERIC, and CINAHL® with EBSCOhost databases without date limits. Search terms included nurse, student, clinical judgment, and Lasater Clinical Judgment Rubric, yielding 151 articles. All 151 abstracts were examined; duplicates and studies that, based on abstract review, did not fit the purpose were removed. Studies were included if they were written in English and empirically examined the use of the LCJR to evaluate the effectiveness of educational interventions. Studies were excluded if no research aim involved evaluating the effectiveness of an educational intervention, because the intent of this review was to examine how the LCJR had been used to evaluate such interventions, not the interventions' outcomes. The remaining 44 studies were examined in their entirety, and another five studies were identified through ancestry searching. After full review of all 49 studies, an additional 29 were excluded because they lacked at least one research aim examining educational interventions to influence clinical judgment using the LCJR. Twenty studies met criteria and were included in this review.

Results

Description of Samples

Ten of the studies were published, eight were dissertations, and two were Doctor of Nursing Practice projects (Table B; available in the online version of this article). Four were experimental studies, 12 were quasi-experimental, and four were mixed methods. Seventeen of the 20 research teams recruited students from a single site. Samples were drawn from two associate, 11 baccalaureate, and two diploma programs; in the remaining five studies, researchers did not specify the degree program, and three of these described samples only as university students. Sample sizes ranged from 14 to 134 students, with an average of 58. Nine of the 20 research teams had sample sizes less than 50, raising concern that many studies were underpowered and thus at risk for type II error.


Table B: Table of Evidence

Psychometric Properties of the LCJR in Reviewed Studies

Psychometric analyses reported in the included studies support previously reported reliability of the LCJR (Adamson et al., 2012; Victor-Chmil & Larew, 2013). Six of the 18 quantitative research teams reported internal consistency ranging from .80 to .97 for the total scale (Blum et al., 2010; Fawaz & Hamdan-Mansour, 2016; Kulju, 2013; Mariani et al., 2013; Marcyjanik, 2016; Rodriguez, 2014). Internal consistency for subscales ranged from .89 to .93 (Gubrud-Howe, 2008). When students self-scored, internal consistency for the total scale was .81 (Blum et al., 2010) and .82 (Marcyjanik, 2016), although Coram (2016) postulated that the nonsignificant results in her study could have been related to inadequate training of students in how to use the instrument. Overall, internal consistency was supported for both faculty and student self-assessments.
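
For readers unfamiliar with the statistic, the following sketch (Python with NumPy; an illustration, not code from any reviewed study) computes Cronbach's alpha from a matrix of LCJR ratings, one row per student and one column per dimension.

    import numpy as np

    def cronbach_alpha(scores):
        """Cronbach's alpha for a students x dimensions matrix of ratings:
        alpha = k/(k - 1) * (1 - sum of item variances / variance of totals).
        """
        scores = np.asarray(scores, dtype=float)
        k = scores.shape[1]  # number of items (11 for the full LCJR)
        item_variances = scores.var(axis=0, ddof=1)
        total_variance = scores.sum(axis=1).var(ddof=1)
        return float(k / (k - 1) * (1 - item_variances.sum() / total_variance))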

Fifteen research teams used multiple raters to score the LCJR, and the majority discussed the processes taken to reduce variability among raters. Interrater reliability, reported as percent agreement, was variable among the seven reporting research teams (Coram, 2016; Gubrud-Howe, 2008; Kulju, 2013; Mann, 2010; McMahon, 2013; Rodriguez, 2014; Yuan et al., 2014). Generally, researchers reported agreement ranging from 76% to 96%, with the majority reporting 80% agreement or higher, and interrater correlations ranging from .78 to .92 (Kulju, 2013; Mariani et al., 2013). Gubrud-Howe (2008) compared scores between raters and reported no significant differences. However, several research teams noted as limitations that poor interrater training (McMahon, 2013), inconsistent scores (McMahon, 2013; Rodriguez, 2014), and subjectivity (Ferguson, 2012; Kulju, 2013) could have affected their results.
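
The two interrater statistics most often reported above can be computed as follows (a hypothetical sketch; the inputs are two raters' scores for the same students or dimensions):

    import numpy as np

    def percent_agreement(rater1, rater2):
        """Proportion of ratings on which two raters gave identical scores."""
        rater1, rater2 = np.asarray(rater1), np.asarray(rater2)
        return float((rater1 == rater2).mean())

    def interrater_correlation(rater1, rater2):
        """Pearson correlation between two raters' scores for the same students."""
        return float(np.corrcoef(rater1, rater2)[0, 1])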

How the LCJR Has Been Used

Across the 20 studies, researchers used the LCJR to evaluate the effectiveness of nine different educational interventions: simulation, objective structured clinical examination, grand rounds, expert role modeling, the Developing Nurses' Thinking model, the How People Learn framework, Debriefing for Meaningful Learning, a problem-based learning intervention, and the use of community volunteers. How the LCJR was used to evaluate clinical judgment in association with these interventions varied greatly depending on the design of the study.

Overall, three major themes with implications for study results emerged from how the LCJR was used to evaluate educational interventions: individual versus group evaluations, clinical judgment scenarios, and scoring nonobservation activities. Many research teams evaluated clinical judgment in group settings, in a single complex scenario, or through nondirect observation activities, any of which could influence the reliability and validity of results.

Individual Versus Group Evaluations. The LCJR was originally developed to directly observe and evaluate an individual student's ability to make a clinical judgment in a single clinical situation (Lasater, 2007b). However, in this review, nine of the 20 research teams evaluated students' clinical judgment in group activities; eight of the nine did so through simulations, and the ninth used a group case study method. Blum et al. (2010) reported an internal consistency of .88 when faculty scored students in a group; Gubrud-Howe (2008) reported internal consistencies across the subscales ranging from .89 to .93. Internal consistency for individual scores ranged from .80 to .97 (Fawaz & Hamdan-Mansour, 2016; Kulju, 2013; Mariani et al., 2013; Rodriguez, 2014). None of the researchers discussed evaluating students in a group setting as a potential limitation.

Clinical Judgment Scenarios. Scenarios used to evaluate clinical judgment ranged from basic health assessments to critically deteriorating patients. First- or second-semester students were seemingly involved in complex patient care scenarios including cardiac arrhythmias, congestive heart failure, postoperative hemorrhage, and ruptured diverticula. Other scenarios used to evaluate clinical judgment among the research teams included a bowel obstruction with a pulmonary embolism, geriatric hip fracture care, pediatric and adult assessments, cardiac arrest, pain management, diabetic ketoacidosis, and pneumonia with an ischemic stroke. Neither the type of scenario nor the level of educational preparation of the students was listed as a limitation in any of the studies.

Scoring Nondirect Observation Activities. The majority of research teams scored the LCJR through direct observation, whether during simulation (with task trainers or human patient simulators) or during clinical practice. However, some teams evaluated clinical judgment through nondirect observation activities, including written case studies, recordings of case study discussions, reflective journaling, and a self-assessment survey. Furthermore, all but two research teams scored all the LCJR domains and dimensions. Blum et al. (2010) used a condensed version of the rubric, assessing only four dimensions: “expected patterns” and “information seeking” in Noticing, “prioritizing data” in Interpreting, and “clear communication” in Responding. Kubin and Wilson (2017) assessed only the Noticing and Responding domains but included all dimensions of each. In both studies, researchers omitted the Reflecting domain. Of note, scoring the rubric during nondirect observation activities was not discussed as a limitation in any of the studies.

Discussion

The LCJR was originally developed to directly observe and evaluate individual students' ability to make a clinical judgment in a single clinical situation (Lasater, 2007b). Results of this review indicate that the LCJR has been used to evaluate the effectiveness of teaching strategies to improve clinical judgment; however, implications for validity and reliability related to individual versus group evaluations, the clinical judgment scenario, and scoring the rubric for nondirect observation activities were not considered. Nonetheless, the LCJR shows promise in providing nursing education with a much-needed tool to standardize language and evaluation.

In theory, evaluating clinical judgment in the simulation environment should produce findings similar to those from the clinical setting, but this is not always the case. Frequently, simulations are conducted in groups of students, typically four to six, to conserve monetary resources and time. Commonly during group simulations, students take turns as the primary nurse in the scenario, acting as the team leader to direct others' actions. The other students undoubtedly influence the primary nurse during the scenario. For example, secondary students assessing the patient, as when taking vital signs, could think their interpretations aloud, which could influence intervention planning by the primary nurse. Although nurses can work in teams to troubleshoot a patient situation, rarely are multiple nurses present for the initial assessment and early planning, particularly on a medical–surgical unit. Thus, an individual student's clinical judgment evaluation is influenced by group dynamics, potentially resulting in falsely elevated clinical judgment scores. Although internal consistency for the total scale and subscales indicated acceptable reliability, few researchers reported the psychometric properties of the instrument, a reported limitation of the Skoronski (2018) study. Better understanding is needed of differences in reliability when students are assessed in group versus individual settings.

The complexity of patient care scenarios should increase as students progress through the nursing curriculum to ensure students are educationally prepared to perform. Novice students have difficulty with cue recognition and performing assessments (Burbach & Thompson, 2014). Scoring novice students during complex scenarios could result in low scores, producing a floor effect (Adamson et al., 2012). For example, under the Noticing domain, a performance that scores as “beginning” in the observation dimension indicates the student was confused by the situation, was disorganized in collecting data, potentially missed some data, and/or made assessment errors. In a complex situation, this is likely how a novice student would perform. In addition, a limited range of scores would likely reduce correlations between variables (Goodwin & Leech, 2006), a noted limitation in the Mariani et al. (2013) study. Educators might interpret performance as poor clinical judgment when, in reality, students are not prepared for the complexity of the scenario. Furthermore, although the LCJR is meant to score clinical judgment within a single episode, assessment of students' abilities should occur over multiple clinical judgment scenarios (Lasater, 2007a). Not only is this an important consideration when evaluating the effectiveness of educational interventions, but also when evaluating program competencies: clinical judgment demonstrated in one episode does not necessarily translate to all situations.
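
A small simulation (hypothetical data, not drawn from the reviewed studies) illustrates the restriction-of-range point from Goodwin and Leech (2006): when scores cluster near the floor of the scale, correlations computed on that narrow band are attenuated.

    import numpy as np

    rng = np.random.default_rng(42)
    n = 5000

    # Two noisy measurements of the same underlying ability.
    ability = rng.normal(size=n)
    score1 = ability + rng.normal(scale=0.6, size=n)
    score2 = ability + rng.normal(scale=0.6, size=n)
    print(round(float(np.corrcoef(score1, score2)[0, 1]), 2))  # full range: ~.73

    # Mimic a floor effect by keeping only the lowest quartile of performers:
    # the same measurements now correlate noticeably more weakly.
    low = score1 < np.quantile(score1, 0.25)
    print(round(float(np.corrcoef(score1[low], score2[low])[0, 1]), 2))  # ~.5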

Complex simulation scenarios, such as unfolding case studies, may require students to assess, intervene, evaluate, and reflect, then reassess, adjust the intervention, evaluate, and reflect again within the scenario. Tanner (2006) described this circular process, which is consistent with the realities of nursing practice. However, evaluating clinical judgment in complex scenarios that require multiple circular iterations of the clinical judgment process could make scoring the rubric difficult. Nursing students and novice nurses have difficulty differentiating important from unimportant cues (Burbach & Thompson, 2014). Students may miss important cues and misinterpret initial assessment findings, resulting in ineffective interventions, yet eventually determine the correct actions through accurate reassessments, resulting in improved outcomes. Educators may have difficulty scoring these iterative cases.

Case studies, reflective journaling, discussion recordings, individual interviews, and self-assessment surveys are nondirect observation activities. All can be useful methods to evaluate clinical judgment; however, adaptation of the LCJR is required. According to Tanner's (2006) theory, clinical judgment is not evaluated unless all LCJR domains are scored, an important consideration if the goal is to measure clinical judgment. Because certain dimensions in the Responding domain are measurable only through direct observation, scoring of these dimensions should be modified or omitted. For example, evaluating clear communication is unrealistic from a case study; this dimension must either be omitted or scored in another way. Lasater et al. (2014) evaluated clinical judgment from reflective journals using the entire LCJR by including open-ended questions to score all but one dimension of the domains. Under the clear communication dimension of the Responding domain, students were asked to give an example of their best communication with the patient. For the being skillful dimension, students were asked to reflect on whether their care met what was expected. For the calm or confident manner dimension, students were asked to rate themselves on a 10-point Likert scale, with 10 being the highest score. Methods like these can assist educators in using the original LCJR with minor adjustments. Another option is simply not to score these dimensions; as long as the remaining Responding dimension of well-planned intervention/flexibility and the other domains are included, clinical judgment would still be evaluated.
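
A sketch of such an adaptation follows (hypothetical; the prompts paraphrase Lasater et al. (2014), and the 10-point-to-4-point rescaling is an assumption for illustration, not a published conversion):

    # Prompts paraphrased from Lasater et al. (2014) for dimensions that
    # cannot be directly observed in a written activity.
    JOURNAL_PROMPTS = {
        "clear communication":
            "Give an example of your best communication with the patient.",
        "being skillful":
            "Reflect on whether your care met what was expected.",
    }

    def rescale_self_rating(ten_point):
        """Map a 1-10 self-rating (calm, confident manner) onto the 1-4 LCJR
        scale; a hypothetical convenience, not a published conversion."""
        if not 1 <= ten_point <= 10:
            raise ValueError("self-rating must be between 1 and 10")
        return 1 + 3 * (ten_point - 1) / 9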

Implementation Recommendations

Interrater and Intrarater Reliability

When using scoring instruments, interrater reliability training and techniques to lessen rater bias are essential. All potential raters should train together by scoring scenarios and then comparing results. For research purposes, this improves the reliability of the results; in academia, improved interrater reliability helps standardize the evaluation process (Pufpaff et al., 2015). After rating scenarios, faculty can come to consensus; consensus scoring reduces bias and inconsistencies (Polit & Beck, 2017). Intrarater reliability can be affected by experience with the rubric, time for training, and rater selection (Graham et al., 2012). Whether multiple or single raters are used, consistency of scores is key. To help raters use the LCJR, educators and researchers should provide exemplar cases or behaviors expected for each dimension to improve consistency in results (Polit & Beck, 2017). Furthermore, raters should have adequate time to focus on each rating and to practice using the LCJR. If more than one rater will score the students, interrater reliability training and regular reliability checks are essential to improve and maintain consistency in scores and to be fair to students (Pufpaff et al., 2015), particularly if a consensus approach is not used.
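
During rater training, a simple check like the following (an illustration, not a published protocol) can surface the dimensions two raters should discuss to consensus; dimensions flagged repeatedly across training scenarios suggest where exemplar cases or behavioral anchors need refinement.

    def flag_for_consensus(rater1, rater2, tolerance=0):
        """Given two dicts mapping LCJR dimensions to 1-4 scores, return the
        dimensions where the raters differ by more than `tolerance`."""
        return [dimension for dimension in rater1
                if abs(rater1[dimension] - rater2[dimension]) > tolerance]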

Individual Versus Group Evaluations

The LCJR is intended to score an individual. This is difficult to do in the simulation environment when group simulations are the norm. Other active-learning methods could assist educators in evaluating clinical judgment outside the simulation laboratory, including case studies, reflective journals, and debriefing interviews. Nurse educators should consider how these methods could replace or augment simulation if scheduling multiple individual simulations is not realistic. Case studies and unfolding case studies could be planned for each class session to help students connect theory to practice. For example, students could be required to complete the case study individually before or after attending class. Educators could score the case studies using an adapted LCJR, and the students could receive points based on their score. Case study questions could guide the student to describe and prioritize needed assessments/reassessments and interventions. Furthermore, students could discuss their reasoning behind decision making. As the scenario continues to unfold, students could be asked to reflect on the outcomes of the interventions and describe new assessments and actions to take. Then students bring their completed case study to class and work through it again as a group, which promotes peer learning. Points awarded based on the group score would encourage class attendance. Furthermore, educators might assign reflective journaling activities after clinical practice experiences and/or plan clinical debriefing discussions during clinical experience. The LCJR could be used to evaluate clinical judgment in each of these activities.

Clinical Judgment Scenarios

Patient care is complex, and students should be educationally prepared to handle complex scenarios. Evaluations of clinical judgment should be performed when students are prepared to perform at expected levels (Adamson et al., 2012). Consideration should be given to curricular mapping of clinical judgment assessments that supports the developmental nature of students' knowledge and skill development. Educators should be realistic about students' abilities when planning patient care scenarios or activities. In early clinical courses, educators might evaluate students using only those clinical reasoning dimensions appropriate to the skill level and course content. For example, during fundamentals and assessment coursework, students could be evaluated on the dimensions of observation and information seeking in the Noticing domain. As students progress through clinical and specialty courses where patient scenarios become more complex, dimensions in prioritizing and making sense of data, along with well-planned interventions and reflection-in-action, should be added. Finally, in a capstone course, reflection-on-action activities could provide the foundation for evaluating students' clinical practice as they prepare to enter the workforce. These evaluations could guide student progress, serve as benchmarks for progression, and highlight when remediation is needed. However, educators should remember that without evaluating all domains, only clinical reasoning processes are being evaluated; thus, when evaluating clinical judgment to assess end-of-program outcomes, the Reflecting domain should be included. Furthermore, educators should design multiple opportunities to evaluate clinical judgment skills in a variety of patient care scenarios, reducing the chance that inferences are made from a single performance.
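
The staged progression described above can be summarized as a curricular map (the course labels and exact groupings are illustrative assumptions, not drawn from the reviewed studies):

    # Dimensions follow the progression described above; only the capstone
    # stage, which adds the Reflecting domain, evaluates clinical judgment
    # rather than clinical reasoning alone.
    EARLY = ["focused observation", "information seeking"]
    SPECIALTY = EARLY + ["prioritizing data", "making sense of data",
                         "well-planned intervention/flexibility"]
    CAPSTONE = SPECIALTY + ["recognizing deviations from expected patterns",
                            "calm, confident manner", "clear communication",
                            "being skillful", "evaluation/self-analysis",
                            "commitment to improvement"]  # all 11 dimensions

    CURRICULAR_MAP = {
        "fundamentals and health assessment": EARLY,
        "specialty clinical courses": SPECIALTY,
        "capstone": CAPSTONE,
    }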

Nondirect Observation Activities

The LCJR would need to be altered to score clinical judgment in situations other than direct observation. Research should be grounded in a theoretical framework; because the LCJR is based on Tanner's (2006) model of clinical judgment, the instrument is tied to an operational definition of clinical judgment. Any adaptation of the LCJR should continue to follow Tanner's model to ensure the instrument maintains validity; that is, validity assessment, such as content validity, should be considered. Furthermore, reliability established for the original instrument cannot be assumed to apply to an adapted version; therefore, studies to establish reliability should be conducted on any adaptation (Streiner & Norman, 2015). Internal consistency, for the total scale and subscales, should be at a minimum of .70 if used in research and greater than .90 if used for educational purposes (Gadermann et al., 2012).

To determine the ability of the adapted instrument to collect quality data, pilot studies should be planned before study data are collected (Thabane et al., 2010). If through the pilot process adjustments are needed to the LCJR, these can be implemented before full study data are collected. Planning pilot studies through an iterative process can allow for research teams to determine the effects of the adjustments before full-scale implementation (Verstegen et al., 2006). Furthermore, institutional review board applications should be worded generally enough to allow the researcher to adjust any adapted rubric without having to return to the institutional review board each time if multiple pilot iterations are planned.

Conclusion

Clinical judgment is imperative for safe practice (Manetti, 2019), and health care providers expect new graduates to be proficient in clinical judgment (Berkow et al., 2009). Nurse educators must answer the call to improve patient safety. Clinical judgment tools with support for reliability and validity are needed to facilitate clinical judgment development.

The LCJR is specific to nursing practice and has support for validity and reliability. However, to implement the LCJR in ways that do not bias study results, educators must consider the need for rater training, individual clinical judgment evaluations, students' educational level, and the potential need to adapt the rubric for nonobservation activities. More research is needed to understand the implications of using the LCJR, or any adapted version, to facilitate and evaluate clinical judgment in a variety of settings, across different methods, and in individuals versus groups.

References

  • Adamson, K. A., Gubrud, P., Sideras, S. & Lasater, K. (2012). Assessing the reliability, validity, and use of the Lasater Clinical Judgment Rubric: Three approaches. Journal of Nursing Education, 51(2), 66–73. doi:10.3928/01484834-20111130-03 PMID:22132718
  • Ashley, J. & Stamp, K. (2014). Learning to think like a nurse: The development of clinical judgment in nursing students. Journal of Nursing Education, 53(9), 519–525. doi:10.3928/01484834-20140821-14 PMID:25199107
  • Benner, P., Sutphen, M., Leonard, V. & Day, L. (2009). Educating nurses: A call for radical transformation. Jossey-Bass.
  • Berkow, S., Virkstis, K., Stewart, J. & Conway, L. (2009). Assessing new graduate nurse performance. Nurse Educator, 34(1), 17–22. doi:10.1097/01.NNE.0000343405.90362.15 PMID:19104340
  • Blum, C. A., Borglund, S. & Parcells, D. (2010). High-fidelity nursing simulation: Impact on student self-confidence and clinical competence. International Journal of Nursing Education Scholarship, 7(1), 18. doi:10.2202/1548-923X.2035 PMID:20597857
  • Burbach, B. E. & Thompson, S. A. (2014). Cue recognition by undergraduate nursing students: An integrative review. Journal of Nursing Education, 53(9, Suppl.), S73–S81. doi:10.3928/01484834-20140806-07 PMID:25102133
  • Bussard, M. (2015). The nature of clinical judgment development in reflective journals. Journal of Nursing Education, 54(8), 451–454. doi:10.3928/01484834-20150717-05 PMID:26230165
  • Bussard, M. E. (2018). Evaluation of clinical judgment in prelicensure nursing students. Nurse Educator, 43(2), 106–108. doi:10.1097/NNE.0000000000000432 PMID:28817474
  • Cappelletti, A., Engel, J. K. & Prentice, D. (2014). Systematic review of clinical judgment and reasoning in nursing. Journal of Nursing Education, 53(8), 453–458. doi:10.3928/01484834-20140724-01 PMID:25050560
  • Coram, C. (2016). Expert role modeling effect on novice nursing students' clinical judgment. Clinical Simulation in Nursing, 12(9), 385–391. doi:10.1016/j.ecns.2016.04.009
  • del Bueno, D. (2005). A crisis in critical thinking. Nursing Education Perspectives, 26(5), 278–282. PMID:16295306
  • Douglass, K. (2014). The effect of developing nurses' thinking model on clinical judgment in nursing students (Publication No. 3642205) [DNP capstone project, Gardner-Webb University]. Sigma Repository.
  • Fawaz, M. A. & Hamdan-Mansour, A. M. (2016). Impact of high-fidelity simulation on the development of clinical judgment and motivation among Lebanese nursing students. Nurse Education Today, 46, 36–42. doi:10.1016/j.nedt.2016.08.026 PMID:27591378
  • Ferguson, R. (2012). Critical thinking skills in nursing students: Using human patient simulation (Publication No. 3519868) [Doctoral dissertation, University of the Pacific]. ProQuest.
  • Gadermann, A. M., Guhn, M. & Zumbo, B. D. (2012). Estimating ordinal reliability for Likert-type and ordinal item response data: A conceptual, empirical, and practical guide. Practical Assessment, Research & Evaluation, 17(3), 1–13.
  • Go, D. P. (2012). High fidelity patient simulation and clinical judgment skills acquisition in BSN students (Publication No. 3510681) [DNP project, Fairleigh Dickinson University]. ProQuest.
  • Goodwin, L. D. & Leech, N. L. (2006). Understanding correlation: Factors that affect the size of r. Journal of Experimental Education, 74(3), 249–266. doi:10.3200/JEXE.74.3.249-266
  • Graham, M., Milanowski, A. & Miller, J. (2012). Measuring and promoting inter-rater agreement of teacher and principal performance ratings. Center for Educator Compensation Reform. https://files.eric.ed.gov/fulltext/ED532068.pdf
  • Gubrud-Howe, P. M. (2008). Development of clinical judgment in nursing students: A learning framework to use in designing and implementing simulated learning experiences (Publication No. 3343767) [Doctoral dissertation, Portland State University]. ProQuest.
  • Hickerson, K. A., Taylor, L. A. & Terhaar, M. F. (2016). The preparation-practice gap: An integrative review. The Journal of Continuing Education in Nursing, 47(1), 17–23. doi:10.3928/00220124-20151230-06
  • Jensen, R. (2010). Evaluating clinical judgment in a nursing capstone course. Presented at the 2010 Assessment Institute, Indianapolis, IN. https://core.ac.uk/download/pdf/47233213.pdf
  • Johnson, E. A., Lasater, K., Hodson-Carlton, K., Siktberg, L., Sideras, S. & Dillard, N. (2012). Geriatrics in simulation: Role modeling and clinical judgment effect. Nursing Education Perspectives, 33(3), 176–180. doi:10.5480/1536-5026-33.3.176 PMID:22860481
  • Kavanagh, J. M. & Szweda, C. (2017). A crisis in competency: The strategic and ethical imperative to assessing new graduate nurses' clinical reasoning. Nursing Education Perspectives, 38(2), 57–62. doi:10.1097/01.NEP.0000000000000112 PMID:29194297
  • Kubin, L. & Wilson, C. E. (2017). Effects of community volunteer children on student pediatric assessment behaviors. Clinical Simulation in Nursing, 13(7), 303–308. doi:10.1016/j.ecns.2017.04.011
  • Kulju, L. A. (2013). The acquisition of pain knowledge, attitudes, and clinical judgment in baccalaureate nursing students: The effect of high fidelity patient simulation (Publication No. 3606003) [Doctoral dissertation, University of Northern Colorado]. Michener Archives.
  • Lasater, K. (2007a). Clinical judgment development: Using simulation to create an assessment rubric. Journal of Nursing Education, 46, 496–503. doi:10.3928/01484834-20071101-04 PMID:18019107
  • Lasater, K. (2007b). High-fidelity simulation and the development of clinical judgment: Students' experiences. Journal of Nursing Education, 46, 269–276. doi:10.3928/01484834-20070601-06 PMID:17580739
  • Lasater, K., Johnson, E. A., Ravert, P. & Rink, D. (2014). Role modeling clinical judgment for an unfolding older adult simulation. Journal of Nursing Education, 53(5), 257–264. doi:10.3928/01484834-20140414-01 PMID:24716674
  • Manetti, W. (2019). Sound clinical judgment in nursing: A concept analysis. Nursing Forum, 54, 102–110. doi:10.1111/nuf.12303 PMID:30380153
  • Mann, J. W. (2010). Promoting curriculum choices: Critical thinking and clinical judgment skill development in baccalaureate nursing students [Doctoral dissertation, University of Kansas]. KU ScholarWorks.
  • Marcyjanik, D. (2016). Senior baccalaureate nursing students' clinical competence and objective structured clinical examination (Publication No. 10172478) [Doctoral dissertation, Capella University]. ProQuest.
  • Mariani, B., Cantrell, M. A., Meakim, C., Prieto, P. & Dreifuerst, K. T. (2013). Structured debriefing and students' clinical judgment abilities in simulation. Clinical Simulation in Nursing, 9, e147–e155. doi:10.1016/j.ecns.2011.11.009
  • McMahon, M. (2013). Effectiveness of a problem-based learning intervention on the clinical judgment abilities and ambiguity tolerance of baccalaureate nursing students during high fidelity simulation (Publication No. 3537438) [Doctoral dissertation, University of Massachusetts Dartmouth]. ProQuest.
  • Polit, D. F. & Beck, C. T. (2017). Nursing research: Generating and assessing evidence for nursing practice (10th ed.). Wolters Kluwer Health.
  • Pufpaff, L. A., Clark, L. & Jones, R. E. (2015). The effects of rater training on inter-rater agreement. Mid-Western Educational Researcher, 27(2), 117–141.
  • Rodriguez, E. M. (2014). Development of clinical judgment for Hispanic and non-Hispanic nursing students: A comparison of traditional and simulated clinical experiences [Doctoral dissertation, University of Texas at Tyler]. Scholar Works. http://hdl.handle.net/10950/242
  • Russell, C. L. (2005). An overview of the integrative research review. Progress in Transplantation, 15(1), 8–13. doi:10.1177/152692480501500102 PMID:15839365
  • Seacrist, M. J. & Noell, D. (2016). Development of a tool to measure nurse clinical judgment during maternal mortality case review. Journal of Obstetric, Gynecologic, and Neonatal Nursing, 45, 870–877. doi:10.1016/j.jogn.2016.03.143 PMID:27665070
  • Skoronski, L. (2018). The effects of a repeating simulation experience on senior nursing students (Publication No. 10814216) [Doctoral dissertation, University of Wisconsin-Milwaukee]. ProQuest.
  • Streiner, D. L. & Norman, G. R. (2015). Health measurement scales: A practical guide to their development and use (5th ed.). Oxford University Press. doi:10.1093/med/9780199685219.001.0001
  • Tanner, C. A. (2006). Thinking like a nurse: A research-based model of clinical judgment in nursing. Journal of Nursing Education, 45(6), 204–211. doi:10.3928/01484834-20060601-04 PMID:16780008
  • Tanner, C. A. (2010). Transforming prelicensure nursing education: Preparing the new nurse to meet emerging health care needs. Nursing Education Perspectives, 31(6), 347–353. PMID:21280438
  • Thabane, L., Ma, J., Chu, R., Cheng, J., Ismaila, A., Rios, L. P., Robson, R., Thabane, M., Giangregorio, L. & Goldsmith, C. H. (2010). A tutorial on pilot studies: The what, why and how. BMC Medical Research Methodology, 10(1), 1. doi:10.1186/1471-2288-10-1 PMID:20053272
  • Todd, M., Manz, J. A., Hawkins, K. S., Parsons, M. E. & Hercinger, M. (2008). The development of a quantitative evaluation tool for simulations in nursing education. International Journal of Nursing Education Scholarship, 5(1), 41. doi:10.2202/1548-923X.1705 PMID:19049492
  • Verstegen, D. M. L., Barnard, Y. F. & Pilot, A. (2006). Which events can cause iteration in instructional design? An empirical study of the design process. Instructional Science, 34(6), 481–517. doi:10.1007/s11251-005-3346-0
  • Victor-Chmil, J. & Larew, C. (2013). Psychometric properties of the Lasater Clinical Judgment Rubric. International Journal of Nursing Education Scholarship, 10(1), 45–52. doi:10.1515/ijnes-2012-0030 PMID:23629461
  • Yuan, H. B., Williams, B. A. & Man, C. Y. (2014). Nursing students' clinical judgment in high-fidelity simulation based learning: A quasi-experimental study. Journal of Nursing Education and Practice, 4(5), 7–15. doi:10.5430/jnep.v4n5p7

Lasater Clinical Judgment Rubric

Each dimension is rated at one of four levels: exemplary (4), accomplished (3), developing (2), or beginning (1).

Effective noticing involves:

Focused observation
  Exemplary: Focuses observation appropriately; regularly observes and monitors a wide variety of objective and subjective data to uncover any useful information.
  Accomplished: Regularly observes and monitors a variety of data, including both subjective and objective; most useful information is noticed; may miss the most subtle signs.
  Developing: Attempts to monitor a variety of subjective and objective data but is overwhelmed by the array of data; focuses on the most obvious data, missing some important information.
  Beginning: Confused by the clinical situation and the amount and kind of data; observation is not organized and important data are missed, and/or assessment errors are made.

Recognizing deviations from expected patterns
  Exemplary: Recognizes subtle patterns and deviations from expected patterns in data and uses these to guide the assessment.
  Accomplished: Recognizes most obvious patterns and deviations in data and uses these to continually assess.
  Developing: Identifies obvious patterns and deviations, missing some important information; unsure how to continue the assessment.
  Beginning: Focuses on one thing at a time and misses most patterns and deviations from expectations; misses opportunities to refine the assessment.

Information seeking
  Exemplary: Assertively seeks information to plan intervention: carefully collects useful subjective data from observing and interacting with the patient and family.
  Accomplished: Actively seeks subjective information about the patient's situation from the patient and family to support planning interventions; occasionally does not pursue important leads.
  Developing: Makes limited efforts to seek additional information from the patient and family; often seems not to know what information to seek and/or pursues unrelated information.
  Beginning: Is ineffective in seeking information; relies mostly on objective data; has difficulty interacting with the patient and family and fails to collect important subjective data.

Effective interpreting involves:

Prioritizing data
  Exemplary: Focuses on the most relevant and important data useful for explaining the patient's condition.
  Accomplished: Generally focuses on the most important data and seeks further relevant information but also may try to attend to less pertinent data.
  Developing: Makes an effort to prioritize data and focus on the most important, but also attends to less relevant or useful data.
  Beginning: Has difficulty focusing and appears not to know which data are most important to the diagnosis; attempts to attend to all available data.

Making sense of data
  Exemplary: Even when facing complex, conflicting, or confusing data, is able to (a) note and make sense of patterns in the patient's data, (b) compare these with known patterns (from the nursing knowledge base, research, personal experience, and intuition), and (c) develop plans for interventions that can be justified in terms of their likelihood of success.
  Accomplished: In most situations, interprets the patient's data patterns and compares with known patterns to develop an intervention plan and accompanying rationale; the exceptions are rare or in complicated cases where it is appropriate to seek the guidance of a specialist or a more experienced nurse.
  Developing: In simple, common, or familiar situations, is able to compare the patient's data patterns with those known and to develop or explain intervention plans; has difficulty, however, with even moderately difficult data or situations that are within the expectations of students; inappropriately requires advice or assistance.
  Beginning: Even in simple, common, or familiar situations, has difficulty interpreting or making sense of data; has trouble distinguishing among competing explanations and appropriate interventions, requiring assistance both in diagnosing the problem and developing an intervention.

Effective responding involves:

Calm, confident manner
  Exemplary: Assumes responsibility; delegates team assignments; assesses patients and reassures them and their families.
  Accomplished: Generally displays leadership and confidence and is able to control or calm most situations; may show stress in particularly difficult or complex situations.
  Developing: Is tentative in the leader role; reassures patients and families in routine and relatively simple situations, but becomes stressed and disorganized easily.
  Beginning: Except in simple and routine situations, is stressed and disorganized, lacks control, makes patients and families anxious or less able to cooperate.

Clear communication
  Exemplary: Communicates effectively; explains interventions; calms and reassures patients and families; directs and involves team members, explaining and giving directions; checks for understanding.
  Accomplished: Generally communicates well; explains carefully to patients; gives clear directions to team; could be more effective in establishing rapport.
  Developing: Shows some communication ability (e.g., giving directions); communication with patients, families, and team members is only partly successful; displays caring but not competence.
  Beginning: Has difficulty communicating; explanations are confusing; directions are unclear or contradictory; patients and families are made confused or anxious and are not reassured.

Well-planned intervention/flexibility
  Exemplary: Interventions are tailored for the individual patient; monitors patient progress closely and is able to adjust treatment as indicated by patient response.
  Accomplished: Develops interventions on the basis of relevant patient data; monitors progress regularly but does not expect to have to change treatments.
  Developing: Develops interventions on the basis of the most obvious data; monitors progress but is unable to make adjustments as indicated by the patient's response.
  Beginning: Focuses on developing a single intervention, addressing a likely solution, but it may be vague, confusing, and/or incomplete; some monitoring may occur.

Being skillful
  Exemplary: Shows mastery of necessary nursing skills.
  Accomplished: Displays proficiency in the use of most nursing skills; could improve speed or accuracy.
  Developing: Is hesitant or ineffective in using nursing skills.
  Beginning: Is unable to select and/or perform nursing skills.

Effective reflecting involves:

Evaluation/self-analysis
  Exemplary: Independently evaluates and analyzes personal clinical performance, noting decision points, elaborating alternatives, and accurately evaluating choices against alternatives.
  Accomplished: Evaluates and analyzes personal clinical performance with minimal prompting, primarily about major events or decisions; key decision points are identified, and alternatives are considered.
  Developing: Even when prompted, briefly verbalizes the most obvious evaluations; has difficulty imagining alternative choices; is self-protective in evaluating personal choices.
  Beginning: Even prompted evaluations are brief, cursory, and not used to improve performance; justifies personal decisions and choices without evaluating them.

Commitment to improvement
  Exemplary: Demonstrates commitment to ongoing improvement; reflects on and critically evaluates nursing experiences; accurately identifies strengths and weaknesses and develops specific plans to eliminate weaknesses.
  Accomplished: Demonstrates a desire to improve nursing performance; reflects on and evaluates experiences; identifies strengths and weaknesses; could be more systematic in evaluating weaknesses.
  Developing: Demonstrates awareness of the need for ongoing improvement and makes some effort to learn from experience and improve performance but tends to state the obvious and needs external evaluation.
  Beginning: Appears uninterested in improving performance or is unable to do so; rarely reflects; is uncritical of himself or herself or overly critical (given level of development); is unable to see flaws or need for improvement.

Table of Evidence

Study Purpose Participants Design Method LCJR Psychometrics/Usage/Limitations
Blum et al. (2010) Examination of the quantitative relationship between simulation, student self-confidence, and clinical competence. N = 53 first-semester BSN students in a health assessment and skills course. Quantitative, quasi-experimental Students were assigned to control (traditional task trainers and student volunteers) and experimental (simulation-enhanced with human patient simulators) groups that met weekly during the 13-week course. Faculty and students scored clinical judgment during the midterm and final week. For those in the experimental group, clinical judgment was faculty scored in simulation scenarios of groups of two students. Selected four subscale items; recognizing deviations from expected patterns, information seeking, prioritizing data, and clear communication. Internal consistency for student self-rated was α = 0.810 and faculty α = 0.884. LCJR scored independently by multiple clinical faculty. Rater training not discussed.
Bussard (2018) Examine the differences in clinical judgment of students across four simulation scenarios. N = 70 diploma prelicensure students completing a medical–surgical course Quantitative (part of mixed-method study), quasi-experimental Students participated, it appears as an individual, in four different simulation scenarios scheduled throughout the semester. Scenarios involved diabetic ketoacidosis, leg fracture, pneumonia with ischemic stroke, and bowel obstruction with a PE. Faculty scored clinical judgment during the simulations. All dimensions and subscales used. LCJR scored independently by one rater. Psychometric data not provided.
Bussard (2015) Explore the nature of clinical judgment development as revealed in students' reflective journals after participating in four progressive high-fidelity simulation (HFS) scenarios N = 30 pre-licensure diploma nursing students in a medical–surgical nursing course. Qualitative (part of mixed-method study), interpretive description Students participated, it appears as an individual, in four different simulation scenarios scheduled throughout the semester. Scenarios involved diabetic ketoacidosis, leg fracture, pneumonia with ischemic stroke, and bowel obstruction with a PE. Clinical judgment was faculty scored through reflective journals on the experience. All dimensions and subscales used. Psychometric data not provided. LCJR scored independently by two raters. Rater training discussed.
Coram (2016) Determine the effect of the specific prebriefing strategy of expert role modeling on novice nursing students' clinical judgment scores N = 43 junior university students enrolled in a medical–surgical nursing course. Quantitative, experimental Students were randomly assigned to treatment (prebriefing strategy of expert role modeling) and control (standard prebriefing) before participating, seemingly in a group, in simulation scenarios. Faculty scored clinical judgment during the simulation scenarios. Students self-scored and peer scored after the simulations. All dimensions and subscales used. Percent agreement yielded a score of 80%. LCJR scored independently by three raters. Rater training not discussed. Limitations - The lack of significance in student scores may have been related to inadequate training of the students in how to use the LCJR.
Douglass (2014) DNP project Test the effect of the middle range theory of the Developing Nurses' Thinking (DNT) Model on clinical judgment in nursing students. N = 44 first semester senior BSN students in an adult health course Quantitative, experimental pre/posttest design Students were randomly assigned from clinical groups to receive the intervention (DNT model during clinical post conference) or control (standard clinical post conference). Faculty scored clinical judgment during a group simulation pre- and postintervention. All dimensions and subscales used. Psychometric data not provided. LCJR scored independently by one rater. Limitations - limited time for rater practice before using.
Fawaz & Hamdan-Mansour (2016) Study the impact of using high-fidelity simulation (HFS) on the development of clinical judgment and motivation for academic achievement among Lebanese nursing students. N = 56 first-year nursing students from two universities enrolled in an adult nursing course. Quantitative, quasi-experimental, posttest only Students in University A were assigned the intervention (individual simulation training on one congestive heart failure scenario) while students at university B were assigned to the control (traditional-method demonstration) group. All students then completed clinical training for patients with heart failure. Faculty scored clinical judgment at the end of clinical training. All dimensions and subscales used. Internal consistency was α = 0.93. LCJR scored independently by one rater.
Ferguson (2012) Dissertation Investigate the use of human patient simulation (HPS) as a tool to develop critical thinking knowledge and skills in undergraduate nursing students. N = 57 second-semester undergraduate BSN students Quantitative, experimental pre/posttest Students randomized into three groups; 1) lecture followed by a micro-simulation (virtual reality on cardiac arrhythmias), 2) lecture followed by a micro-simulation and group mentored simulation (narrow and wide complex tachycardia), 3) lecture followed by a microsimulation and group mentored/nonmentored simulation. Faculty scored clinical judgment for groups 2 and 3 through observation during a simulation. All dimensions and subscales used. Psychometric data not reported. LCJR scored independently by one rater. Limitations - potential scoring bias due to only 1 rater limiting inter-rater reliability.
Go (2012) DNP project Examine the effect of high-fidelity patient simulation (HFPS) on the clinical judgment acquisition N = 35 nursing BSN students enrolled in a senior level medical–surgical nursing course Quantitative, quasi-experimental, pre/posttest design All students participated in a simulation scenario about congestive heart failure as a group. Faculty scored for clinical judgment based on their individual answers on a LCJR based congestive heart failure case scenario completed before and after the simulation. All dimensions and subscales used. Psychometric data not provided. LCJR scored independently by two raters. Rater training discussed.
Gubrud-Howe (2008) Dissertation Examine the effect of the learning framework How People Learn (HPL) impact development of clinical judgment. N = 36 ASN students in final semester enrolled at a community college. Mixed-methods, quasi-experimental pre/posttest, qualitative Students assigned to control (standard debriefing) and intervention (learning activity pre-simulation and structured debriefing based on the HPL framework) group. Both groups participated in the same 4 simulation scenarios in student groups. Faculty scored for clinical judgment during the 1st and 4th simulation. All dimensions and subscales used. Internal consistency was α = .886 for “Noticing”, α = .931 for “Interpreting”, α = .887 for “Responding”, and α = .914 for “Reflecting”. Inter-rater reliability was 92% pretest and 96% posttest agreement. One-way ANOVA found no significant differences between raters. LCJR scored by two raters for each student. Rater training discussed. Limitations - lack of established reliability of the tool at the time.
Johnson et al. (2012). Purpose: Determine the effect of expert role modeling on nursing students' clinical judgment in the care of a simulated geriatric hip fracture patient. Sample: N = 94 first-semester nursing students from five schools of nursing. Design: Quantitative (part of a mixed-methods study), quasi-experimental. Procedure: Students were assigned to control and treatment groups before a simulation experience; the treatment group watched a video of expert role modeling in a simulated geriatric hip fracture scenario before the group simulation. Faculty scored clinical judgment during the simulation. LCJR use: All dimensions and subscales; scored independently by two raters for each student, with the two raters' scores averaged; rater training discussed. Psychometrics: Not reported.
Kubin & Wilson (2017). Purpose: Examine the impact of using community volunteer children on physical assessment abilities and comfort levels among undergraduate pediatric nursing students. Sample: N = 99 undergraduate baccalaureate nursing students in the third-semester pediatric course of a four-semester nursing program. Design: Quantitative, quasi-experimental. Procedure: After didactic and hands-on skills laboratory instruction on pediatric assessment, one group practiced group assessments on a high-fidelity simulator and the other practiced as a group on community volunteer children. Students self-assessed, and faculty scored, clinical judgment while each student individually performed a pediatric assessment during a hospital rotation. LCJR use: Only the Noticing and Responding dimensions; scored independently by multiple clinical faculty; rater training not discussed. Psychometrics: Not reported.
Kulju (2013), dissertation. Purpose: Examine the effects of high-fidelity patient simulation (HFPS) on the development of BSN students' knowledge, attitudes, and clinical judgment regarding pain management. Sample: N = 14 students enrolled in traditional-track and accelerated-track BSN programs. Design: Quantitative, experimental, pre/posttest repeated measures. Procedure: After didactic teaching on pain management, students were randomly assigned to a control group (interactive case study) or a treatment group (one-scenario HFPS). Faculty scored clinical judgment during an individual entry-level pain management simulation (pretest) and an individual complex pain management simulation (posttest). LCJR use: All dimensions and subscales; scored independently by two raters; rater training discussed. Psychometrics: Internal consistency α = .97 pretest and .96 posttest; interrater reliability was Pearson r = .993 with 84% agreement pretest and r = .780 with 81% agreement posttest (a sketch of how such reliability statistics are computed follows this table). Limitations: Some lack of agreement between raters indicated subjectivity and potential measurement error.
Lasater et al. (2014). Purpose: Determine the effect of expert role modeling on nursing students' clinical judgment in the care of a simulated geriatric hip fracture patient. Sample: N = 134 first-semester nursing students from four schools of nursing. Design: Qualitative (part of a mixed-methods study), thematic and content analysis. Procedure: Students were assigned to control and treatment groups before a simulation experience and real-life care experiences; the treatment group watched a video of expert role modeling in a simulated geriatric hip fracture scenario before the group simulation. Students in both groups then participated in real-life care experiences and completed reflective journals, which faculty scored after the simulation experience and 4 weeks after the real-life care experiences. LCJR use: All dimensions and subscales. Psychometrics: Not reported; qualitative data were coded and the research team analyzed the coded data.
Mann (2010), dissertation. Purpose: Evaluate the effectiveness of grand rounds as an educational strategy to develop critical thinking and clinical judgment skills in baccalaureate nursing students. Sample: N = 22 second-semester baccalaureate students. Design: Mixed-methods, experimental, pre/posttest. Procedure: Students were randomly assigned to an intervention or comparison group; both groups completed a health care dilemma case study. The intervention group also received a faculty-led grand rounds activity and a follow-up interview while completing the case study. Faculty scored clinical judgment from recordings. LCJR use: All dimensions and subscales; scored independently by two raters; rater training not discussed. Psychometrics: Interrater agreement was 98.49%.
Marcyjanik (2016), dissertation. Purpose: Determine whether an objective structured clinical examination (OSCE) affected senior baccalaureate nursing students' clinical self-competence scores. Sample: N = 35 senior baccalaureate capstone students in their final semester. Design: Quantitative, quasi-experimental pre/posttest. Procedure: Students were assigned to an intervention group (participated individually in a three-station OSCE) or a control group (no OSCE participation). Students self-scored clinical judgment pre- and posttest with a reflective self-competence survey. LCJR use: All dimensions and subscales; self-scored by students; student rater training not discussed. Psychometrics: Internal consistency α = .82. Limitations: The amount of information students had about the LCJR, cited only as a potential source of bias without further explanation.
Mariani et al. (2013). Purpose: Empirically test and compare the clinical judgment of students who participated in structured debriefing using Debriefing for Meaningful Learning (DML) with that of students who received unstructured debriefing. Sample: N = 86 students in a first-semester, junior-level university medical–surgical nursing course. Design: Mixed-methods, quasi-experimental. Procedure: Students were assigned to a control or intervention group and participated individually in two simulations; after each simulation, the intervention group received DML and the control group received unstructured debriefing. Faculty scored clinical judgment at the conclusion of each simulation, prior to debriefing. LCJR use: All dimensions and subscales; scored independently by multiple faculty; rater training discussed. Psychometrics: Internal consistency ranged from α = .80 to .97 for the total scale and subscales at each measurement point; interrater reliability by Pearson product-moment correlation was high (r = .92, p < .01). Limitations: A narrow range of scores reduced the ability to discriminate between groups.
McMahon (2013), dissertation. Purpose: Test the feasibility and effectiveness of a problem-based learning (PBL) intervention on clinical judgment in baccalaureate nursing students during a high-fidelity simulation (HFS) experience. Sample: N = 18 baccalaureate students in a senior-level final capstone course. Design: Quantitative, quasi-experimental pre/posttest. Procedure: Students were randomly assigned to a control or intervention group to prepare for the HFS experience; the intervention group completed an online, group-based, facilitator-guided PBL activity, whereas the control group prepared independently. The group simulation was recorded, and faculty scored clinical judgment from the recordings. LCJR use: All dimensions and subscales; scored independently by two raters; rater training discussed. Psychometrics: Only 5 of the 18 LCJR evaluations met an interrater reliability of .70 or greater; agreement ranged from 0% to 100%. Limitations: Clinical judgment was evaluated only once, and interrater reliability was poor.
Rodriguez (2014), dissertation. Purpose: Evaluate the effectiveness of simulation alone, or in combination with traditional clinical experiences, on the development of clinical judgment for all nursing students and for Hispanic nursing students specifically. Sample: N = 60 students who had completed the first two semesters of an associate degree program. Design: Mixed-methods, experimental, sequential exploratory. Procedure: Students were randomly assigned to one of three clinical groups for 4 weeks: traditional, simulation, or combined traditional and simulation. Faculty scored clinical judgment from individual performance four times, following each clinical experience. LCJR use: All dimensions and subscales; scored independently by three raters; rater training discussed. Psychometrics: Percent agreement between data collectors ranged from 76% to 95%; internal consistency across the 4 weeks of clinical evaluation was α = .93 (week 3), .93 (week 4), .93 (week 5), and .90 (week 6). Limitations: Potential for inconsistent scores.
Skoronski (2018), dissertation. Purpose: Determine whether participation in a repeated cardiac arrest simulation experience affects senior nursing students' knowledge and clinical judgment. Sample: N = 56 senior baccalaureate students. Design: Quantitative, quasi-experimental, repeated measures. Procedure: Recruited students participated in two group cardiac arrest simulations, and faculty scored clinical judgment during each simulation. LCJR use: All dimensions and subscales; scored independently by two raters; rater training discussed. Psychometrics: Not provided. Limitations: Psychometric data were not collected.
Yuan et al. (2014). Purpose: Assess nursing students' clinical judgment in high-fidelity simulation-based learning using observational measures; the research questions specified that clinical judgment would be measured across a series of simulation sessions. Sample: N = 120 second- and third-year baccalaureate students who had completed fundamentals of nursing, health assessment, and medical–surgical nursing. Design: Quantitative, quasi-experimental, single-group repeated measures. Procedure: Second- and third-year students were assigned to simulation groups within their year; each group participated in five simulations, and faculty scored clinical judgment from video recordings of the simulations. LCJR use: All dimensions and subscales; scored independently by two raters; rater training not discussed. Psychometrics: Interrater reliability ranged from .833 to .910.
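Several rows above report internal consistency (Cronbach's alpha) and interrater reliability (percent agreement and Pearson r) for LCJR scores. The following minimal sketch, in Python with NumPy, shows how these statistics can be computed from rater data; the scores, sample size, and exact-match definition of agreement are hypothetical illustrations, not data from any reviewed study.

```python
# Illustrative computation of the reliability statistics cited in this
# review (Cronbach's alpha, percent agreement, Pearson r) for LCJR scores.
# All data here are HYPOTHETICAL; they come from no reviewed study.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical item-level scores: 20 students x 11 LCJR dimensions,
# each dimension rated 1 (beginning) to 4 (exemplary).
items = rng.integers(1, 5, size=(20, 11)).astype(float)

# Cronbach's alpha: (k / (k - 1)) * (1 - sum(item variances) / variance(total)).
# Random data yield an uninformative alpha; the formula is the point here.
k = items.shape[1]
alpha = (k / (k - 1)) * (1 - items.var(axis=0, ddof=1).sum()
                         / items.sum(axis=1).var(ddof=1))

# Hypothetical total LCJR scores (range 11-44) from two independent raters,
# with rater B differing from rater A by at most 2 points.
rater_a = rng.integers(11, 45, size=20)
rater_b = np.clip(rater_a + rng.integers(-2, 3, size=20), 11, 44)

# Percent agreement, defined here as an exact score match between raters;
# individual studies may instead count agreement within a tolerance.
pct_agreement = 100 * np.mean(rater_a == rater_b)

# Pearson product-moment correlation between the two raters' scores.
pearson_r = np.corrcoef(rater_a, rater_b)[0, 1]

print(f"Cronbach's alpha: {alpha:.2f}")
print(f"Percent agreement: {pct_agreement:.0f}%")
print(f"Pearson r: {pearson_r:.2f}")
```

A high alpha indicates that the 11 dimensions move together as a single scale, whereas percent agreement and Pearson r index how consistently two raters score the same performance. As the Kulju (2013) and McMahon (2013) rows illustrate, a strong correlation between raters can coexist with imperfect exact agreement, which is why several studies report both.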
Authors

Dr. Lee is Assistant Dean for Academics and Assistant Dean for Program Evaluation, University of Missouri Kansas City, School of Nursing and Health Studies, Kansas City, Missouri.

The author has disclosed no potential conflicts of interest, financial or otherwise.

The author thanks Dr. Joanne Schneider, Crystal Peeples, and Diana Owenby.

Address correspondence to Kristin C. Lee, PhD, RN, CNE, Assistant Dean for Academics and Assistant Dean for Program Evaluation, University of Missouri Kansas City, School of Nursing and Health Studies, 2464 Charlotte Street, Kansas City, MO 64108; email: leekri@umkc.edu.

Received: March 21, 2020
Accepted: August 03, 2020

doi:10.3928/01484834-20210120-03
