Simulation provides a controlled and safe practice environment for gaining experience and self-confidence in health assessment clinical skills outside of clinical practice. A strength of using simulation is that there is no direct risk to patients. However, simulation requires considerable financial resources in terms of faculty time and effort. The need to maximize faculty efficiency while ensuring teaching effectiveness and positive learning outcomes was the impetus behind the current Senior Students as Teachers feasibility study, which implemented peer-assisted learning (Pearce et al., 2012) using a train-the-trainer (TTT) model for simulation (Suhrheinrich, 2011).
In the TTT model, the trainer is taught to train, monitor, and reinforce learning in other individuals in the group, institution, or community (Pearce et al., 2012). These initial trainers teach their peers, which increases the retention of the material. Peer-assisted learning, which is also called peer teaching (Priharjo & Hoy, 2011) or peer mentoring (Dennison, 2010), offers pedagogical advantages. Partnering students who are more advanced in a subject or skill area with those who are novice learners can encourage active learning for both participants; in addition, it reinforces the professional responsibility of the senior-level student for the less experienced student. The partnership can also encourage collaboration between the two peer colleagues (McKenna & French, 2011; Priharjo & Hoy, 2011). Senior-level students reported enhanced self-confidence and improved leadership, interpersonal, and teaching skills in the peer-led teaching experience. Novices in the peer-led teaching experience reported increased confidence, decreased anxiety, and improved clinical practice (Giordana & Wedin, 2010; Harmer, Huffman, & Johnson, 2011; Zentz, Kurtz, & Alverson, 2014).
The purpose of the current feasibility study was to determine the effectiveness of using senior-level nursing students as teachers of junior-level students of health assessment clinical skills in the simulation laboratory in a prelicensure nursing program across two areas—enhanced peer-assisted learning and efficiency of faculty resources, including reduced facilitator cost. The study aims were to:
- Compare the implementation and evaluation of clinical performance of junior students taught by senior students with those taught by nursing faculty.
- Compare the postsimulation debriefing assessment of junior students taught by senior students with those taught by nursing faculty.
- Compare junior students’ satisfaction with the simulation learning experience led by senior students or nursing faculty.
A quasi-experimental study design was used to evaluate whether junior nursing students who received instruction from senior students differed from those who received instruction from nursing faculty in (a) clinical performance in simulated physical assessment modules, (b) debriefing effectiveness, and (c) satisfaction with the simulation learning experience. The study recruited a convenience sample of 20 senior nursing students and 60 junior nursing students from an accelerated baccalaureate program in an urban teaching hospital in the southeastern United States in 2011. Inclusion criteria for senior students were a grade point average greater than 3.0 and a history of satisfactory clinical performance; for junior students, the criterion was enrollment in third-semester courses. Junior students were randomly assigned to receive either a faculty-led or senior student–led simulation experience. Informed consent was obtained from all participants. As a safeguard, a procedure was in place for faculty to reteach content to learners in the investigational group immediately following assessment if their performance had been unsatisfactory; the procedure ultimately was not needed.
Tools were developed by a panel of nurse experts to measure student outcomes using adult and pediatric human simulators. These faculty determined the domain and content of the adult and pediatric assessments and generated an inclusive list of health assessment items. A clinical performance checklist was then developed by the university’s simulation center, using well-documented, evidence-based practice criteria. The checklist records successful completion of pediatric and adult simulation performance items (35 pediatric items, 42 adult items), with nearly half of the items identified as critical and required elements. For the current feasibility study, interrater reliability between the live simulation assessment and video recording was not established. Therefore, assessment using synchronous evaluation was eliminated; two faculty subsequently viewed the video recordings. These faculty together reviewed a subset of the recordings, compared the results, discussed why the checklists were evaluated as they were, and reached an informal consensus on assessment of the performance behaviors for grading. These faculty then reviewed the student performance recordings asynchronously.
Nurse experts validated the appropriateness of the performance behaviors of the nursing students by piloting the tool with 12 volunteer nursing students who were not participating in the study. A questionnaire assessed whether the postsimulation debriefing provided feedback of value to the learner and whether the simulation instructor or facilitator explained difficult concepts clearly and demonstrated respect for the learners. Relevant items from the university’s Simulation Experience Evaluation Questionnaire measured student satisfaction with the learning experience.
Outcome variables were collected via the clinical performance checklist, video recording of each debriefing, and student evaluations for the fall and spring semesters of a single academic year. Measures included the (a) change in the number of behavior items and critical items missed on the preinstruction simulation clinical performance behavior checklist and subsequent postinstruction checklist, (b) mean responses on items in a postsimulation debriefing questionnaire, and (c) mean responses of relevant Simulation Experience Evaluation Questionnaire items. Questionnaire items were rated using a 4-point Likert-type scale (1 = strongly disagree to 4 = strongly agree).
Senior Students’ Intervention
Senior students received a 5-hour formal orientation from expert nursing faculty. The simulation specialist introduced the students to the computer equipment, the software program to run the scenarios, and the B-Line data capture system (B-Line Medical, 2014) to provide feedback to students and faculty. The B-Line system provides data capture, visualization, and analysis of simulated clinical situations and skills. A simulation faculty member with expert content knowledge and clinical experience facilitated a review of the assessment performance criteria, the debriefing process, and a mock trial run of a simulation. Senior students were assigned to teach junior students for a 3-hour block of time. The same faculty member served in the role of supervisor and coach and was always present in the simulation center to facilitate simulations and provide support for all learners. Senior students earned 8 hours of clinical credit (3.6% of total clinical hours) toward the leadership management portion of their capstone clinical course. Prior to implementation of the study, the researchers received approval from the institutional review board, and compliance with all regulations was maintained.
Junior Students’ Intervention
Junior students performed three videotaped simulation assessment modules (SAM), with performance assessment conducted by a senior student teacher or a faculty member after the first and third simulations.
Baseline SAM #1. Junior students completed a 10-minute, graded baseline simulation prior to course content delivery. Faculty scored the students by viewing the first videotaped assessment and using the clinical performance checklist. Checklist criteria were presented to junior students before study participation through a faculty-prepared adult and infant simulation video.
Learning SAM #2. Clinical groups of nine junior students were split into triads composed of a primary nurse, surrogate parent or spouse, and observer to complete their simulation module. Each student triad entered and remained in the simulation room for the entire experience, rotating through the three roles. After each primary nurse performed the assessment, the triad received a debriefing evaluation from either a faculty member or a senior student.
Assessment SAM #3. For the final simulation, participants repeated their role as primary nurse. Faculty subsequently scored the students by viewing video recordings to assess change.
Descriptive statistics and the Mann-Whitney test were used to analyze the study variables. The statistical significance level was set at alpha = 0.05, with Bonferroni correction.
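As an illustration of the analysis described above, the sketch below implements a tie-aware Mann-Whitney U statistic and a Bonferroni-corrected per-comparison alpha in plain Python. This is a minimal sketch of the named tests, not the study's actual analysis; the function names and the example data are hypothetical and do not reproduce the study's data.

```python
def mann_whitney_u(sample_a, sample_b):
    """Mann-Whitney U statistic, assigning average ranks to tied values."""
    tagged = [(v, 0) for v in sample_a] + [(v, 1) for v in sample_b]
    tagged.sort(key=lambda t: t[0])
    ranks = [0.0] * len(tagged)
    i = 0
    while i < len(tagged):
        j = i
        while j + 1 < len(tagged) and tagged[j + 1][0] == tagged[i][0]:
            j += 1
        avg_rank = (i + j + 2) / 2  # ranks are 1-based: average of i+1 .. j+1
        for k in range(i, j + 1):
            ranks[k] = avg_rank
        i = j + 1
    n1, n2 = len(sample_a), len(sample_b)
    rank_sum_a = sum(r for r, (_, group) in zip(ranks, tagged) if group == 0)
    u1 = rank_sum_a - n1 * (n1 + 1) / 2
    return min(u1, n1 * n2 - u1)  # report the smaller U, as is conventional

def bonferroni_alpha(alpha, n_comparisons):
    """Bonferroni-corrected per-comparison significance level."""
    return alpha / n_comparisons

# Hypothetical checklist-change scores for two groups (not the study's data):
senior_led = [18, 19, 17, 20]
faculty_led = [15, 16, 14, 16]
u = mann_whitney_u(senior_led, faculty_led)
```

In practice, analyses of this kind are typically run with a statistical package; the point of the sketch is only to make the ranking logic and the correction explicit.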
Participants were predominantly women (82% of junior students, 90% of senior students), primarily aged between 20 and 29 years (70% of junior students, 75% of senior students), and most had prior baccalaureate degrees (94% of junior students, 95% of senior students).
Overall scores for junior students on the clinical performance checklist improved over the course of the adult and pediatric simulations. The change in the clinical performance checklist scores of junior students in pediatric simulations taught by senior students was significantly better than those taught by faculty for total items (senior students, M = 17.9, standard error [SE] = 0.746, median = 18.5; nursing faculty, M = 15.7, SE = 0.75, median = 16.0; p = 0.03) and critical items (senior students, M = 6.6, SE = 0.379, median = 7.0; nursing faculty, M = 5.1, SE = 0.471, median = 5.0; p = 0.01). No significant difference was noted for change in checklist score in adult simulations for total items (senior students, M = 20.7, SE = 0.539, median = 21.0; nursing faculty, M = 20.7, SE = 1.24, median = 22.0; p = 0.45) or for critical items (senior students, M = 10.6, SE = 0.60, median = 11.0; nursing faculty, M = 10.8, SE = 0.60, median = 11.0; p = 0.51) for these data.
The average responses on postsimulation debriefing questionnaire items were similar for senior students and faculty for items 1 (Debriefing provided feedback of value to the learner [pediatric: seniors 3.8, faculty 3.7, p = 0.38; adult: seniors 3.6, faculty 3.8, p = 0.21]), 2 (Simulation instructor/facilitator displayed ability to explain difficult concepts clearly [pediatric: seniors 3.6, faculty 3.6, p = 0.79; adult: seniors 3.6, faculty 3.5, p = 0.62]), and 3 (Simulation instructor/facilitator demonstrated respect for learners [pediatric: N/A—item not included; adult: seniors 3.9, faculty 3.8, p = 0.79]). No significant difference was shown for any of the items for these data.
Junior Students’ Satisfaction Assessment
The average responses on the Simulation Satisfaction Survey items were similar for senior students and faculty for items 1 (Overall satisfaction with the learning experience [pediatric: seniors 3.6, faculty 3.7, p = 0.93; adult: seniors 3.6, faculty 3.7, p = 0.48]), 2 (Simulation instructor/facilitator demonstrated enthusiasm for teaching [pediatric: seniors = 3.7, faculty = 3.5, p = 0.3; adult: seniors = 3.6, faculty = 3.6, p = 0.92]), and 3 (Sessions were appropriately paced [pediatric: seniors = 3.7, faculty = 3.7, p = 1.0; adult: seniors = 3.7, faculty = 3.7, p = 0.91]). The null hypothesis was not rejected for any of the items for these data.
Discussion, Limitations, and Recommendations
Similar mean change scores for adult simulations in both groups support that clinical performance of junior students taught by senior students was as effective as that of students taught by faculty. Student efficiency improved because the junior students were able to demonstrate an increase in the number of checklist items completed correctly, with minimal additional time on adult and pediatric simulations. Therefore, aim 1, addressing comparable assessment of simulation instructor/facilitator effectiveness in clinical performance of juniors, was supported. Similar mean responses in both groups on postsimulation debriefing items suggest that aim 2, addressing comparable junior students’ assessment, was supported. Likewise, similar mean responses for both groups for all items addressing comparable junior students’ satisfaction suggest that aim 3 was also supported.
During review of the recorded video assessments and debriefings, it was noted that more questions were asked and more discussion took place among the junior students and the senior students than with faculty. Both junior and senior students expressed appreciation for the learning experience. One junior student commented, “I really enjoyed being taught by a senior student. It was good to get feedback about working in the hospital during clinical in relation to the simulation lab.” One senior student commented, “I learned that teaching other students is just as valuable a learning experience as being on the receiving end.” Thus, the dual active knowledge acquisition premise of peer learning is actualized in this project.
Although the current study is novel in the pairing of junior students with senior students as teachers, other studies focused on peer-assisted mastery of health clinical skills reported peer learning to be beneficial. In a meta-analysis to provide a framework for peer teaching and learning, Secomb (2008) stated that “peer teaching and learning increased development in learning outcomes and has implications for clinical practice” (p. 715). Harmer et al. (2011) reported improved self-confidence, prioritization, and time management for both sophomore learners and senior-year mentors in peer-assisted learning. They also noted that many of the student mentors expressed an interest in nursing education.
The literature suggests that potential ethical issues may arise with the implementation of peer teaching. Walsh et al. (2011) reported that peer-assisted learning was ineffective in teaching a new psychomotor skill when the participants were at the same level of education and competence. In the Senior Students as Teachers (SSAT) study, control was achieved with the hierarchical selection of senior- and junior-level participants. Validation of assessment skills for all students occurred in a health assessment course or laboratory, as well as in clinical rotations, prior to the study. The inexperience of the senior students in providing constructive feedback presented a potential ethical issue as well. Jeffries and Rizzolo (2006) cited debriefing as a critical element in developing critical self-evaluation skills. This risk was mitigated by (a) an objective list of performance behaviors to be evaluated by the student, (b) the essentials of debriefing in a standardized format that were included as a component of required senior student training, and (c) formative evaluation with no significance to academic progression. The SSAT study facilitated the practice of these concepts for all student learners, supervised by experienced faculty.
One limitation was that only a single year of junior students participated in the study. In addition, 20 different senior instructors participated and, although the inclusion criteria were strong, that number could add considerable variability. Also, 95% of the senior students had a prior non-nursing baccalaureate degree, which could potentially have provided them with teaching skills.
Another limitation was that interrater reliability estimates were indeterminate. In the (subsequently eliminated) synchronous evaluation, faculty varied in their assessments of whether students had performed the checklist behaviors, and this variation was associated with faculty perceptions. The discrepancies centered on students’ communication techniques, the video rewind option, and faculty members’ varying experience as nursing clinical evaluators. This experiential component was evident in the evaluation of vital signs. For example, one faculty member rated a glance at the monitor as having met the criterion, whereas a second faculty member expected a verbal indication that the blood pressure was normal. Video rewind capability allowed for more accurate counts in the asynchronous assessment, whereas the live performance permitted no such option.
It is important that protocols for addressing both issues (rating discrepancies and the rewind advantage of video review) be established and stated prior to study implementation. Although the macro view of the data supports the Senior Students as Teachers approach, a second study is necessary for additional support.
The use of senior students to teach simulations can present a cost benefit to nursing programs. It is estimated that one full-time equivalent faculty teaching simulations to the same number of students as in the current study would require 50 hours, at a cost of $4,263. Utilizing senior students, this same faculty full-time equivalent can supervise three simulation rooms, decreasing their time to 17.5 hours, at a cost of $1,491. Thus, the semester-based clinical savings for one course would be $2,772. Were this model incorporated into all clinical courses in the curriculum, the annual cost savings could be $22,200. This estimate is based on the 2014–2015 Salaries of Instructional and Administrative Nursing Faculty in Baccalaureate and Graduate Programs in Nursing, of $88,684 for a doctoral-prepared assistant professor in the southeast United States (American Association of Colleges of Nursing, 2015). Walsh et al. (2011) also found that peer-assisted learning could be delivered in a time- and cost-efficient manner.
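The cost comparison above can be checked against the article's own figures. The sketch below is an illustrative verification of that arithmetic; `courses_per_year` is an assumption inferred from the reported annual figure (22,200 / 2,772 ≈ 8) rather than a value stated in the study, and small rounding differences against the published totals remain.

```python
# Figures reported in the article (based on 2014-2015 AACN salary data).
faculty_hours, faculty_cost = 50.0, 4_263            # one FTE teaching all simulations
senior_model_hours, senior_model_cost = 17.5, 1_491  # same FTE supervising three rooms

implied_hourly_rate = faculty_cost / faculty_hours     # ~ $85.26/hour
per_course_savings = faculty_cost - senior_model_cost  # $2,772 per course per semester

# Assumption (not stated in the study): the reported ~$22,200 annual figure
# implies roughly 8 clinical courses adopting the model per year.
courses_per_year = 8
annual_savings = per_course_savings * courses_per_year  # $22,176, reported as ~$22,200
```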
The workload of faculty was decreased in terms of actual time required in the simulation center. Anecdotal faculty comments indicate that time spent in the simulation center is less physically demanding than hospital rotations, less stressful, and more flexible, and that it is viewed as a viable alternative to retirement, an important consideration in light of the current faculty shortage. The low-fidelity patient experience is standardized, and all students have an equal opportunity to manage both simple and complex needs.
The effectiveness of using Senior Students as Teachers for simulation education demonstrates potential benefits for nursing programs, including the evaluation of the effectiveness of clinical skills and fostering the development of mentoring and problem-solving skills between levels of students. The current project suggests a fiscally sound method to further the expansion of simulation teaching–learning capacity.
- American Association of Colleges of Nursing. (2015). 2014–2015 salaries of instructional and administrative nursing faculty in baccalaureate and graduate programs in nursing. Retrieved from http://www.aacn.nche.edu/research-data/standard-data-reports
- B-Line Medical. (2014). SimCapture. Retrieved from http://www.blinemedical.com/simcaptureoverview.aspx#HealthcareEducationInstitutions
- Dennison, S. (2010). Peer mentoring: Untapped potential. Journal of Nursing Education, 49, 340–342. doi:10.3928/01484834-20100217-04 [CrossRef]
- Giordana, S. & Wedin, B. (2010). Peer mentoring for multiple levels of nursing students. Nursing Education Perspectives, 31, 394–396.
- Harmer, B.M., Huffman, J. & Johnson, B. (2011). Clinical peer mentoring: Partnering BSN seniors and sophomores on a dedicated education unit. Nurse Educator, 36, 197–202. doi:10.1097/NNE.0b013e3182297d17 [CrossRef]
- Jeffries, P.R. & Rizzolo, M.A. (2006). Designing and implementing models for the innovative use of simulation to teach nursing care of ill adults and children: A national, multi-site, multi-method study. New York, NY: National League for Nursing.
- McKenna, L. & French, J. (2011). A step ahead: Teaching undergraduate students to be peer teachers. Nurse Education in Practice, 11, 141–145. doi:10.1016/j.nepr.2010.10.003 [CrossRef]
- Pearce, J., Mann, M.K., Jones, C., van Buschbach, S., Olff, M. & Bisson, J.I. (2012). The most effective way of delivering a train-the-trainers program: A systematic review. The Journal of Continuing Education in the Health Professions, 32, 215–226. doi:10.1002/chp.21148 [CrossRef]
- Priharjo, R. & Hoy, G. (2011). Use of peer teaching to enhance student and patient education. Nursing Standard, 25(20), 40–43. doi:10.7748/ns2011.01.25.20.40.c8275 [CrossRef]
- Secomb, J. (2008). A systematic review of peer teaching and learning in clinical education. Journal of Clinical Nursing, 17, 703–716. doi:10.1111/j.1365-2702.2007.01954.x [CrossRef]
- Suhrheinrich, J. (2011). Examining the effectiveness of a train-the-trainer model: Training teachers to use pivotal response systems [Abstract]. Available from ERIC database. (ED518863)
- Walsh, C.M., Rose, D.N., Dubrowski, A., Ling, S., Grierson, L. & Backstein, D. (2011). Learning in the simulated setting: A comparison of expert-, peer-, and computer-assisted learning. Academic Medicine, 86(10, Suppl.), S12–S16. doi:10.1097/ACM.0b013e31822a72c7 [CrossRef]
- Zentz, S.E., Kurtz, C.P. & Alverson, E.M. (2014). Undergraduate peer-assisted learning in clinical setting. Journal of Nursing Education, 53(3, Suppl.), S4–S10. doi:10.3928/01484834-20140211-01 [CrossRef]