Quality and patient safety have been a major focus of the U.S. health care system for almost two decades, since the Institute of Medicine (IOM) released two reports discussing the incidence of medical errors in the United States (IOM, 2001; Kohn, Corrigan, & Donaldson, 2000). Medical errors cost between $17 billion and $29 billion and cause an estimated 98,000 deaths annually. More people die each year from medical errors than from motor vehicle accidents, breast cancer, or AIDS. With the IOM noting the incredible costs associated with medical errors and the high mortality rate, many efforts were made to enhance quality and safety, including an evaluation of nursing curricula. It was clear that nursing curricula must adapt to accommodate the needs noted by the IOM. In 2005, nursing responded by developing the Quality and Safety Education for Nurses (QSEN) initiative, which introduced six competencies describing essential features of what it means to be a competent nurse. The QSEN initiative has been used to help redesign curricula to better prepare nurses to reduce poor patient outcomes (Armstrong, 2013; Didion, Kozy, Koffel, & Oneail, 2013; Disch, 2012; Disch, Barnsteiner, & McGuinn, 2013; Dolansky, Druschel, Helba, & Courtney, 2013; QSEN, 2013).
Even with QSEN having been introduced over the previous decade, the extent, if any, of curricular redesign to incorporate the six QSEN competencies' KSAs within an undergraduate medical–surgical clinical course was unknown. A microsystems analysis was performed with six clinical faculty to determine whether any specific QSEN KSAs were targeted as part of their clinical instruction, which revealed significant variation. Some clinical instructors used aspects of QSEN to design postconference discussions, whereas others did not. Only one question on the clinical preparation tool asked students to identify a quality or safety issue observed during the clinical experience, and most of the QSEN competencies were not being directly assessed. As part of the microsystem analysis, baseline student QSEN competencies were assessed using the QUISKA2 QSEN Assessment Tool (Bradley, 2011) to identify the most critical performance area relevant to the six QSEN competencies. The QUISKA2 is a valid and reliable questionnaire, with Cronbach's α = .94 (p < .001), that includes 73 items: 17 knowledge, 45 skills, and 11 attitudes. Example items from the QUISKA2 are presented in the Table. A key to the QUISKA2, provided by the author, identified each item with one of the six QSEN competencies and distinguished whether the item was relevant to knowledge, skills, or attitudes. All 73 items were used for this needs assessment. Items were grouped by QSEN competency and by knowledge, skills, and attitudes, and percentage scores were calculated for each group. This assessment identified that scores for knowledge of safety were the lowest of all six competencies, at 45.95%. Because the clinical curriculum showed variation in meeting the competencies outlined by QSEN, a quality improvement (QI) project was needed to increase students' KSAs focused on safety. The Model for Improvement, with its Plan-Do-Study-Act (PDSA) cycle, guided the QI process.
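The grouping-and-scoring step described above can be sketched in code. The following is a minimal illustration using hypothetical item results; the real 73-item key mapping each item to a competency and KSA domain is the tool author's unpublished mapping, so the items and mapping below are assumptions for demonstration only:

```python
from collections import defaultdict

# Hypothetical item results: (competency, domain, score), where score is
# 1 if the student answered the item correctly, else 0. The real QUISKA2
# key maps all 73 items this way (17 knowledge, 45 skills, 11 attitudes
# across the six QSEN competencies).
item_results = [
    ("safety", "knowledge", 0),
    ("safety", "knowledge", 1),
    ("safety", "skills", 1),
    ("quality improvement", "knowledge", 1),
    ("quality improvement", "attitudes", 0),
]

def percent_by_group(results):
    """Return percent-correct scores grouped by (competency, domain)."""
    totals = defaultdict(lambda: [0, 0])  # group -> [earned, possible]
    for competency, domain, score in results:
        totals[(competency, domain)][0] += score
        totals[(competency, domain)][1] += 1
    return {group: 100 * earned / possible
            for group, (earned, possible) in totals.items()}

scores = percent_by_group(item_results)
print(scores[("safety", "knowledge")])  # → 50.0 for this toy data
```

Running this over all 73 items for every student, and averaging across students, yields the kind of per-competency percentage (e.g., 45.95% for knowledge of safety) used to select the project focus.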
The problem and baseline data to support the QI project were outlined above. Three specific goals guided this project:
- Improve baseline QUISKA2 scores (45.9%) by 10% (to at least 55.9%).
- Determine whether the intervention group (students who conducted a student-designed simulation project [SDSP] and incorporated a Good Catch and Error Reporting [GCER] program in their clinical settings) had different QUISKA2 score outcomes compared with the nonintervention group.
- Determine whether the interventions are satisfactory, feasible, and potentially valuable to enhance QSEN quality and safety factors by conducting a postproject student evaluation and satisfaction survey that will be administered in week 15 of the semester.
Literature Review. There have been multiple publications that delineate various strategies to implement QSEN competencies into clinical coursework. Several describe the use of simulation (Disch et al., 2013; Forneris et al., 2012). In practice settings, a GCER program has been used to improve patient safety (Herzer et al., 2012).
Simulation. Many consider simulation to be an ideal education technique for implementing QSEN competencies because of its effectiveness and because multiple levels of KSAs can be practiced and evaluated in each simulation activity (Forneris et al., 2012; Norman, 2012; Piscotty, Grobbel, & Huey-Ming, 2012; Sexton, Stobbe, & Lessick, 2012). Piscotty et al. (2012) performed a quasi-experimental study of 141 baccalaureate nursing students to determine whether a student-led simulation was effective in increasing students' quality and safety KSAs across the six QSEN competencies. Results revealed statistically significant increases in students' self-rated quality and safety competency, as well as in their posttest scores regarding knowledge and safety. Norman (2012) performed a systematic review of the literature on simulation in nursing education and found that simulation is useful in creating a learning environment that contributes to knowledge, skills, safety, and confidence. The review revealed a gap pertaining to the transfer of these outcomes to the clinical setting; therefore, recommendations were made for further research on outcomes specific to simulation in nursing education.
GCER Program. For the past decade, evaluating near-misses in health care has been emphasized. Near-miss tracking is vital to providing safe patient care, but these data can be very difficult to capture. Historically, when errors or near-misses occurred, blame fell on individuals rather than on the complex systems that allowed the failure. One way health care organizations are changing the culture of evaluating near-misses is by implementing a GCER program, which rewards individuals for identifying near-misses instead of reprimanding them. Organizations have found that such programs strengthen the culture of quality and safety, recognize staff for their contributions, and build staff confidence in the quality improvement process by maintaining a nonpunitive environment and taking action to address the issues and concerns identified (Jeffs, Lingard, Berta, & Baker, 2012; "Reporting," 2011).
Just Culture. According to Boysen (2013), "The just culture is a learning culture that is constantly improving and oriented toward patient safety" (p. 400). In a just culture, it is safe to report errors because errors are studied with the intent to prevent recurrence. Error reports initiate an evaluation of processes or procedures so that preventive measures can be developed (Boysen, 2013). The GCER program is one intervention that organizations have used in response to the IOM in an effort to create a just culture, in which staff are rewarded for identifying risks before errors occur (Hart, 2016). Simulation is another means to create a just culture because it occurs in a safe environment where students are able to make errors, identify risks, and evaluate decision making before patients are affected (Sexton, Stobbe, & Lessick, 2012).
After evaluating the literature supporting various teaching strategies for implementing QSEN competencies with the practice stakeholders, a decision was made to incorporate two interventions into the curriculum to help students meet the QSEN competency of safety: a GCER program and an SDSP. The two interventions were also presented to the clinical faculty, and both were viewed as feasible to implement. A student-designed simulation, as supported by Piscotty et al. (2012), was chosen over a faculty-designed simulation because this format had been studied and proven successful in improving KSAs specific to safety at another institution, which was the goal of this project. In addition, institutional review board approval was obtained prior to implementation of the project.
During week one of the 16-week semester, I met with all students enrolled in the clinical course to inform them of the project. Students were not randomly assigned to intervention or nonintervention groups. Rather, they signed up for their chosen clinical section; however, students did not know about the project prior to clinical assignment. There were six clinical sections in which students could enroll; four of the six sections were offered the opportunity to participate in the project intervention in order to provide at least 30 participants in the intervention group. The four intervention sections were selected based on faculty assignment: only two faculty members, each teaching two sections, led intervention groups, in order to promote interrater reliability. Participation was completely voluntary. The nonintervention group was informed they would receive the traditional clinical content, whereas the intervention group would have two additional assignments. I obtained students' consent to participate using a signed consent form and then performed a pretest evaluation of their QSEN competency using the QUISKA2. After the pretest was completed, the intervention group (n = 33) received oral and written instructions regarding the GCER program and the SDSP. The nonintervention group (n = 16) was used as the comparison group for this project. All student identities were blinded to the project author on pre- and posttests. Postintervention evaluation was collected three ways: a posttest using the QUISKA2 as an evaluation of knowledge, a student satisfaction survey as an evaluation of attitude, and completion of the SDSP as an evaluation of skill. There are no funding sources to disclose.
GCER Program. Students were asked to seek out and identify near-misses or errors as they occurred in their clinical experiences. An additional page was added to the end of the clinical preparation form asking them to report a good catch or any errors that occurred in the clinical setting. The clinical instructor reviewed the reports and determined whether the students' perceptions were valid. If a student's perception of the good catch was invalid, this was used as a teaching point. At the completion of each clinical day, all of the good catch and error reports from the previous week were shared with the entire clinical group. Students were given the opportunity to discuss the near-misses and errors and examine ways to improve practice in order to prevent an error from occurring in the future. Students with valid reports also shared the report with the floor manager so that process improvement could be evaluated.
SDSP. For this clinical exercise, students were voluntarily paired up and randomly assigned a patient safety issue from a topic list created by the project author. Students were asked to design and videotape a simulation to share with the clinical section at the end of the semester. Students were required to submit a script to their clinical instructor for review and coaching prior to filming. A clinical day was dedicated to reviewing all simulation videos, and the students who performed the simulation facilitated a debriefing discussion with the other members of the clinical group.
An evaluation plan (Study) was created:
- Obtained institutional review board approval through Southeast Missouri State University.
- Obtained 49 pretest assessments using the QUISKA2.
- Analyzed baseline student data.
- Shared good catch reports with the clinical groups on a weekly basis and forwarded all information gathered to the unit managers as part of a QI process.
- Completed the presentations of the SDSP and performed posttest evaluation using the QUISKA2 for the intervention group. Posttest evaluations were given separately to the nonintervention group for comparison.
- Recorded the results in an Excel® spreadsheet.
Analysis of student posttest performance on the QUISKA2 was compared with pretest scores. The mean pretest score for the intervention group (n = 33) was 42%, whereas the mean pretest score for the nonintervention group (n = 16) was 41%. The t statistic for related samples, or dependent sample t test, with a pre- and posttest design was used for statistical analysis. A repeated-measures design allowed data to be evaluated on the same subjects before the designed interventions were implemented and again after they had been implemented (Gravetter & Wallnau, 2013). Data were analyzed with an alpha level of .05, and Excel software was used to compute the data.
The intervention group had overall improvement in all categories measured by the QUISKA2, but items that specifically addressed knowledge of safety were pulled out for statistical analysis. For the sample (n = 33), the project interventions were found to be statistically significant, t(32) = 6.14, p < .05, and the mean score for the intervention group increased by more than 37 percentage points after implementing the project interventions. Following the significant dependent sample t test, an independent sample t test was performed to compare the intervention group with the nonintervention group. Both groups increased their scores; the intervention group improved its mean score from 42.42% to 80.30%, whereas the nonintervention group had a smaller increase, from 40.63% to 53.13%. Project goals one and two were achieved.
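The two analyses above, a dependent (paired) samples t test on the intervention group's pre/post scores followed by an independent samples comparison between groups, can be computed directly from the score lists. The sketch below is illustrative only: it uses made-up scores rather than the project's actual data, relies solely on the Python standard library, and shows the independent test in Welch's unequal-variance form, which is an assumption on our part, since the article does not state which variant was computed in Excel.

```python
import math
from statistics import mean, stdev, variance

def paired_t(pre, post):
    """Dependent-samples t statistic for pre/post scores on the same subjects.

    Returns (t, df): the mean difference divided by its standard error,
    with df = n - 1. Compare |t| to the critical value at alpha = .05.
    """
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    se = stdev(diffs) / math.sqrt(n)  # standard error of the mean difference
    return mean(diffs) / se, n - 1

def welch_t(group_a, group_b):
    """Independent-samples t statistic (Welch's unequal-variance form)."""
    se = math.sqrt(variance(group_a) / len(group_a)
                   + variance(group_b) / len(group_b))
    return (mean(group_a) - mean(group_b)) / se

# Illustrative percent-correct scores, not the project data.
pre = [40, 45, 38, 50, 42, 44]
post = [78, 82, 70, 85, 80, 76]
t_dep, df = paired_t(pre, post)          # within-group pre/post change
t_ind = welch_t(post, [52, 55, 50, 54])  # intervention vs. nonintervention posttests
```

With real data, `pre` and `post` would hold each intervention student's knowledge-of-safety percentage at weeks 1 and 15, and the second group list would hold the nonintervention posttests.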
Student satisfaction with the clinical experience was used to assess for change in attitude. The standard student evaluation tool was used. The overall satisfaction score for the intervention group was 4.98 (99.5%) on a 5-point scale, whereas the nonintervention group had an overall satisfaction score of 3.38 (67.7%). The intervention group was asked four additional questions regarding the two project interventions using a 5-point Likert scale:
- The student-designed simulation project was meaningful and helpful.
- The student-designed simulation project was a burden.
- The GCER program was meaningful and helpful.
- The GCER program was a burden.
Although 62.5% of students thought that the SDSP was a burden, all students (100%) agreed that the SDSP was meaningful and helpful. In addition, all students thought the GCER program was meaningful and helpful, and none considered it to be a burden. The third project goal was achieved: students expressed satisfaction with the course and the interventions, and the interventions were deemed feasible and valuable for enhancing QSEN quality and safety factors.
Clinical courses at this institution are not assigned a letter grade, so all clinical assignments are considered on a pass-or-fail basis. All students in the intervention group successfully completed the SDSP at a passing level. Passing the SDSP was used as evidence of skill acquisition.
The question examined, whether interventions incorporated into the course curriculum could increase students' KSAs of safety, was answered, and the project was successful at meeting its goals. KSAs for safety increased for the intervention group. Several good catches were identified by students during the project, many of which involved medication administration procedures. For example, medications were left in the wrong patient room, insulin was not consistently verified per protocol, and intravenous medication bags arrived on the unit with the wrong patient label attached. Students were able to present their findings to the unit manager and then participate in the initiation of the QI process. Even in just one semester, students were able to witness policy changes that occurred secondary to the good catches.
The aim of this project, for student scores on the QUISKA2 to improve by 10% over the baseline assessment for knowledge of safety, was exceeded, with mean scores increasing by more than 37 percentage points, whereas the control group's increased by only about 12. The goal to improve educational opportunities for students, allowing them to obtain the knowledge, skills, and attitudes appropriate to meet the QSEN competency of safety, was also achieved. In addition, all students felt that the project interventions were meaningful and helpful despite some feeling the SDSP was a burden. Results were distributed to the stakeholders for individual review.
Because the project achieved its established goals, all stakeholders agreed to implement the project interventions in all future sections of the clinical course. The SDSP and GCER program were incorporated into all current and future sections; however, no plan was established to continue measuring outcomes over time.
Although not providing all students with access to the interventions was unfortunate, the QI project team deemed it important to have a comparison group to first verify whether the two interventions were effective prior to recommending standardizing them within the clinical experience.
Limitations of this project include a small sample size. To determine the full effect of the project interventions, a future project should be conducted with a larger sample. In addition, the QUISKA2 may not be the best pre- and posttest assessment for safety alone because it was designed to briefly assess all six QSEN competencies, not just safety; a more comprehensive assessment tool for the competency of safety may be more beneficial in future projects. Groups were not randomized, and different clinical instructors taught the intervention and nonintervention groups, which may have produced differences in pre- and posttest scores that cannot be accounted for by the project interventions. The author's dual role as clinical instructor for two of the intervention groups may have posed a conflict of interest, given that the author might inadvertently have focused teaching strategies on safety in an effort to increase scores; however, there was no difference in QUISKA2 scores between the clinical sections taught by the author and those taught by the other clinical instructor with intervention groups. Future projects may consider using clinical instructors who are not directly invested in the project. In addition, this project implemented two interventions at the same time, so it is unknown whether the same results could have occurred with either intervention alone. Finally, formal faculty evaluation was not included in the project evaluation.
The GCER program allowed students to openly identify and discuss near-misses or errors in a nonpunitive environment. Instead of being reprimanded, students were encouraged to find near-misses, and through this process of revealing near-misses, the health care organization had the opportunity to review and improve practices to prevent errors from occurring in the future. Implementing this program helped to create a culture of safety within this clinical section, and, by sharing findings with unit managers, students were able to engage in the organization's quality improvement process. Students were able to watch how a good catch identified by a student could lead to a change in organizational policy to promote patient safety. Students verbalized that these experiences helped build their confidence in the quality improvement process, as prior research had suggested (Herzer et al., 2012).
The SDSP was also beneficial in improving students' understanding of quality and safety in a health care environment. Students designed scenarios that addressed medication errors, critical team communication, prioritization of patient care, and handoff reporting, and they used their own creativity to deliver valuable information to the entire clinical group. The simulation videos were enhanced further by students leading the debriefing discussions.
This project examined the effectiveness of two evidence-based interventions: the GCER program and the SDSP. The results were statistically significant regarding increased student competency of safety. The beneficial project interventions were not specific to this one medical–surgical clinical course but could be used in almost any setting. The author shared the interventions with all clinical faculty and encouraged them to use the interventions as appropriate in each course. As nursing education improves by providing students with curricula designed to achieve the six competencies outlined by QSEN, graduate nurses will be better prepared to provide safe care that results in improved patient outcomes.
- Armstrong, G. (2013). Fundamentally updating fundamentals. Journal of Professional Nursing, 29, 82–87. doi:10.1016/j.profnurs.2012.12.006 [CrossRef]
- Boysen, P.G. (2013). Just culture: A foundation for balanced accountability and patient safety. The Ochsner Journal, 13, 400–406.
- Bradley, K. (2011). Quality safety assessment/application for nurses (QSAAN)-QUISKA2 QSEN assessment tool. Unpublished manuscript.
- Didion, J., Kozy, M.A., Koffel, C. & Oneail, K. (2013). Academic/clinical partnership and collaboration in quality and safety education for nurses education. Journal of Professional Nursing, 29, 88–94. doi:10.1016/j.profnurs.2012.12.004 [CrossRef]
- Disch, J. (2012). QSEN? What's QSEN? Nursing Outlook, 60, 58–59. doi:10.1016/j.outlook.2012.01.001 [CrossRef]
- Disch, J., Barnsteiner, J. & McGuinn, K. (2013). Taking a “deep dive” on integrating QSEN content in San Francisco Bay area schools of nursing. Journal of Professional Nursing, 29, 75–81. doi:10.1016/j.profnurs.2012.12.007 [CrossRef]
- Dolansky, M.A., Druschel, K., Helba, M. & Courtney, K. (2013). Nursing student medication errors: A case study using root cause analysis. Journal of Professional Nursing, 29, 102–108. doi:10.1016/j.profnurs.2012.12.010 [CrossRef]
- Forneris, S.G., Crownover, J.G., Dorsey, L., Leahy, N., Maas, N.A., Wong, L. & Zavertnik, J. (2012). Integrating QSEN and ACES: An NLN simulation leader project. Nursing Education Perspectives, 33, 184–187. doi:10.5480/1536-5026-33.3.184 [CrossRef]
- Gravetter, F.J. & Wallnau, L.B. (2013). Statistics for the behavioral sciences. Belmont, CA: Wadsworth Cengage Learning.
- Hart, S. (2016). Just culture: Improve reporting of near misses and errors in the clinical experience. Graduate Research Projects, 80. Retrieved from http://knowledge.e.southern.edu/gradnursing/80
- Herzer, K.R., Mirrer, M., Xie, Y., Steppan, J., Li, M., Jung, C. & Mark, L.J. (2012). Patient safety reporting systems: Sustained quality improvement using a multidisciplinary team and “Good Catch” awards. The Joint Commission Journal on Quality and Patient Safety, 38, 339–348. doi:10.1016/S1553-7250(12)38044-6 [CrossRef]
- Institute of Medicine. (2001). Crossing the quality chasm: A new health care system for the 21st century. Washington, DC: National Academies Press.
- Jeffs, L.P., Lingard, L., Berta, W. & Baker, G.R. (2012). Catching and correcting near misses: The collective vigilance and individual accountability trade-off. Journal of Interprofessional Care, 26, 121–126. doi:10.3109/13561820.2011.642424 [CrossRef]
- Kohn, L.T., Corrigan, J.M. & Donaldson, M.S. (Eds.). (2000). To err is human: Building a safer health system. Washington, DC: National Academies Press.
- Norman, J. (2012). Systematic review of the literature on simulation in nursing education. The ABNF Journal, 23(2), 24–28.
- Piscotty, R. & Grobbel, C. (2012). Quality and safety education in nursing group project summary. Unpublished manuscript.
- Piscotty, R., Grobbel, C. & Huey-Ming, T. (2012). Integrating quality and safety competencies into undergraduate nursing using student-designed simulation. Journal of Nursing Education, 50, 429–436. doi:10.3928/01484834-20110429-04 [CrossRef]
- Quality and Safety Education for Nurses Institute. (2013). Project overview. Retrieved from http://qsen.org/about-qsen/project-overview/
- Reporting program a “good catch.” (2011). Healthcare Traveler, 19(4), 16. Retrieved from http://healthcaretraveler.modernmedicine.com/healthcare-traveler/news/modernmedicine/modern-medicine-news/reporting-program-good-catch
- Sexton, M., Stobbe, B. & Lessick, M. (2012). Advancing medical-surgical nursing through simulation. Med-Surg Matters, 21(3/4), 24–27.
Sample of QUISKA2 Assessment Tool
Multiple Choice Question: Please Circle the BEST Answer to the Following

Which of the following strategies can help nurses learn about the outcomes of care in their area of clinical practice?
- Collecting data on infection rates.
- Monitoring staff satisfaction.
- Implementing an education plan.
- Discussing potential action plans with the surgeon.

Understanding the source of practice variation is important because:
- It determines the type of action required.
- It identifies the root cause of the problem.
- All variation, regardless of source, must be eliminated to achieve quality.
- It is the first step to increasing variation.

Which source provides the strongest level of support for evidence-based practice?
- Randomized control trials.
- Opinion of respected authorities.

Evidence-based practice is defined as:
- Promoting the publication of research findings among practicing nurses.
- Dissemination of research findings at conferences.
- Collecting data from subjects using measurement devices.
- Integrating best research practices with clinical expertise and patient values.

A reliable source for locating clinical practice guidelines for a new chemotherapy protocol is:
- State Board of Nursing.
- Internet nursing blog.
- Oncology Nursing Society.

All of the following contribute to increased patient safety except:
- Consideration of human factors processes in the design of medical devices and technology.
- Use of abbreviations for common medications.
- Systems and processes that limit or prevent workarounds.
- Computerized physician order entry.

Actions immediately following a near-miss medication error indicating a culture of safety include:
- Congratulating the person that caught the error.
- Identifying how the error was detected.
- Reprimanding the person who made the error.
- Reporting the incident to the physician.

A potential drawback of using only automatic bed alarms to prevent falls is:
- Not all nurses know how to use bed alarms.
- Other strategies to prevent falls may not be tried.
- Families may not like the bed alarms.
- There are no drawbacks with bed alarms.

All of the following elements are important for creating and sustaining a culture of health care safety except:
- Structures and systems that ensure an organization-wide awareness of patient safety performance gaps.
- Job descriptions that require direct accountability of leaders, managers, and frontline caregivers for closing performance gaps in patient safety.
- Leaders embrace a culture where safety and quality are openly discussed.
- Staff are reprimanded when they make two or more medication errors within a 6-month period.

Which of the following is an example of a culture of safety in a health care organization?
- No more than 50% of the staff are agency.
- Near-misses are reported.
- Nurses routinely work double shifts.
- Most patient transfers occur during shift change.