Journal of Nursing Education



Use of Simulation in Teaching and Learning in Health Sciences: A Systematic Review

B. Nicole Harder, RN, MPA

Abstract

The use of simulation as an educational tool is becoming increasingly prevalent in health care. Institutions have adopted simulation to help educate their students and health care professionals; however, evaluation of the effectiveness of this intervention remains an area requiring research. As simulation use has increased, so has the literature on evaluating this innovative teaching method. A systematic review of the literature examined the effectiveness of simulation as a teaching tool, with the aim of evaluating current literature on the use of clinical simulation in health care education. The findings identify themes in the evaluation literature, highlight gaps in the literature pertaining to evaluating the effectiveness of simulation as a teaching tool, and support the need for further research into its evaluation.


Ms. Harder is Coordinator, Learning Laboratories, Faculty of Nursing, University of Manitoba, Winnipeg, Manitoba, Canada.

The author thanks Drs. M. Carbonaro and S. King, University of Alberta, for their patience and feedback in conducting this review.

Address correspondence to B. Nicole Harder, RN, MPA, Coordinator, Learning Laboratories, Faculty of Nursing, University of Manitoba, Winnipeg, Manitoba, Canada R3T 2N2; e-mail: nicole_harder@umanitoba.ca.

Received: August 09, 2008
Accepted: February 04, 2009
Posted Online: January 04, 2010

The use of simulation in health care education is not a new phenomenon; however, it is one that is increasing in popularity. Simulation increases safety, decreases errors, improves clinical judgment, and is useful for teaching and evaluating specific clinical skills (Bearnson & Wiker, 2005). Combined with the continued pressures on clinical practice sites in many health care disciplines, this has prompted the exploration of alternative methods of providing clinical education to health care professionals. With increasing patient acuity, considerable effort goes into preparing students to be as ready as possible for their clinical practice rotations. Although not a panacea, simulation can help prepare clinically proficient health care professionals. In health care, simulation has historically been used to teach particular individual tasks before students apply these skills in practice; however, as technology advances, so do the abilities of the simulators. These abilities are not the only identified advantage: simulation appeals to technology-savvy students for whom lecture, passive information gathering, and linear thinking may not provide full engagement (Aldrich, 2003; Pardue, Tagliareni, Valiga, Davison-Price, & Orehowsky, 2005).

Many health professional schools and faculties are now using simulation in their curriculum, and with this increased use of technology, there is also the need to evaluate the intervention to ensure that simulations are producing the expected outcomes. The purpose of simulation use in health care is to prepare students for clinical situations they may encounter. Simulations attempt to create as realistic an environment as possible and ask students to perform a combination of skills in the context of this environment. The students’ responses and actions are then evaluated to determine their preparedness for the situation (Gaba, 2004). This review was conducted with the initial hypothesis that using simulation as a teaching tool influences student learning. To evaluate the available research and to determine the degree of influence on learning, effect size and other quantitative data were used. Tables for each article were created and used to summarize the results.

The simulation literature describes simulations used in varying situations. These fall into three types:

  • Low-fidelity simulation: task trainers, noncomputerized.
  • Mid-fidelity simulations: standardized patients, computer programs, video games.
  • High-fidelity simulations: computerized human patient simulator manikins.

This review was limited to high-fidelity simulations, characterized by the use of human patient simulators: computer-based manikins with physiological responses (Bradley, 2006). These high-fidelity simulators have become more visible in health professional education institutions in the past 3 to 4 years and are the newest type of computerized manikin used to teach skills to students. Other types of simulations were not addressed in this review. In both the included and excluded literature, the terms simulation and simulator were used inconsistently. This disparity created confusion about the type of simulation studied and thus supported the decision to include only high-fidelity simulations in this review.

Method

Studies that were included in the systematic review adhered to the following inclusion criteria:

  • Examined participant performance with simulation use.
  • Measured outcomes of the simulation use or identified specific learning outcomes.
  • Published between 2003 and 2007.
  • Addressed simulations related to health sciences studies.
  • Favored quantitative and comparative research, followed by studies that addressed simulation solely as a tool for health care education.

Exclusion criteria included:

  • Published earlier than 2003.
  • Were purely descriptive in nature on individual simulation use.
  • Pertained to low-fidelity and mid-fidelity simulations.

A search for published studies was conducted. Initially, a broad search of English-language studies was undertaken that included all areas where simulations are conducted, including the airline industry, leadership and management, and health care. The search was conducted in Medline through PubMed, the Cumulative Index to Nursing and Allied Health Literature (CINAHL), and the Cochrane Collaboration databases. It was then narrowed to health care studies only. Because of the extensive systematic review by Issenberg, McGaghie, Petrusa, Gordon, and Scalese (2005) in the area of simulation in health education, this search was limited to articles from 2003 to 2007. In 2005, Issenberg et al. published a comprehensive review spanning the 34 years from 1969 to 2003; the objective of that review was to synthesize existing evidence in educational science addressing the question, “What are the features and uses of high-fidelity medical simulations that lead to most effective learning?” The current review attempted to build on that comprehensive work in the area of evaluation of simulation in health care studies. The search included various combinations of the text words simulat*, high-fidelity, clinical, teaching and learning, evaluat*, and educat*. The search terms are provided in Table 1. The search was conducted during a 4-month period and included monthly updates of the literature searches. It was not restricted to any one health care profession and included literature in all areas of health care education.


Table 1: Search Terms Used in the Review
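To illustrate how the combinations in Table 1 expand, the five searches can be expressed as boolean query strings built on the base search S1. The following is a minimal sketch; the syntax is generic and illustrative, not the exact dialect of PubMed, CINAHL, or the Cochrane Library:

```python
# Illustrative reconstruction of the five search combinations in Table 1.
# Each database has its own query dialect; this only shows how searches
# 2 through 5 build on the base search S1.
S1 = "simulat* AND high-fidelity"
modifiers = ["clinical", "teaching AND learning", "educat*", "evaluat*"]

queries = [S1] + [f"({S1}) AND {m}" for m in modifiers]
for number, query in enumerate(queries, start=1):
    print(f"Search {number}: {query}")
```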

This search resulted in the retrieval of 61 papers. A scan of these results assisted with further refining the search terms and retrieval strategies. All studies were initially assessed for inclusion by independent review of the title. Abstracts of the included studies were retrieved and assessed for eligibility according to the inclusion criteria. If a study met the inclusion criteria or if its eligibility was unclear, the full text of the article was retrieved. This process resulted in the retrieval and assessment of 32 articles. Of these, 9 were excluded because they did not evaluate the intervention or did not meet the simulation description defined at the beginning of the review, leaving 23 articles in the systematic review.

Results and Categorization

After further evaluation of the eligible articles, several main differentiations emerged, leading to the creation of categories into which each article was further dissected. Tables were created to track these differences, and information from each article was entered into its own table. This evaluation resulted in each article being categorized in the following areas:

  • Area of application.
  • Objective.
  • Methodology.
  • Effect size (if any).
  • Type of simulation.
  • Results and conclusion.
  • Suggestions and future directions.

It was important to further dissect the articles to ensure relevancy in the review, identify themes, and weight the articles. Although articles in all health disciplines were included, identifying a particular area of application or practice (e.g., medicine, nursing, interdisciplinary) and whether it applied to practicing health care professionals or students yielded interesting information regarding areas that used or studied simulations. The review found that 10 studies were conducted with practicing health care professionals and 13 were conducted with students. Of all of the studies, 16 were conducted in nursing, 6 were conducted in medicine, and 1 consisted of an interdisciplinary health care team that included nursing, medicine, and respiratory therapy (Table 2). All used high-fidelity simulators and the majority identified the simulators as being produced by either Laerdal or METI.


Table 2: Practice Areas and Level of the Participants

Objectives were identified to determine whether the research evaluated the simulation intervention. The methodology of each study was also identified to determine its rigor. Effect size was calculated for those articles that provided sufficient data: 39% of the studies (n = 9), with the remainder not providing enough data for calculation. Calculating effect size was deemed important to determine the change, if any, found in the students involved in simulation: not only whether there was a change, but also the size of the observed effect. Cohen’s d was used, and each effect size was classified as small, medium, or large. The intent was to calculate effect size for all studies and report each as having no effect or a small, medium, or large effect; because effect size could be calculated for only 39% of the studies, this was not possible.
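To make the effect size classification concrete, the following is a minimal sketch of Cohen’s d computed from two groups’ summary statistics, using the pooled standard deviation and the conventional interpretation bands (0.2 small, 0.5 medium, 0.8 large). The group statistics in the example are hypothetical and are not drawn from the reviewed studies:

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d using the pooled standard deviation of the two groups."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                          / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

def effect_label(d):
    """Conventional interpretation bands (Cohen, 1988)."""
    d = abs(d)
    if d < 0.2:
        return "negligible"
    if d < 0.5:
        return "small"
    if d < 0.8:
        return "medium"
    return "large"

# Hypothetical posttest scores: simulation group vs. traditional teaching.
d = cohens_d(mean1=82.0, sd1=6.5, n1=30, mean2=76.0, sd2=7.0, n2=30)
print(f"d = {d:.2f} ({effect_label(d)})")  # -> d = 0.89 (large)
```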

The type of simulation was identified as high, mid, or low fidelity to ensure eligibility. The results and conclusions of each simulation study were summarized, as well as any future directions for research in the identified area. This was done for all articles included in the review.

The findings of the review supported the initial hypothesis that using simulation as a teaching tool influences student learning. Using effect size and other quantitative data, the degree of influence was evaluated. The studies measured this influence using various methods of evaluating the simulation intervention; these are indicated in Table 3. The majority of the studies used pretest and posttest scores to evaluate the simulation (n = 10), followed by Objective Structured Clinical Examination (OSCE) performance (n = 7). The other types of evaluation conducted (n = 4) were varied.


Table 3: Methods of Evaluation of Simulation Studies

Several articles reported qualitative studies of students’ level of confidence and perceived competence. Others combined clinical skills competence with confidence scores. These are included in Table 4, separated by practice area. Table 5 combines the discipline areas and includes the objective or area of research of each study. Both tables describe the influence of simulation on student learning.


Table 4: Types of Studies Conducted and Area of Study


Table 5: Influence of Simulation on Student Performance

Based on the pretest and posttest scores, as well as the OSCE results, the majority of the studies (n = 20) indicated an increase in assessment and clinical skills performance. Three identified no difference, and none indicated that students who engaged in simulations had decreased clinical skills compared with those who engaged in traditional education. In 91% (n = 21) of the studies, students who participated in a simulation rated their confidence levels and perceived competence higher than those who did not. Two indicated no difference, and none reported a decrease in confidence level.

Discussion

Teaching clinical skills to students is a core component of health care education. The primary modality for teaching these skills has been “see one, do one, teach one.” This old adage has long been the accepted method of teaching skills, and it still persists in many education and training settings (Henneman & Cunningham, 2005). Not surprisingly, this training methodology has drawbacks for both students and patients. Changes in the landscape of health care and health care education have been the major influence on the use of simulation as a new teaching modality. With the decrease in available clinical sites, a decrease in adequately prepared clinical faculty, and the demand to prepare health care students to be work ready, alternative methods of teaching clinical skills to health care professionals have been explored (Bearnson & Wiker, 2005). Simulation has been used increasingly as a method to teach clinical skills. Since the comprehensive systematic review published by Issenberg et al. in 2005, simulation use has increased significantly. Simulation technology has been determined to be a practical and successful model for teaching a variety of skills, both psychomotor and clinical reasoning (Issenberg & Scalese, 2007).

Simulation Use and Clinical Skills Performance

The use of simulation, as opposed to other education and training methods, increased students’ clinical skills in the majority of the studies. Other training methods included standardized patients, traditional psychomotor skills laboratory sessions with task trainers, computer-based programs, and lecture classes (Alinier, Hunt, Gordon, & Harwood, 2006; Clark, 2006; Coiffi, Purcal, & Arundell, 2005; Curran, Aziz, O’Young, & Bessell, 2004; Feingold, Calaluce, & Kallen, 2004; Jamison, Hovancsek, Clochesy, & Bolton, 2006; Owen, Mugford, Follows, & Plummer, 2006; Scherer, Bruce, & Runkawatt, 2007; Voll, 2007; Wayne et al., 2005). These students were also better able to manage scenarios not previously encountered, compared with groups that did not engage in a high-fidelity simulation (Owen et al., 2006).

Although three studies indicated no difference between the simulation and traditional teaching modalities, none reported a decrease in the simulation group. Some of the studies speculated on why this might occur. The primary explanation that emerged was the lack of structured assessment tools for evaluating simulations. In using instruments such as the OSCE or other clinical evaluation tools, researchers acknowledged that they were taking tools developed to assess clinical skills in the practice setting and adapting them to assess clinical skills in the simulation setting (Alinier, Hunt, & Gordon, 2004; Clark, 2006). After their studies, it became evident that in comparing clinical performance using a variety of methods, researchers had not controlled for this variance in evaluation methods. As this issue surfaced repeatedly, several others (Hoffmann, O’Donnell, & Kim, 2007; Lasater, 2007) called for the development of evaluation tools designed specifically for simulation use. These have yet to be developed, and efforts are still in the germinal stages.

One other potential reason for this significant increase in student performance is the manner in which some studies were conducted. Schwid, Rooke, Michalowski, and Ross (2001) identified that they were using the human patient simulator (HPS) as both the intervention and the evaluation and listed this as a limitation of their study. Others (Alinier et al., 2004; Wayne et al., 2005) identified that they were using the HPS in the evaluation component; however, those in the control groups were perhaps unable to perform as well because they were unfamiliar with the HPS, not the scenario content. Some studies (Girzadas, Clay, Caris, Rzechula, & Harwood, 2007; Leflore, Anderson, Michael, & Anderson, 2007) compensated for this and gave the control group an orientation to the HPS prior to engaging in the simulation to ensure it was their clinical performance, not their familiarity with the HPS, that was being evaluated.

Simulation Use and Confidence/Perceived Competence Scores

Some articles identified that although the increase in students’ clinical skills was not statistically significant, the same students frequently scored significantly higher on self-confidence and perceived competence measures (Scherer et al., 2007). Although this may not translate into immediate results in clinical practice, this significant increase in confidence scores should be given attention similar to that given to clinical skills. Several studies in the review included this component along with their evaluation of clinical skills performance (Alinier et al., 2006; Coiffi et al., 2005; Feingold et al., 2004; Jamison et al., 2006; Scherer et al., 2007).

Self-efficacy beliefs have diverse effects on the psychosocial functioning of the health care practitioner. They can determine whether coping behaviors will be initiated, how much effort will be expended, and how long effort will be sustained in the face of obstacles and aversive experiences. They can also affect vulnerability to emotional distress and depression (Bandura, 1997). When considering health care professionals and students, it is important to recall the environment in which they practice: health care institutions are stressful and require practitioners to remain focused in difficult situations. Identifying the relationship between self-confidence scores and clinical skills performance was therefore viewed as an important aspect of simulations.

An unexpected discovery was made during the review: the studies that included both clinical performance (quantitative) and self-confidence scores (qualitative) were conducted in the discipline of nursing. The studies in medicine addressed clinical skills performance (quantitative) and did not appear to explore this other aspect of learning. This is an important discovery, but not altogether surprising. Each discipline appears to have common research methods, and in reviewing research conducted across disciplines, variations in methods are expected. Future researchers would benefit from using this information and exploring less developed areas. In an emerging area such as simulation, it is prudent to examine work done in all disciplines where simulation is conducted. This ensures a more rounded and holistic approach to the issue and allows a rich body of knowledge to be produced.

Limitations

The author acknowledges a number of limitations to this review. It examined data from 2003 to 2007 because an extensive previous systematic review covered the literature published between 1969 and 2003. There may be other rigorous studies addressing confidence scores in medicine that were not included. This review focused entirely on the effectiveness of high-fidelity patient simulators as an education tool for clinical skills and performance, which eliminated studies that used low-fidelity or mid-fidelity simulators (e.g., task trainers or standardized patients) as the comparison group. Several articles compare lecture with low-fidelity simulations; however, these were not included because the focus of this review was high-fidelity simulation. Because there was no active search of the grey literature (i.e., conference proceedings or unpublished theses and dissertations), other relevant studies may have been missed.

Future Directions

The results of this review highlighted some gaps in the evaluation of simulation as an educational intervention. There appears to be a lack of formal evaluation tools for simulations. To evaluate students, researchers frequently relied on pretest and posttest scores, as well as OSCE scores, to evaluate outcomes. These testing methods have typically been used for low-fidelity simulations and for some clinical assessments; they were not specifically designed for simulation. One potential area for growth would be the development of measurement tools designed particularly for high-fidelity simulation.

Simulations are inherently variable, and this creates some incongruity in what constitutes a simulation. Simulations may be as simple or as complex as the user deems necessary, which makes evaluating the simulation a challenge. Should an evaluation tool be created, a leveling of the simulation, as well as of the evaluation, will be needed. Researchers and users of simulation would benefit from identifying the specific type of simulation being conducted. Simply because a computerized high-fidelity manikin is being used does not necessarily mean that a high-fidelity simulation is being conducted. The difference between simulations and simulators needs to be made explicit.

Although beyond the scope of this review, literature regarding simulation is quickly emerging in various disciplines. New literature is being published monthly across several health care disciplines (e.g., nursing, medicine, pharmacy, dentistry). Developing interdisciplinary research cluster groups would considerably benefit all users of simulations and the simulation literature.

Conclusion

Many institutions develop simulation programs on the basis of a narrow understanding of the technology and the teaching potential of this tool. The purchase of equipment often precedes the development of a sound program “vision” and plan (Shinn, 2006). The demands placed on simulations vary widely; however, a common denominator of expectations exists. The aim of this review was to systematically analyze the existing evidence on the influence of simulations on teaching and learning with health care practitioners and students, with a focus on studies that examined simulation outcomes. Very few studies have objectively evaluated the outcomes of simulation use, hence the call for a measurement tool designed specifically for simulations. Further research should be conducted in this area, and energies should be directed toward the development of evaluation tools particular to simulation use.

References

  • Aldrich, C. (2003). Simulations and the future of learning: An innovative (and perhaps evolutionary) approach to e-learning. San Francisco: Pfeiffer.
  • Alinier, G., Hunt, B., Gordon, R. & Harwood, C. (2006). Effectiveness of intermediate-fidelity simulation training technology in undergraduate nursing education. Journal of Advanced Nursing, 54, 359–369. doi:10.1111/j.1365-2648.2006.03810.x [CrossRef]
  • Alinier, G., Hunt, W.B. & Gordon, R. (2004). Determining the value of simulation in nurse education: Study design and initial results. Nurse Education in Practice, 4, 200–207. doi:10.1016/S1471-5953(03)00066-0 [CrossRef]
  • Bandura, A. (1997). Self-efficacy: The exercise of control. New York: W.H. Freeman.
  • Bearnson, C.S. & Wiker, K.M. (2005). Human patient simulators: A new face in baccalaureate nursing education at Brigham Young University. Journal of Nursing Education, 44, 421–425.
  • Bradley, P. (2006). The history of simulation in medical education and possible future directions. Medical Education, 40, 254–262. doi:10.1111/j.1365-2929.2006.02394.x [CrossRef]
  • Clark, M. (2006). Evaluating an obstetric trauma scenario. Clinical Simulation in Nursing Education, 2(2), 1–6.
  • Coiffi, J., Purcal, N. & Arundell, F. (2005). A pilot study to investigate the effect of a simulation strategy on the clinical decision making of midwifery students. Journal of Nursing Education, 44, 131–134.
  • Curran, V.R., Aziz, K., O’Young, S. & Bessell, C. (2004). Evaluation of the effect of a computerized training simulator (ANA-KIN) on the retention of neonatal resuscitation skills. Teaching and Learning in Medicine, 16, 157–164. doi:10.1207/s15328015tlm1602_7 [CrossRef]
  • Feingold, C.E., Calaluce, M. & Kallen, M.A. (2004). Computerized patient model and simulated clinical experiences: Evaluation with baccalaureate nursing students. Journal of Nursing Education, 43, 156–163.
  • Gaba, D. (2004). The future vision of simulation in health care. Quality and Safety in Health Care, 13, 2–10. doi:10.1136/qshc.2004.009878 [CrossRef]
  • Girzadas, D., Clay, L., Caris, J., Rzechula, K. & Harwood, R. (2007). High fidelity simulation can discriminate between novice and experienced residents when assessing competency in patient care. Medical Teacher, 29, 472–476. doi:10.1080/01421590701513698 [CrossRef]
  • Henneman, E.A. & Cunningham, H. (2005). Using clinical simulation to teach patient safety in an acute/critical care nursing course. Nurse Educator, 30, 172–177. doi:10.1097/00006223-200507000-00010 [CrossRef]
  • Hoffmann, R., O’Donnell, J. & Kim, Y. (2007). The effects of human patient simulators on basic knowledge in critical care nursing with undergraduate senior baccalaureate nursing students. Simulation in Healthcare, 2, 110–114.
  • Issenberg, B., McGaghie, W., Petrusa, E., Gordon, D. & Scalese, R. (2005). Features and uses of high-fidelity medical simulations that lead to effective learning: A BEME systematic review. Medical Teacher, 27, 10–28. doi:10.1080/01421590500046924 [CrossRef]
  • Issenberg, B. & Scalese, R. (2007). Best evidence on high-fidelity simulation: What clinical teachers need to know. The Clinical Teacher, 4, 73–77. doi:10.1111/j.1743-498X.2007.00161.x [CrossRef]
  • Jamison, R., Hovancsek, M., Clochesy, J. & Bolton, F. (2006). A pilot study assessing simulation using two simulation methods for teaching intravenous cannulation. Clinical Simulation in Nursing Education, 2(1), 1–7.
  • Lasater, K. (2007). Clinical judgment development: Using simulation to create an assessment rubric. Journal of Nursing Education, 46, 496–503.
  • Leflore, J., Anderson, M., Michael, J. & Anderson, J. (2007). Modeling versus self-directed simulation with debriefing: Is there a difference? Simulation in Healthcare, 2, 47.
  • Owen, H., Mugford, B., Follows, V. & Plummer, J.L. (2006). Comparison of three simulation-based training methods for management of medical emergencies. Resuscitation, 71, 204–211. doi:10.1016/j.resuscitation.2006.04.007 [CrossRef]
  • Pardue, K., Tagliareni, M., Valiga, T., Davison-Price, M. & Orehowsky, S. (2005). Substantive innovation in nursing education: Shifting the emphasis from content coverage to student learning. Nursing Education Perspectives, 26, 55–57.
  • Scherer, Y.K., Bruce, S.A. & Runkawatt, V. (2007). A comparison of clinical simulation and case study presentation on nurse practitioner students’ knowledge and confidence in managing cardiac event. International Journal of Nursing Education Scholarship, 4, Article 22. doi:10.2202/1548-923X.1502 [CrossRef]
  • Schwid, H.A., Rooke, G.A., Michalowski, P. & Ross, B.K. (2001). Screen-based anesthesia simulation with debriefing improves performance in a mannequin-based anesthesia simulator. Teaching and Learning in Medicine, 13, 92–96. doi:10.1207/S15328015TLM1302_4 [CrossRef]
  • Shinn, M. (2006). External, not internal challenges to interdisciplinary research. American Journal of Community Psychology, 38, 27–29. doi:10.1007/s10464-006-9057-0 [CrossRef]
  • Voll, S. (2007). Utilizing computer-based simulators and standardized patients for nurse practitioner students instruction of pelvic examination. Simulation in Healthcare, 2, 72.
  • Wayne, D.B., Butter, J., Siddall, V.J., Fudala, M.J., Lindquist, L.A., Feinglass, J., et al. (2005). Simulation-based training of internal medicine residents in advanced cardiac life support protocols: A randomized trial. Teaching and Learning in Medicine, 17, 210–216. doi:10.1207/s15328015tlm1703_3 [CrossRef]

Table 1: Search Terms Used in the Review

  Search   Terms
  1        Simulat* and high-fidelity (S1)
  2        S1 and clinical
  3        S1 and teaching and learning
  4        S1 and educat*
  5        S1 and evaluat*

Table 2: Practice Areas and Level of the Participants

  Area of Application   No. of Studies   Practitioners/Students
  Nursing               16               4/12
  Medicine              6                5/1
  Interdisciplinary     1                1/0

Table 3: Methods of Evaluation of Simulation Studies

  Evaluation                                          No. of Studies
  Pretest and posttest                                10
  Objective Structured Clinical Examination (OSCE)    7
  Pretest, posttest, and OSCE                         2
  Other                                               4

Table 4: Types of Studies Conducted and Area of Study

  Practice Discipline   Clinical Skill Competence   Confidence/Perceived Competence   Combination of Clinical Skills and Confidence Scores
  Nursing               8                           3                                 5
  Medicine              7                           0                                 0

Table 5: Influence of Simulation on Student Performance

  Student Performance                               Increase         No Difference   Decrease   Total
  Confidence scores and feelings of competence      21 (91%)         2               0          23
  Ability to assess and perform clinical skills     11 (49%)*, 9**   3               0          23

doi:10.3928/01484834-20090828-08
