Q-methodology is a research approach that provides investigators with the ability to investigate and measure subjectivity (Brown, 1980). When Q-methodology is used, the aim is to reveal patterns of thinking, rather than the proportion of people holding a particular perspective. Q-methodology uses a combination of qualitative and quantitative techniques with unique terminology and particular methodological processes (Watts & Stenner, 2012).

In brief, Q-studies begin with a comprehensive collection of opinions on a topic of interest (Brown, 1980; Stephenson, 1953). The opinion statements (termed a concourse) become the population of interest. However, the concourse can potentially contain hundreds of statements. Consequently, it is necessary to reduce the concourse to a workable and representative subset (a Q-sample). Typically, 40 to 60 statements sampled from the concourse are sufficient in number to elicit existing points of view (Brown, 1980). Recruiting people for Q-studies is purposeful and based on participants’ potential to hold differing points of view, with relatively small numbers of individuals (typically 40 to 60) needed (Brown, 1980).

The recruited participants (a P-set) rank and order the Q-sample statements into a quasi-normal distribution grid. This rank-ordering process (Q-sorting) forces participants to judge each statement relative to all other statements and, in so doing, reveal underlying values, beliefs, and ways of thinking. Participants’ individual Q-sorts (each a unique arrangement of Q-sample statements) undergo correlation and factor extraction–rotation procedures. Interpretation of the resulting factors proceeds by converting factor z scores into factor array scores. A factor array displays a reconfigured Q-sort based on the composite and weighted z scores from all participants who define a particular factor (Watts & Stenner, 2012).
During factor interpretation, distinguishing statements (i.e., statements placed in statistically different positions, compared with all other factors) and characterizing statements (i.e., statements placed at the polar ends) are given special attention (Watts & Stenner, 2012). Likewise, factor interpretation considers post-sort explanations offered by participants on their reasons for placing statements within the distribution grid. Together, these factor analytic procedures and interpretive approaches determine how individuals group together to reveal differing points of view and patterns of thinking (Brown, 1980).
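For readers unfamiliar with the mechanics of the correlation step described above, a minimal sketch follows. All values here are invented for illustration (three participants sorting only 10 statements; actual Q-studies, including this one, use far larger sorts):

```python
import numpy as np

# Hypothetical example: three participants rank 10 statements on a
# -5 (most not recommend) to +5 (most recommend) scale. Values invented.
sorts = np.array([
    [-5, -3, -1,  0,  0,  1,  1,  3,  5, -1],
    [-5, -3, -1,  0,  0,  1,  1,  3,  5, -1],  # identical to sort 1
    [ 5,  3,  1,  0,  0, -1, -1, -3, -5,  1],  # mirror image of sort 1
])

# By-person correlation: people (rows) are the variables, not statements.
r = np.corrcoef(sorts)
print(np.round(r, 2))
```

Identical sorts correlate at +1 and mirror-image sorts at −1; the factor extraction that follows groups participants whose sorts correlate highly, which is how shared perspectives emerge from individual rankings.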
Concourse and Q-Sample
In the current study, 392 opinion statements about simulation design gathered from 35 nurse educators and the simulation literature base were used to populate the concourse. Guided by the NLN-JSF, a 3×5 factorial design (teacher, student, educational practices × the five simulation design characteristics) provided the structure to construct a 60-statement Q-sample. The steps in Q-sample construction undertaken for the current study are reported elsewhere (Paige & Morin, 2014). Prior to conducting the Q-study, a pilot study tested the Q-sample, Q-sorting process, and recruitment strategies.
Participant Selection (P-Set)
Nursing students’ experience with simulation and the enrollment size of nursing programs are demographic factors that could influence students’ opinions (perspectives) about simulation design. To reduce the risk of missing viewpoints, it was important to recruit students with varying levels of experience with simulation: first-time participants, as well as students who have participated in multiple simulation experiences, have opinions that need to be heard. Thus, to locate variation in possible points of view, a 3×3 (nine-cell) P-set matrix provided the sampling frame to recruit 45 nursing students (Table 1).
P-Set Matrix for Recruitment and Recruitment Results
Nursing Student Recruitment
The National Student Nurses Association (NSNA), with more than 60,000 members (NSNA, 2012), provided the vehicle for accessing nursing students. Following institutional review board approval, recruitment memorandums were posted in the NSNA weekly newsletter—one in September 2012 (48 replies of interest) and a second in March 2013 (47 replies of interest). Given the aim to recruit five participants per cell of the nine-cell P-set matrix, study packets were mailed to 58 responders, and 32 were returned (55% return rate). Because nursing student respondents were still lacking from some P-set matrix cells, a second recruitment strategy accessed students in attendance at the February 2013 Wisconsin Student Nurses Association conference. These two recruitment strategies resulted in a P-set of 45 nursing students. As evident in Table 1, participant recruitment for each of the nine cells ranged from two to six nursing students. Although five participants per matrix cell was desired, according to Brown (1980), it is unnecessary to achieve a completely balanced P-set matrix because the matrix provides only a guide to locate diverse views. Demographic descriptors of the P-set are presented in Table 2.
Demographics of Nursing Student P-Set
Nursing students received a $5 gift card as an incentive, a letter stating that completion of the card-sorting activity served as their consent to voluntarily participate in the study, and the following study items: (a) a deck of 60 cards (each card included one opinion statement from the Q-sample and was randomly numbered from 1 to 60), (b) the condition of instruction for the card sort, (c) a distribution grid (Figure 2), and (d) a tabulation sheet. Students sorted the cards according to the question, “What would you most recommend, or most not recommend, in the design of a simulation activity in nursing education?” Students provided written explanations for cards placed at the −5 and +5 ends of the distribution grid.
Distribution grid of the card sort.
By-person factor analysis (i.e., factoring individuals by their thinking patterns, rather than by traits, as in conventional factor analysis) involved the sequential application of correlations, factor extraction–rotation, and computation of weighted factor arrays (McKeown & Thomas, 2013). A principal component extraction method with varimax rotation (Watts & Stenner, 2012) located the best factor solution—one that explained the maximal amount of variance in the correlation matrix, minimized the number of confounded and nonsignificant sorts, and avoided significant interfactor correlations. A 0.01 significance level determined significant factor loadings. A free software program, PQMethod 2.33 (Schmolck, 2012), specifically created for Q-methodology, facilitated the statistical calculations and generation of factor arrays. The qualitative techniques applied a constant comparative process, in which the resulting factor arrays were compared for differences and similarities (Brown, 1980). Students’ written explanations for cards placed at the ends of the distribution grid, along with attention to distinguishing and characterizing statements, contributed interpretative value (Brown, 1980). Together, these procedures facilitated a gestalt approach to factor interpretation.
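The study itself relied on PQMethod for these calculations. Purely as an illustration of the extraction–rotation sequence (not a reimplementation of PQMethod), the pipeline can be sketched as follows, using invented random data and an arbitrary choice of two retained factors:

```python
import numpy as np

def varimax(loadings, max_iter=100, tol=1e-6):
    """Kaiser varimax rotation of a (persons x factors) loading matrix."""
    p, k = loadings.shape
    rotation = np.eye(k)
    variance = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        u, s, vt = np.linalg.svd(
            loadings.T @ (rotated ** 3
                          - rotated @ np.diag((rotated ** 2).sum(axis=0)) / p)
        )
        rotation = u @ vt
        new_variance = s.sum()
        if new_variance < variance * (1 + tol):
            break
        variance = new_variance
    return loadings @ rotation

rng = np.random.default_rng(0)
sorts = rng.normal(size=(9, 20))      # invented: 9 participants x 20 statements
corr = np.corrcoef(sorts)             # by-person correlation matrix (9 x 9)
vals, vecs = np.linalg.eigh(corr)     # principal component decomposition
top = np.argsort(vals)[::-1][:2]      # retain 2 factors for the demo
unrotated = vecs[:, top] * np.sqrt(vals[top])  # unrotated factor loadings
rotated = varimax(unrotated)          # varimax-rotated loadings
```

Because varimax is an orthogonal rotation, each participant’s communality (squared loadings summed across factors) is unchanged; rotation only redistributes variance among the factors to sharpen which participants define which factor.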
Inspection of results revealed five distinct factors (perspectives) held by nursing students that explained 42% of the study variance. Twenty-seven of the 45 nursing students loaded solely on one of the five factors, 15 students loaded (confounded) on two factors, and three students did not load on any factor (Table A; available in the online version of this article). Nonsignificant interfactor correlations (p > 0.01) indicated that each factor represented a distinct perspective. To avoid obscuring factor clarity, Q-sorts confounded on more than one factor were excluded from computation of the factor arrays and subsequent factor interpretation (Watts & Stenner, 2012). Factor descriptions follow, exemplified with Q-sample statements (card number [numbered 1 to 60], array score [range −5 to +5]) and written explanations (quotes) from students explaining why they placed the cards at −5 and +5. Factor array tables (Tables 3–7) compare the ranking of statements across factors.
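As context for the loading criterion, the conventional test in Q-methodology (Brown, 1980) treats the standard error of a zero-order factor loading as 1/√n, where n is the number of Q-sample statements, so at the 0.01 level a loading is significant when its absolute value exceeds 2.58/√n. For the 60-statement Q-sample used here, that threshold works out as follows:

```python
import math

n_statements = 60                              # size of the Q-sample in this study
standard_error = 1 / math.sqrt(n_statements)   # SE of a zero-order loading
threshold = 2.58 * standard_error              # 2.58 = two-tailed z for p = .01
print(round(threshold, 2))                     # -> 0.33
```

A Q-sort loading above roughly 0.33 on a single factor would thus count as defining that factor; sorts exceeding the threshold on two factors are the confounded cases excluded from the factor arrays.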
Nursing Student - Factor Loadings
Factor Array for the “Let Me Show You” (Factor 1) Perspective
Factor Array for the “Stand By Me” (Factor 2) Perspective
Factor Array for the “Agony of Defeat” (Factor 3) Perspective
Factor Array for the “Let Me Think It Through” (Factor 4) Perspective
Factor Array for the “I’m Engaging and So Should You” (Factor 5) Perspective
Factor 1: The “Let Me Show You” Perspective
Four nursing students loaded solely on factor 1, which explained 11% of the study variance (Table 3). Students holding this perspective want to figure things out on their own (#20, +4), want to receive minimal assistance and cueing (#22, +4), and let the simulation happen as it happens (#57, +3). These students want to talk during the debriefing to figure out what they know (#40, +4). They prefer verbal debriefing, rather than written debriefing (#50, −5), which is most likely related to their comfort level with talking. They are indifferent regarding whether learning objectives are specific (#17, 0) or that cues are scripted and consistent among students (#47, −4). They expect all students to prepare for all simulation roles (#13, +5). They are not interested in playing non-nursing roles (#25, +5). They see no benefit in mixing students across different levels within the program (#54, −5).
Factor 2: The “Stand By Me” Perspective
Eleven nursing students loaded solely on factor 2, which explained 10% of the study variance (Table 4). Students holding this perspective want structure and guidance in their learning. Students want an orientation and opportunity to practice with the manikins (#23, +4). They desire specific learning objectives (#17, −5) and find it helpful when the learning objectives are verbally reviewed (#16, +3). If they are uncertain about what to expect, mistrust may develop. Students recommend that simulations follow theoretical content (#29, +4). They are least interested in non-nursing role-playing (#25, +5; #15, −4), as this “reduces the reality” of the simulation and could “confuse the student” if the role is not well “scripted.” These students clearly prefer interacting with actual patients in the clinical setting, rather than with simulated patients (#56, −4), in part because, as indicated by one student, “two less hours spent in a simulation is cheating the student out of learning time they paid for.” Students appreciate working “together as it calms anxiety,” and they are okay with the educator being present in the simulation room (#9, −4) to offer direction on use of equipment and guidance in figuring out the situation (#20, −5; #58, +3). They consider it acceptable to stop a simulation to correct mistakes as they happen (#57, −2). During the debriefing, students count on the educator to ask questions (#6, +5; #40, −2) to help them understand their own thinking process.
Factor 3: The “Agony of Defeat” Perspective
Five nursing students loaded solely on factor 3, which explained 8% of the study variance (Table 5). Students holding this perspective are most concerned about how they feel following the simulation experience, as one student stated: “It is very important that everyone feels like a ‘super’ nurse when they leave.” Students want to leave the simulation feeling good about themselves, as opposed to feeling defeated (#60, +5). In part, a feeling of defeat relates to whether grading of simulations occurs (#30, +5). Instead, students recommend that points be allocated for “showing up prepared and participating” or as “a pass-or-fail” assessment. Compared with other perspectives, students are least likely to value presimulation assignments (#42, −2) or review of learning objectives (#16, −2), perhaps because they can rely on each other to get through the simulation (#10, +4). These students do not recommend singling out weaker students (#8, −5) because “it puts too much pressure on them and could be embarrassing.” It is okay to stop a simulation to offer guidance (#57, −4). Students consider the use of humor to be important (#39, +4) and value the opportunity to role-play non-nursing characters (#25, −4).
Factor 4: The “Let Me Think It Through” Perspective
Three nursing students loaded solely on factor 4, which explained 7% of the study variance (Table 6). Students holding this perspective see greater value from simulation if educators are properly trained in simulation technology (#38, +5; #4, +3) and understand how to use it (#46, +4), “as it doesn’t help us learn when the main piece of equipment (manikin) is broken and no one can fix it.” Students may see a connection between educators’ level of training and teaching expertise and their own feelings of defeat (#60, +5) or of being singled out if struggling (#31, −5). For example, a preference exists for not being interrupted to provide assistance with equipment (#58, −4) or being redirected by cueing (#41, −5) because it throws off one’s train of thought, as indicated by one student: “I don’t like it when my thoughts are stopped, it makes me feel stupid and makes me more nervous.” Students prefer not stopping a simulation (#57, +3) or having others think aloud (#7, −3) because it could interfere with independent thought, as “students need to learn on their own without someone else putting the idea in their head.” Diverging from other perspectives, these students recommend written, in addition to verbal, debriefings (#50, +4) and are less interested in being questioned during debriefing (#6, +1). These students have no qualms about making things up (#33, −2), and pretending (#14, −3) during a simulation is acceptable to them.
Factor 5: The “I’m Engaging and So Should You” Perspective
Four nursing students loaded solely on factor 5, which explained 6% of the study variance (Table 7). Although all perspectives recommend creating a realistic simulation, students holding this perspective have the strongest feelings about realism. They see reality created in the detail and functioning of the equipment (#35, +5), as well as how seriously educators (#36, +4; #39, −4) and students (#21, +4) take the simulations. Focusing on the lack of realism is unnecessary (#24, −5) and use of the word pretend is not acceptable (#14, +5). Contrary to other student perspectives, it is acceptable to allow grading of simulations (#30, −4; #34, +2) and deliver consequences if students do not take the simulation seriously (#21, +4). Students sharing this perspective recommend viewing video recordings of the simulations (#51, −5) and having presimulation assignments (#42, +3), and they are neutral regarding whether “weaker” students are placed in roles that force them to perform (#8, 0), stating that “weak students need help! Simulation is a wake-up call for them.” Of all perspectives, those sharing this view are least concerned about students feeling defeated following a simulation (#60, −1).
Discussion of Perspectives
Study results revealed that nursing students hold five distinct perspectives about the way nurse educators design simulations. Inspection of the findings indicates that participation in simulation activities evokes different emotional responses. Several possible reasons exist for these findings. Anxiety is a common emotional response, with some of the particular circumstances contributing to anxiety revealed in the perspectives. Students holding the Stand By Me and The Agony of Defeat perspectives indicate that anxiety increases if educators are not able to offer assistance or if they feel singled out as a weaker student. These findings are comparable to other studies. Cordeau (2010) found that perceived anxiety occurs when students do not know what to expect, when they are being video recorded, and when they fear failure. Kelly, Hager, and Gallagher (2014) found that what matters most to students is having academic support during simulation delivery, a practice that varies among educators. Ganley and Linnard-Palmer (2012) and Nielsen and Harder (2013) reported videotaping to be a contributor to student anxiety; however, the five perspectives revealed in the current study indicate that students had no qualms about being videotaped.
A feeling of defeat is an emotional response that exists in the Agony of Defeat perspective. Acknowledging the existence of this perspective is vital, but more important is gaining an understanding of what contributes to this defeated feeling. Written explanations for cards placed at −5 or +5 provide helpful insight into the differing accounts for this feeling. In the Agony of Defeat perspective, students indicate they want to feel good about themselves and feel bad and inadequate if they do not perform up to expectations. Conceivably, this feeling of defeat relates to the visible identification of learning gaps. During simulations, students witness each other’s floundering, as opposed to other learning activities during which another student’s performance is not as obvious. Parker and Myrick (2012) labeled this type of situation as “performing in the fishbowl” (p. 368).
A finding that deserves further investigation is the discovery that students holding the Agony of Defeat perspective are least likely to recommend use of presimulation assignments or review of learning objectives. This finding calls into question whether student preparation, or lack thereof, influences the degree to which students experience a feeling of defeat. Furthermore, students who held the Agony of Defeat perspective, in part, associate their defeated feeling with the grading of simulations. However, it is unclear what defines a grade. Even though the topic of grading simulations is discussed in the literature (Cordeau, 2010; Sportsman, Schumacker, & Hamilton, 2011), it is unclear whether this is in reference to a team or an individual grade, or whether the grade is based on points for performance, for showing up prepared, or for participation. The student perspectives, as revealed in the current study, may reflect this variation in grading practice. Of note is the finding that students holding the I’m Engaging and So Should You perspective consider grading of simulations acceptable, and the feeling of defeat holds little salience for them. Rather, students holding an I’m Engaging and So Should You perspective express frustration with their peers and are more likely to recommend consequences for students who do not take simulation seriously. Considering these findings, it is important to recognize that simulations are designed either as a learning activity (formative assessment) or as an evaluation activity (summative or high stakes). It is possible that the student perspectives revealed in this study reflect students thinking of simulation as either a learning activity or an evaluative activity.
Results of this study reveal new findings. The Let Me Think It Through perspective has not been described in the simulation literature. Students who hold this perspective need extra time to work things out in their minds and can get off track if their train of thought is interrupted. It is conceivable that students holding the Let Me Think It Through perspective may have additional difficulty recovering from an interruption in thought. What remains unknown is whether characteristics exist that place students at higher risk for this interruption in thought. Various studies have investigated task interruptions (Altmann, Trafton, & Hambrick, 2014; Brumby, Cox, Back, & Gould, 2013), including the interruptions of nurses as they work in health care environments (Grundgeiger, Sanderson, MacDougall, & Venkatesh, 2010). It may be helpful to explore whether students (and future nurses) have particular tendencies that limit their ability to recover from an interruption in their thought process. Students holding the Let Me Think It Through perspective may benefit from a written debriefing assignment that can provide this opportunity; indeed, students holding this perspective recommended this option (+4). They may also be a subset of students who would benefit from repeating a simulation.
In addition, a finding not previously reported in the literature is the variability in how students view stopping a simulation. For example, students holding the Let Me Think It Through perspective consider that stopping a simulation could interfere with their train of thought. On the other hand, students holding The Agony of Defeat and Stand By Me perspectives expect simulations to be stopped if they are doing something wrong. At the same time, students holding the Let Me Show You perspective want the opportunity to figure things out on their own, receive minimal assistance and cueing from educators, and prefer not to stop simulations. Students holding the I’m Engaging and So Should You perspective take offense when other students are unprepared and prefer not to stop a simulation to offer help. The reasons for these diverse preferences in whether to stop a simulation likely relate to each student’s unique needs, such as learning style, level of academic ability, level of preparation, and comfort with simulation, to name a few.
Perspectives Within the Context of NLN-JSF
The NLN-JSF (Jeffries, 2012) conceptualizes five simulation design characteristics that nurse educators are to consider when designing simulation activities. Objectives, as one design characteristic, are to be clear, concise, realistic, and correspond to students’ level of knowledge and experience (Jeffries, 2012). However, the appropriate degree of specificity for a learning objective remains unknown (Groom et al., 2014). As revealed in the current study, students who hold the Stand By Me, Agony of Defeat, or I’m Engaging and So Should You perspective recommend specifically written objectives, whereas students who hold a Let Me Show You or Let Me Think It Through perspective are indifferent as to whether objectives are specific or general.
Student support, as a design characteristic, occurs when assistance is provided to students but does not interfere with their independent thought (Jeffries, 2012). Allowing time for students to problem solve and make decisions is congruent with the perspectives revealed in the current study. However, in the NLN-JSF, student support connotes an instructional approach initially derived from use of cues (Jeffries, 2012), whereas the perspectives in the current study reveal the importance of an emotional component to support. Findings from the current study suggest that it may be necessary to reexamine student support, not only from an instructional approach but also to include an emotional approach.
Findings revealed that fidelity is an important design characteristic across all perspectives and happens when equipment is functional and educators are proficient in its operation. Therefore, in addition to creating realism, it is equally important that educators know how to maintain it by being properly educated about how to effectively use and troubleshoot the technology.
Problem solving, as a design characteristic, happens when opportunities are designed into a simulation that engage students in tasks that increase knowledge and skills and challenge beliefs (Jeffries, 2012). Yet, student perspectives in the current study differed in their recommendations for this design characteristic. Some students wanted to problem solve independently, with minimal educator or peer assistance, whereas other students depended on others to help them along in their thinking.
Finally, debriefing, as a design characteristic, occurs when the educator facilitates students’ reexamination of the clinical encounter to foster clinical reasoning and judgment (Jeffries, 2012). This characteristic was important across perspectives, as students wanted educators to help them understand their own ways of thinking. Yet, the level of student participation expected during debriefing varied across perspectives. Conceivably, this is due to the varying level of students’ comfort with their knowledge, as well as the time individual students need to process information.
Implications for Educational Practice
Brookfield (2006) claimed that educators need constant awareness regarding how students experience learning and perceive educators’ actions. However, given that students may not always be honest, upfront, or comfortable expressing their views, uncovering the beliefs underpinning student perspectives can be a challenge—hence the value that Q-methodology contributes in revealing the subjectivity inherent in perspectives (Brown, 1980). Based on the perspectives that emerged from this study, it became apparent that students experience simulations in personal and diverse ways. Understanding these perspectives is important, given the trend to incorporate simulation into nursing curricula. Prior to making curricular decisions based on student learning outcomes following simulation activities, confidence in the simulation activity as the educational intervention must first be established. The five nursing student perspectives, as discovered in this study, offer a student-centered viewpoint on ways to design and conduct simulation activities. The following are new ideas for nurse educators to deliberate as they design, conduct, and use student feedback to evaluate and revise simulations.
First, the diversity in perspectives requires educators to understand their particular group of students. Bearing in mind that simulations typically involve a group of students, it is likely that any particular simulation may include students holding one or more of the perspectives discovered in this study. A novel activity to assess the diversity in student views is to poll students on how well each of the five perspectives matches their way of thinking. The first author (J.B.P.) has undertaken such activities and found the resulting information useful. For example, if it is found that students hold a Let Me Think It Through perspective, then instructional delivery and timing of cues can be tailored to meet the needs of these students, who need more time to process information. If this group of students exists, it may be beneficial to encourage them to meet with educators at a later point following the simulation to reinforce their learning. Of note, when students read the descriptions of the five perspectives, they acquire insight into the thinking of their peers and factors that may influence teamwork dynamics. This insight is useful and can be transferred into future practice, as members of health care teams similarly have different ways of thinking.
Second, it is important that nurse educators reaffirm that students and fellow educators understand the purpose of the simulation activity. If this does not occur, students may see incongruences among educators and, consequently, mistrust may develop. In the current study, students used phrases such as “being set up to fail,” “trying to trick me,” and “sink or swim” in their explanations for card placement. These phrases indicate that students may mistrust educators’ intent behind the simulation activity. Even if students review the learning objectives, they also need to be clear about the nature of the simulation—that is, whether it is a formative, summative, or high-stakes evaluation (Sando et al., 2013). In formative assessments, students are still learning the material, and simulations help students to make connections between theory and practice. Mistakes are expected, and students need reassurance that this is okay. In such simulations, investing time up front to discuss how mistakes will be handled and how support will be provided helps allay anxiety. On the other hand, summative or high-stakes evaluations assess whether students meet preestablished criteria. In these types of high-stakes simulations (which may result in student failure), it is conceivable that students feel they are “being set up to fail.” To counter this feeling, it is important that students are clear on the evaluative criteria and that the instruments used to make these determinations are valid and reliable (Robinson & Dearmon, 2013; Sando et al., 2013).
Third, requiring students to complete presimulation assignments that review knowledge and skills for the particular simulation activity can help allay anxiety and promote achievement of learning objectives (Elfrink, Nininger, Rohig, & Lee, 2009; Nielsen & Harder, 2013). Even if students claim that presimulation assignments are extra work, in retrospect, students holding four of the five perspectives in this study found presimulation activities to be beneficial.
Identification of these five perspectives generates further questions to investigate. For example, what impact would assigning students to groups based on similar or divergent perspectives have on learning outcomes or level of anxiety? If students who hold a Let Me Show You perspective were grouped together, would these students be able to reach learning objectives quicker? If students who hold a Let Me Think It Through perspective are grouped together, would these students be able to figure out and deal with the problem if given enough time? When using feedback to evaluate simulations, is it possible to correlate student perspectives about simulation design with students’ rating of simulation activities to offer more meaningful and evaluative data?
Several limitations to this study need acknowledgment. First, it is common in Q-methodology to conduct post-Q-sort, in-person interviews to understand the reasons behind card placement. Because this study recruited students from across the United States, the investigators relied on participants’ written explanations for cards placed at the ends of the grid. Although the written explanations offered by students were generally detailed and informative, in-person interviews may have provided additional information.
A second possible limitation was having students sort opinion statements that were gathered from nurse educators. Typically, in Q-studies, participants completing the sorting process are characteristically similar to those who provide the opinion statements (Brown, 1980; Watts & Stenner, 2012). However, in the current study, it was important to understand students’ perspectives about the actions nurse educators take during simulation design. To control for this limitation, the investigators conducted a pilot study to test the opinion statements (Q-sample) with nursing students.
Furthermore, no male nursing students participated in this study. It is possible that male students hold differing points of view that were missed. Finally, as students participate in simulation activities, attitudes toward simulation may change. Therefore, the current study provides a snapshot in time of the perspectives students hold about simulation design. As such, there is no guarantee that this one Q-study located all existing perspectives (Brown, 1980), yet the five perspectives it did discover are real.