As nurses, we pride ourselves on how experience and continued education propel us from novice to expert clinicians. As we transition from the clinical role to an educator role, we often overlook the natural learning curve that exists from novice educator to expert educator. The novice nurse educator, who has less than 2 years of didactic teaching and item-writing experience, must receive adequate orientation in this complementary role to successfully evaluate teaching and learning. Although written examinations are often associated with the academic setting, clinical educators wishing to confirm an understanding of content in the field and/or at the level of expert, such as with specialty certification, can use them to validate competence and knowledge.
Multiple-choice questions (MCQs) are objective and allow instructors to test large numbers of learners on a varied range of topics and cognitive levels with minimal resources (Tarrant & Ware, 2012). Each question, or item, is composed of three parts: the scenario, the stem, and the options.
The scenario sets the stage for the stem, telling the story of the clinical problem to solve and providing the information needed to answer the question. The scenario should be short and avoid extraneous information (Siroky & Di Leonardi, 2015). The stem poses the question itself and should be presented in question form. Together, the scenario and stem define the objective or problem of the question. Finally, the options provide plausible choices from which learners select the correct answer; the incorrect options are called distractors.
To the untrained eye, construction of these questions might appear elementary. However, learners use common construction mistakes, such as grammatical errors or implausible distractors, to increase their odds for selecting the correct response (Tarrant, Knierim, Hayes, & Ware, 2006). Low-quality test items allow learners to rely on testing skills instead of knowledge or understanding for success (Tarrant & Ware, 2012).
To ensure MCQs provide accurate representations of learning, instructors must receive training and support from educators experienced in item composition (Tarrant & Ware, 2012). Many high-stakes examinations in the health sciences community, including the National Council Licensure Examination, are composed of MCQs often intermixed with several alternate item formats. High-quality MCQs must be free of item-writing flaws and test learners on higher cognitive levels.
Professional nurses gather and analyze large amounts of patient data and information to guide clinical judgment and patient care. Medical certifying examinations test beyond factual knowledge to evaluate a learner's complex cognitive functions to safeguard the standards of practice to which the examinees will be held accountable (Tarrant et al., 2006).
Clinical experts who contribute to certifying examinations can improve the validity of the tests by composing high-quality questions that require a profound grasp of knowledge framed by application in the clinical arena (Siroky & Di Leonardi, 2015). Novice educators should begin test-item creation following nine basic principles outlined under three major themes. With time, practice, and adequate mentor support, educators can improve item writing and overall examination quality.
Novice nurse educators come from a variety of backgrounds, often without any formal training in nursing education or item writing. Using an item-writing rubric and framework will assist educators to learn skills needed to develop high-quality MCQs. Tarrant and Ware (2012) identified a framework for improving the quality of an MCQ test. Their framework included three categories: planning, test development, and test review. The focus of this article is to identify actions and interventions that are likely to lead novice educators to write better quality MCQs. The framework by Tarrant and Ware (2012) is easy to communicate and understand, and it serves as a guide for developing a single MCQ.
There are three main steps in developing a high-quality MCQ to guide novice nurse educators when item writing: planning, question development, and item review. Table 1 outlines these steps.
Establishing a clear objective or concept to evaluate is the first step in developing high-quality MCQs (Naeem, Vleuten, & Alfaris, 2012). Nurse educators must consider what they want learners to know. After the objective has been established, educators must determine and understand the educational level of the learners (Tarrant & Ware, 2012). Is the testing objective a new concept or one building on previously learned knowledge? The course progression of the curriculum track also must be considered. Are the learners first-semester nursing students, students immersed in their final practicum, or staff nurses engaging in practice updates and continuing education? Understanding the educational level of the learners will help determine the cognitive level at which to write the MCQ.
Application and analysis level questions of Bloom's Taxonomy best evaluate learners' critical thinking ability and are used often in nursing education assessments (Tarrant & Ware, 2012). Novice nurse educators tend to write items that are on the lower end of Bloom's Taxonomy pyramid, such as items that test memorization and conceptualization. Although lower-level items are necessary and useful for quizzes and practice tests, nursing examinations that assess competency require items that test application, analysis, and evaluation.
For an item to assess higher levels of learning, such as applying knowledge, the item writer must decide whether learners will use the information in the scenario combined with nursing judgment to answer the question posed in the stem. Requiring learners to make a decision about what to do after assessing the information in the scenario elevates the level of learning from remembering to applying (Siroky & Di Leonardi, 2015). Just as learners are encouraged to build their skills with time and experience, novice nurse educators are encouraged to embrace where they are in their professional development, starting with the design of more concrete items and progressing to more complex items as their confidence and abilities grow.
The next step in item writing is developing a clinical scenario, a clear and direct stem, and finally, plausible and uniform answer options. The concept to test that was identified in the planning stage should be presented clearly to learners in the stem. The lead-in should contain only the information necessary to answer the question correctly. Learners should be able to answer the question correctly prior to reading the options (Naeem et al., 2012; Tarrant & Ware, 2012). The clinical scenario should include the client, subjective and objective data, and a description of the concept or disease process.
Depending on the item construct, clinical scenarios can allow for testing varied levels of understanding defined by Bloom's Taxonomy. However, as described previously, only pertinent information should be included in the clinical scenario. All options should be equally plausible, and only one correct answer should exist (Naeem et al., 2012; Tarrant & Ware, 2012). Distractors should be the most common misconceptions or misunderstandings about the objective. Items with three options, two of which are effective distractors, provide higher quality questions than items with four options where one acts as an implausible filler or simply a placeholder (Tarrant & Ware, 2012). Effective distractors best assess learners' knowledge (Tarrant & Ware, 2012).
Answer options also should be uniform: equal in length and amount of detail, and homogeneous in content (Naeem et al., 2012; Tarrant & Ware, 2012). Therefore, “all of the above” and “none of the above” should not be used as options.
As for option ordering, arranging all options alphabetically or by length distributes the correct answer randomly among the available positions (Tarrant & Ware, 2012). Furthermore, numerical options should be ordered in ascending or descending order. Testing data indicate that both item writers and test takers tend to show bias in favor of middle-positioned options (Tarrant et al., 2006). Ordering answer choices randomly and evenly may decrease this bias, making students less likely to find success in random guessing.
Review and Proofing
The final step to writing a high-quality MCQ is the review and proofing process. The MCQ, in its entirety, should read as a clear and complete thought. Proper grammar and correct spelling should be maintained throughout the clinical scenario, stem, and option choices (Tarrant & Ware, 2012). Option tense and verbiage should be consistent with that of the clinical scenario and stem.
Medical jargon should be limited, and abbreviations should be defined unless the abbreviation is a critical part of the concept being tested (Naeem et al., 2012). Absolutes (e.g., always and never), negative words (e.g., except), and vague terms (e.g., often) should be avoided because they are confusing and easily eliminated (Tarrant & Ware, 2012). Furthermore, unnecessary phrases such as “of the following” also should be avoided. Finally, negative questions, which often use the words not or except, should be limited or rephrased to be positive.
Using the outlined tips as a checklist allows novice nurse educators to focus on the construction of one high-quality item at a time. Although time-consuming, this process ensures each item meets minimum standards, which supports understanding and application of the material, rather than a “beat the test” approach. After items have been reviewed and revised by more advanced item writers, novice nurse educators and clinical educators can be confident in their attempts to provide high-quality examinations. As the examination is administered and performance data accumulate, educators will be able to evolve the individual items into an exemplary measure of student achievement.
- Naeem, N., Vleuten, C.V.D. & Alfaris, E.A. (2012). Faculty development on item writing substantially improves item quality. Advances in Health Sciences Education, 17, 369–376. doi:10.1007/s10459-011-9315-2
- Siroky, K. & Di Leonardi, B.C. (2015). Refine test items for accurate measurement: Six valuable tips. Journal for Nurses in Professional Development, 31, 2–8. doi:10.1097/NND.0000000000000123
- Tarrant, M., Knierim, A., Hayes, S.K. & Ware, J. (2006). The frequency of item writing flaws in multiple-choice questions used in high stakes nursing assessments. Nurse Education in Practice, 6, 354–363. doi:10.1016/j.nepr.2006.07.002
- Tarrant, M. & Ware, J. (2012). A framework for improving the quality of multiple-choice assessments. Nurse Educator, 37, 98–104. doi:10.1097/NNE.0b013e31825041d0
Nine Tips for Item-Writing Outlined by Planning, Item Development, and Review
| Item Development Phase | Tips | Considerations |
| --- | --- | --- |
| Planning | Establish a clear objective. Determine the educational level and course progression of learners. | “What do I want learners to know?” Higher-level items assessing application, analysis, and evaluation are preferred. |
| Item development | Write a clinical scenario and stem that contain only the information necessary to answer the question or objective. | Options should be equally plausible, with only one correct answer; distractors should be common misconceptions. Options should be uniform in length and detail, arranged alphabetically, by length, or in numerical order. |
| Review and proofing | The clinical scenario, stem, and options should read as a clear and complete thought. | Review grammar, spelling, tense, and verbiage. Remove medical jargon and absolutes; limit negative and vague terms; avoid “of the following.” |