Nursing education requires both classroom and clinical proficiency (Oermann & Heinrich, 2004). Among other obligations, clinical nursing faculty are responsible for evaluating students in both didactic and clinical courses. The clinical evaluations prepared by clinical nursing faculty are legal documents (Boley & Whitney, 2003; Smith, McKoy, & Richardson, 2001) and determine whether a nursing student should be allowed to progress to the next level of nursing education. Although clinical evaluation of students is an important faculty responsibility in all nursing programs, there is limited information, and even less research, describing how clinical nursing faculty determine students’ clinical proficiency.
Currently, many nursing programs use checklists or clinical evaluation forms that are standardized for all students based on curriculum goals for the semester (Oermann & Heinrich, 2004). Although these evaluation forms make student performance expectations explicit, there is little research or literature describing how clinical nursing faculty track student performance during the semester. For example, little is known about how, or whether, clinical nursing faculty actually use anecdotal notes in record keeping for clinical evaluation of students.
Classroom examinations and quizzes have more objective evaluation criteria (i.e., item analysis), and written assignments and projects often have predetermined grading rubrics (Oermann & Gaberson, 2006). These predetermined criteria can be used to monitor grading quality. However, there are no established means to evaluate the appropriateness of a clinical semester grade (Rentschler, Eaton, Cappiello, McNally, & McWilliam, 2007). Clinical grading is more subjective, and the responsibility falls on a single faculty member per clinical rotation.
The Commission on Collegiate Nursing Education (2003) referred to the importance of ongoing improvements in didactic and clinical teaching. However, it did not specifically recommend how clinical faculty should evaluate students.
Formative and summative evaluations are significant responsibilities of nursing faculty and are described in the Scope and Standards of Practice for Nursing Professional Development (American Nurses Association, 2000). Evaluations help support improvement in nursing practice, and this begins with clinical education experiences (Dickerson, 2005; Sommerfeld & Accola, 1978). Formative evaluation promotes ongoing improvements in student performance; as the term suggests, formative evaluation helps to “form” the learner. Many studies in the education and nursing literature promote the idea of formative evaluation (Bastable, 2003; Brittain, Glowacki, Van Ittersum, & Johnson, 2006; Davidson, 2005; Keating, 2006; Knight, 2002; Nolan & Hoover, 2005; Oermann & Gaberson, 2006; Reilly & Oermann, 1985), but there is little published nursing literature describing this process.
Specific Aim of the Study
The specific aim of this study was to describe clinical nursing faculty’s use of anecdotal notes and then to develop a clinical evaluation tool based on their feedback. Describing note use is an initial step in determining the process of clinical evaluation. A questionnaire administered to clinical nursing faculty was used to compile information for this descriptive study.
The theoretical framework of the study was based on the Context, Input, Process, Product (CIPP) model originally developed by the Phi Delta Kappa National Study Committee on Evaluation in the 1960s, chaired by Daniel Stufflebeam (Phi Delta Kappa, 1971). Stufflebeam emphasized the role of evaluation in identifying needed change to enhance outcomes. Others also have discussed the value of ongoing evaluation for improved teaching and learning in the clinical setting (Bastable, 2003; Davidson, 2005; Knight, 2002; O’Connor, 2001; Oermann & Gaberson, 2006; Reilly & Oermann, 1985). There is no published literature on the use of the CIPP model for individual student evaluation in the clinical setting.
Stufflebeam, McKee, and McKee (2003) argued that when the model is used objectively, different evaluators should be able to derive equivalent findings related to an individual or a product. The nursing discipline has no standardized model for evaluation of clinical education. Multiple clinical nursing faculty serve as evaluators throughout a student’s nursing education. The goal of adaptation of the CIPP model to clinical nursing education is to increase the objectivity in clinical evaluation of nursing students.
In the model’s adaptation to nursing, Context includes the critical thinking behind nursing students’ decisions and actions in the clinical setting and questions how well students bridge textbook knowledge into clinical practice. Input evaluation considers the student’s ability to develop a plan of care, including appropriate nursing interventions. Process evaluation questions the student’s performance of procedures, time management skills, and ability to document pertinent patient information. Product evaluation in the nursing model considers the student’s prioritization of patient goals and the patient’s outcomes during the clinical experience. On the basis of the CIPP model, nursing faculty record anecdotal notes regarding student performance in any of the four aspects of the adapted model. To lay the groundwork for the model’s use, a questionnaire identifying the areas of the adapted model where notes are documented was developed to obtain information from clinical nursing faculty.
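The four-domain adaptation described above can be sketched as a simple lookup structure. The note areas listed are illustrative examples drawn from this description, not the study’s actual questionnaire items:

```python
# A minimal sketch of the adapted CIPP model as a lookup structure.
# The note areas are illustrative examples, not the study's item set.
ADAPTED_CIPP = {
    "Context": ["bridging textbook knowledge into clinical practice"],
    "Input":   ["developing a plan of care", "selecting nursing interventions"],
    "Process": ["performing procedures", "time management", "documentation"],
    "Product": ["prioritizing patient goals", "evaluating patient outcomes"],
}

def domain_for(note_area):
    """Return the CIPP domain a given anecdotal-note area falls under."""
    for domain, areas in ADAPTED_CIPP.items():
        if note_area in areas:
            return domain
    return None
```

A structure of this kind would let anecdotal notes be tagged by domain at the time they are written, supporting the model’s goal of more objective evaluation.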
The main question addressed in this study was, “If used, how are anecdotal notes used by nursing faculty?” The operational definition of anecdotal notes provided to clinical nursing faculty was “a dated, student-specific notation by a clinical nursing faculty member, describing any component of the student’s clinical performance.”
A descriptive design was chosen for the study because no studies have been published regarding clinical nursing faculty’s use of anecdotal notes.
A convenience sampling design was used. Six nursing programs (three in southern Indiana and three in northern Kentucky) were invited to participate. Two of the programs were baccalaureate nursing programs and four were associate degree nursing programs. Homogeneity in number of clinical hours required for degree completion was noted among the six programs of nursing. Full-time, part-time, and adjunct undergraduate clinical nursing faculty who provided clinical instruction were eligible to participate.
An investigator-designed data collection tool was developed for the study. Three clinical nursing faculty with more than 10 years of clinical teaching experience and one faculty member with experience in instrument development were asked to assist with establishing validity, as described below. The reviewers were provided with both the conceptual and operational definition of the study variable.
Using the adapted CIPP model, 14 items questioned faculty about their note use across the model’s four domains: three items addressed Context (theory-to-practice), five addressed Input (development and prioritization of nursing interventions and how faculty evaluate students’ critical thinking abilities), five addressed Process (direct faculty observation of patient procedures, including medication administration), and one addressed Product (the student’s ability to evaluate patient outcomes during the clinical week and redirect nursing interventions as needed).
As recommended by Dillman (2007), items were included to hold the participants’ attention yet target the information required to form conclusions. Common historical evaluation points from a variety of nursing courses and clinical environments were considered during item development. Experts in clinical nursing education were asked to rate the items and their relationship to the adapted CIPP model to ensure appropriateness. A 10-point ordinal scale was used to obtain well-defined feedback from clinical faculty (Dillman, 2007). Other items addressed frequency of note use and faculty demographics. A total of 29 items were included on the instrument.
A two-stage process was used to establish content validity. Guidelines published by Polit and Beck (2006) were followed for the establishment of both a content validity index of the individual items used in the questionnaire, along with the content validity of the overall scale.
Experts were also asked their thoughts regarding questions that were overlooked. Based on their recommendations, two questions regarding demographics and one question regarding the influence of knowledge of the student’s personal situation were added.
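The two-stage index described by Polit and Beck (2006) computes an item-level content validity index (I-CVI) and averages these for a scale-level index (S-CVI/Ave). A minimal sketch follows, using hypothetical expert ratings on a 4-point relevance scale; the study’s actual expert ratings were not published:

```python
def item_cvi(ratings, relevant=(3, 4)):
    """I-CVI: proportion of experts rating an item relevant
    (3 or 4 on a 4-point relevance scale, per Polit & Beck, 2006)."""
    return sum(r in relevant for r in ratings) / len(ratings)

def scale_cvi_ave(all_ratings):
    """S-CVI/Ave: mean of the item-level CVIs across all items."""
    cvis = [item_cvi(item) for item in all_ratings]
    return sum(cvis) / len(cvis)

# Hypothetical ratings from 4 experts on 3 items (not the study's data)
ratings = [[4, 4, 3, 4], [3, 4, 4, 2], [4, 3, 4, 4]]
```

Polit and Beck recommend an I-CVI of at least 0.78 with more than two experts; items falling below the threshold are revised or dropped before the scale-level index is reported.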
Reliability was calculated for the instrument from the initial responses on each of the 21 items; of the project’s 64 participants, 8 were excluded due to missing data. The homogeneity of the items was tested using Cronbach’s alpha coefficient (Burns & Grove, 2005). A Cronbach’s alpha of 0.82 was obtained, supporting the internal consistency of the instrument.
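For readers checking this statistic outside SPSS, Cronbach’s alpha can be computed directly from the item responses as α = k/(k−1) · (1 − Σ item variances / variance of total scores). A minimal sketch with hypothetical data (not the study’s responses):

```python
from statistics import pvariance

def cronbach_alpha(responses):
    """Cronbach's alpha for internal consistency.

    responses: one row per respondent, each row holding that
    respondent's scores on the k items.
    """
    k = len(responses[0])
    items = list(zip(*responses))                     # one column per item
    item_vars = sum(pvariance(col) for col in items)  # sum of item variances
    total_var = pvariance([sum(row) for row in responses])
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical 4-respondent, 3-item data (not the study's data)
data = [[3, 3, 4], [4, 4, 5], [2, 3, 3], [5, 4, 5]]
alpha = cronbach_alpha(data)
```

Values of 0.70 or above are conventionally taken to support internal consistency for a new instrument, so the reported 0.82 is within the accepted range.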
Human Subjects Protection
The participants’ confidentiality and privacy were protected in that no personally identifiable data were collected.
During faculty meetings, 96 questionnaires were personally distributed to nursing faculty by the primary investigator, with a brief explanation of the project’s purpose. A total of 64 questionnaires were returned, for a 67% return rate. Time limitations or doubts about the study’s benefits may have prevented the other clinical faculty from participating (Dillman, 2007). Most respondents were female (98.4%), held a master’s degree (77%), and were employed full time (91%).
Nursing Faculty Questionnaire Results
SPSS version 14 software was used to compile descriptive data from faculty responses. The majority of clinical nursing faculty (68.8%) reported weekly use of anecdotal notes in clinical evaluation, and another 28.1% reported occasional use during the clinical semester; overall, 96.9% of the sample reported using notes. The topics considered most essential for note use were medication accuracy, attention to patient safety, and professional behaviors. Notes on the student’s attention to patient safety are addressed in the Input domain of the CIPP model, and notes on medication accuracy and professional behavior fall within the model’s Process domain.
Analysis of variance (ANOVA) was used to compare group means for anecdotal note use by faculty age. Faculty were divided into three age groups: 30 to 49, 50 to 54, and 55 and older. No significant difference was found between age groups in their use of anecdotal notes (F = 0.169, p = 0.845).
An additional ANOVA was used to compare group means for anecdotal note use by years of faculty experience. Faculty were divided into three equal groups according to experience: 0 to 6 years, 7 to 15 years, and 16 years or more. No significant difference was found between experience groups in their use of anecdotal notes (F = 0.285, p = 1.000).
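The one-way ANOVAs above reduce to a ratio of between-group to within-group mean squares. A minimal sketch with hypothetical note-use scores for three groups (not the study’s data):

```python
def one_way_anova_f(groups):
    """One-way ANOVA F statistic for a list of groups of scores."""
    all_scores = [x for g in groups for x in g]
    grand_mean = sum(all_scores) / len(all_scores)
    means = [sum(g) / len(g) for g in groups]
    # Between-groups and within-groups sums of squares
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, means))
    ss_within = sum((x - m) ** 2
                    for g, m in zip(groups, means) for x in g)
    df_between = len(groups) - 1
    df_within = len(all_scores) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical note-use scores for three faculty groups
f = one_way_anova_f([[7, 8, 6, 9], [8, 7, 7, 9], [6, 9, 8, 5]])
```

With three groups, the statistic is evaluated against the F distribution on (2, n − 3) degrees of freedom; an F near zero, as reported in the study, indicates the group means are nearly identical relative to within-group variability.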
A t test for independent means was used to compare anecdotal note use by educational preparation of clinical nursing faculty. Faculty were divided into two groups according to their educational preparation: baccalaureate (BSN) or graduate degrees. There was no significant difference between groups (p = 0.200); however, the baccalaureate (BSN) group had only 5 participants.
A second t test for independent means was used to compare anecdotal note use by level of nursing program (ADN vs. BSN programs). No significant difference was found in use of anecdotal notes between groups (p = 0.71).
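The independent-samples t statistic used in these comparisons can be sketched as follows, again with hypothetical scores rather than the study’s data:

```python
from statistics import mean, variance

def independent_t(a, b):
    """Pooled-variance t statistic for two independent samples."""
    na, nb = len(a), len(b)
    # Pooled estimate of the common variance
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / (sp2 * (1 / na + 1 / nb)) ** 0.5

# Hypothetical note-use scores for two faculty groups
t = independent_t([7, 8, 6, 9, 7], [8, 7, 9, 8, 9])
```

The statistic is evaluated on n₁ + n₂ − 2 degrees of freedom; with only 5 participants in the BSN group, as the study notes, the test would have had little power to detect a real difference.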
A possible explanation for the lack of significant findings between groups may be that, regardless of faculty age, experience, educational preparation, or program type, all faculty felt prepared for the process of student clinical evaluation and had prior positive experience with anecdotal note use. Another reason could be the historical importance of the items used in the instrument and the social desirability of agreeing that these items were noteworthy.
The study raised a number of additional questions for future study, such as how the note is used (i.e., formative, summative, or both), whether it is kept in the student record, and how faculty and students interact based on the notes (i.e., whether the notes are used for conferencing).
There may have been response bias in that each topic regarding note use was rated toward the “strongly agree” end of the scale (i.e., endorsed as an ideal reason to make a note). Conversely, faculty may simply have valued the flexibility of having multiple areas in which to remark.
Regarding faculty practice, it is of some concern that accuracy in medication administration was rated highest of all reasons to use an anecdotal note. Although medication administration is obviously central to patient safety, other aspects of care may have even more significant implications for overall patient well-being. For example, discharge planning not only encompasses medication teaching, but also covers many more aspects of patient needs in their home environment.
Altering the instrument to pair items with their corresponding domain in the adapted CIPP model could strengthen the theoretical model’s usefulness for evaluation. This could also support faculty in recognizing the importance of each of the model’s four domains for student evaluation. To further support the reliability and validity of the faculty questionnaire, a larger number of clinical faculty should be asked to complete the questionnaire. Additional reliability testing could then be performed. The sample could be larger and selected at random from nursing programs in a larger geographic area of the United States.
Adding an area for qualitative input from clinical nursing faculty could also be considered, to identify information missing from the original instrument. The qualitative piece should include information from clinical faculty regarding their note use within the four domains of the CIPP model. The initial instrument developed to collect faculty opinions regarding notes should continue to be refined to provide a clearer understanding of how clinical faculty use notes.
Because limited published literature about the process of clinical evaluation in nursing and anecdotal note use was found, the project used a descriptive design with a newly developed instrument. These factors could limit the perceived scientific value of the study’s findings (Burns & Grove, 2005).
Clinical nursing education and the evaluation of nursing students in the clinical environment have occurred since the inception of nursing programs (Howse, 2007; Wall, 2005), yet little published literature exists to document how this is accomplished. Although nursing education devotes substantial curriculum hours to clinical education, the discipline has no published standardized framework for evaluating nursing students in the clinical environment.
The use of anecdotal notes appears to be a common tool for evaluation, and this study provided preliminary information about some aspects of use. However, there are many additional components to the evaluation process that warrant investigation. With future studies, the adapted CIPP model of student evaluation can provide a useful framework for further exploration and development of standards.
- American Nurses Association. (2000). Scope and standards of practice for nursing professional development. Washington, DC: Author.
- Bastable, S.B. (2003). Nurse as educator: Principles of teaching and learning for nursing practice (2nd ed.). Boston: Jones & Bartlett.
- Boley, P. & Whitney, K. (2003). Grade disputes: Consideration for nursing faculty. Journal of Nursing Education, 42, 198–203.
- Brittain, S., Glowacki, P., Van Ittersum, J. & Johnson, L. (2006). Podcasting lectures. Educause Quarterly, 29(3), 24–31.
- Burns, N. & Grove, S.K. (2005). The practice of nursing research: Conduct, critique, and utilization (5th ed.). St. Louis: Elsevier Saunders.
- Commission on Collegiate Nursing Education. (2003). Standards for accreditation of baccalaureate and graduate nursing programs. Retrieved July 17, 2007, from http://www.aacn.nche.edu/Accreditation/standards.htm
- Davidson, E.J. (2005). Evaluation methodology basics. Thousand Oaks, CA: Sage.
- Dickerson, P.S. (2005). Evaluation: Part I: Evaluating learning activities. The Journal of Continuing Education in Nursing, 36, 191–192.
- Dillman, D.A. (2007). Mail and internet surveys: The tailored design method (2nd ed.). New York: Wiley & Sons.
- Howse, C. (2007). “The ultimate destination of all nursing”: The development of district nursing in England, 1880–1925. Nursing History Review, 15, 65–94. doi:10.1891/1062-8061.15.65 [CrossRef]
- Keating, S.B. (2006). Curriculum development and evaluation in nursing. Philadelphia: Lippincott Williams & Wilkins.
- Knight, P.T. (2002). Being a teacher in higher education. Philadelphia: The Society for Research into Higher Education & Open University Press.
- Nolan, J. & Hoover, L.A. (2005). Teacher supervision and evaluation: Theory into practice. Hoboken, NJ: Wiley & Sons.
- O’Connor, A.B. (2001). Clinical instruction and evaluation: A teaching resource. Boston: Jones & Bartlett.
- Oermann, M.H. & Gaberson, K.B. (2006). Evaluation and testing in nursing education (2nd ed.). New York: Springer.
- Oermann, M.H. & Heinrich, K.T. (Eds.). (2004). Annual review of nursing education (Vol. 2). New York: Springer.
- Phi Delta Kappa National Study Committee on Evaluation. (1971). Educational evaluation & decision making. Itasca, IL: Peacock.
- Polit, D.F. & Beck, C.T. (2006). The content validity index: Are you sure you know what’s being reported? Critique and recommendations. Research in Nursing & Health, 29, 489–497. doi:10.1002/nur.20147 [CrossRef]
- Reilly, D.E. & Oermann, M.H. (1985). The clinical field: Its use in nursing education. East Norwalk, CT: Appleton-Century-Crofts.
- Rentschler, D.D., Eaton, J., Cappiello, J., McNally, S.F. & McWilliam, P. (2007). Evaluation of undergraduate students using Objective Structured Clinical Evaluation. Journal of Nursing Education, 46, 135–140.
- Smith, M.H., McKoy, Y.D. & Richardson, J.R. (2001). Legal issues related to dismissing students for clinical deficiencies. Nurse Educator, 26, 33–38. doi:10.1097/00006223-200101000-00015 [CrossRef]
- Sommerfeld, D.P. & Accola, K.M. (1978). Evaluating students’ performance. Nursing Outlook, 26, 432–436.
- Stufflebeam, D.L., McKee, B. & McKee, H. (2003). The CIPP Model for evaluation. Proceedings from Oregon Program Evaluators Network (OPEN). Retrieved January 27, 2007, from http://www.wmich.edu/evalctr/pubs/CIPP-ModelOregon10-03.pdf
- Wall, B.M. (2005). Unlikely entrepreneurs: Catholic sisters and the hospital marketplace, 1865–1925. Columbus: Ohio State University Press.