Evaluation of faculty members in higher education has become increasingly important because of continuing financial constraints. Nursing faculty, therefore, must document their effectiveness in order to garner their share of limited education resources.
Traditionally, comprehensive faculty evaluation is based on the assessment of three areas of responsibility: teaching, scholarship, and service. Although different sets of criteria are used to measure performance in each of the three domains, the fundamental principle underlying faculty evaluation is the use of varied sources of valid quantitative and qualitative information.
The evaluation of faculty teaching competence serves many purposes: collecting data for decisions concerning promotion, tenure, renewal, and merit awards; improving the quality of teaching; assisting faculty members in self-evaluation; improving accountability in education; meeting criteria for the approval of the academic institution; and identifying content areas for faculty development programs.
Key Sources for Data Collection on Teaching Competence in Nursing
Teaching in nursing encompasses the classroom, the clinical area, and related functions such as informal interaction with students before and after teaching sessions, individual student counseling and tutoring, and input into course or curriculum development. Thus, evaluation of teaching effectiveness should be based on several sources of data, such as student ratings of classroom and clinical teaching, peer evaluations, and the analysis of related materials.
Student Evaluation of Teaching
Despite various myths and dissenting opinions, student ratings of faculty teaching effectiveness, when properly obtained, are reliable and valid. Research findings drawn from more than 500 published studies spanning a 61-year period have dispelled most of the common myths (Bell, Miller, & Bell, 1984; Centra, 1982; Coleman & Thompson, 1987; Miller, 1987; Millman, 1981; Morton, 1987; University of Arizona, 1985; Witley, 1984). However, Miller cautioned that students can be expected to evaluate only four aspects of faculty teaching competence: the teaching method or the presentation of classroom content, the fairness of the faculty member in the evaluation process, faculty interest in the student, and faculty enthusiasm for the subject content. Students are not able to evaluate the accuracy of the content or the depth, scope, and sequence of the material presented. The latter are best evaluated by colleagues or experts in the field.
Selection of Student Evaluation Assessment Tool
Evaluation is a complex process. Thus, a perfect evaluation tool has not been designed, nor will it ever be, and there is as yet no universal criterion of effective teaching. However, some guidelines can be used in selecting or developing a tool to meet the needs of a particular institution. Marsh (1982, 1984) suggested that a tool for the evaluation of classroom teaching should be:
* Multidimensional. It should measure as many desirable behaviors as possible of an effective teacher. Marsh (1984) isolated nine distinct components of classroom teaching effectiveness that have been identified in both student ratings and faculty self-evaluation. Others have identified descriptors considered as important characteristics of an effective clinical teacher (Coleman & Thompson, 1987; Irby, 1978; Windsor, 1987).
* Reliable. A reliable tool minimizes random error in student ratings. One determination of reliability is coefficient alpha, which considers the relative agreement among different items designed to measure the same factor.
* Stable, with high correlation between end-of-term and retrospective ratings at various course levels and across different course types.
* Primarily an evaluation of the instructor who teaches a course, rather than the course that is taught.
* Relatively valid against faculty self-evaluation and against a variety of indicators of effective teaching.
* Relatively unaffected by variables hypothesized as potential biases, such as workload difficulty, prior subject interest, and expected grades. Even though these three variables have been identified as most influential in affecting students' ratings of faculty, each variable has only a small impact on student ratings (Marsh, 1982, 1984), accounting for less than 5% of the variance in any of the student evaluation factors.
* Seen to be useful by faculty as feedback about their teaching, by students for use in course selection, and by administrators for use in personnel decisions.
In addition, a desirable tool should be short (no longer than one page), taking no more than 10 to 15 minutes to complete, to avoid evaluation fatigue and boredom among students. It should be suited to computer tabulation and should have space for additional comments to cover areas not addressed on the form.
Each institution could develop its own tool using a pool of items generated by various studies, or use commercial forms such as the Educational Testing Service (ETS) Student Instructional Report (SIR), the Instructional Development and Effectiveness Assessment system (IDEA) (Centra, 1982), or the Students' Evaluation of Educational Quality (SEEQ) (Marsh, 1982, 1984).
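Coefficient alpha, mentioned above as one determination of reliability, can be computed directly from a matrix of item responses. The following sketch is illustrative only: the ratings are invented, and the function is not drawn from any of the instruments cited.

```python
# Illustrative computation of coefficient (Cronbach's) alpha for a set of
# rating-form items. Each row is one student's responses; each column is
# one item intended to measure the same factor.
# alpha = (k / (k - 1)) * (1 - sum(item variances) / variance of totals)

def cronbach_alpha(responses):
    """responses: list of rows, each a list of item scores."""
    k = len(responses[0])                      # number of items
    totals = [sum(row) for row in responses]   # each student's total score

    def variance(values):
        mean = sum(values) / len(values)
        return sum((v - mean) ** 2 for v in values) / (len(values) - 1)

    item_vars = [variance([row[i] for row in responses]) for i in range(k)]
    return (k / (k - 1)) * (1 - sum(item_vars) / variance(totals))

# Example: four students rating three items on a 5-point scale.
ratings = [
    [4, 5, 4],
    [3, 3, 3],
    [5, 5, 4],
    [2, 3, 2],
]
print(round(cronbach_alpha(ratings), 2))  # → 0.96
```

A value near 1.0, as in this invented example, indicates that the items agree closely with one another; low values suggest the items are not measuring a single underlying factor.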
Process of Evaluation by Students
If ratings are to be used for tenure, promotion, or salary decisions, the process of collecting evaluation data should be standardized to avoid disparity among faculty members. The frequency of evaluation, as well as whether it is voluntary or compulsory, should be decided by all involved. In general, evaluation is done at the end of the course or (in team teaching) at the end of the content component taught by the faculty member being evaluated. The forms should be anonymous and should be administered by someone other than the involved faculty member, since student ratings are influenced by the presence of the instructor in the classroom (Centra, 1982; Coleman & Thompson, 1987). Instructions for administering the form should be provided in writing and clearly explained before the form is completed. Students should be encouraged to complete the form individually, as accurately as possible, and to make use of the comments section. All completed or incomplete forms should be collected, sealed in an envelope in the presence of the students, and delivered to the designated person who summarizes the results; the summary should be completed within 1 or 2 weeks after the forms are administered. Delay in tabulating the results and providing feedback to the faculty would weaken the system. The original data should be stored for verification, and the method of their eventual disposal should be decided by all involved.
To be valid, evaluation should be based on data collected from at least five courses taught by a given teacher in different semesters. If a course enrolls fewer than 10 students, data should be collected from more courses (Centra, 1982). The proportion of a class that rates an instructor is also important: if fewer than two thirds of the enrolled students in a course respond, the results may not represent the reaction of the entire class, and bias may be present.
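As a rough illustration, the sampling guidelines just described (at least five courses, more when classes are small, and a two-thirds response rate) can be expressed as a simple check. The function name, the exclusion of low-response courses, and the choice to require one extra course when a small class is involved are assumptions made for illustration, since Centra (1982) does not prescribe exact rules.

```python
# Illustrative check of the adequacy guidelines summarized above:
# ratings from at least five courses (one more, by assumption, when any
# class is small) and a response rate of at least two thirds per course.
from fractions import Fraction

MIN_COURSES = 5
SMALL_CLASS = 10              # enrolments below this call for extra courses
MIN_RESPONSE = Fraction(2, 3)

def ratings_adequate(courses):
    """courses: list of (enrolled, responded) tuples, one per course."""
    usable = [
        (enrolled, responded)
        for enrolled, responded in courses
        if Fraction(responded, enrolled) >= MIN_RESPONSE
    ]
    # Assumption: one extra course is required when any usable class is small.
    required = MIN_COURSES + (1 if any(e < SMALL_CLASS for e, _ in usable) else 0)
    return len(usable) >= required

# Six courses, one of them small (8 students), all with adequate response.
print(ratings_adequate([(30, 25), (28, 20), (40, 30), (8, 7), (25, 18), (32, 24)]))  # → True
```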
Peer Assessment of Teaching
Colleagues play an important part in evaluating faculty performance for promotion and tenure purposes. It is generally acceptable for colleagues to assess the accuracy, depth, and scope of course content, and the quality of research related to teaching. This assessment can be done outside of the classroom if materials are available. However, peer assessment based on classroom visitation should be used with caution because it may be distorted by mutual backscratching or by professional jealousy. Colleague rating is usually more generous and is affected by office location, degree of friendship, and popularity of the faculty member, as well as by sexual and racial bias. It would not be statistically reliable unless several visits to each class were made by at least a dozen colleagues, a time investment that many faculty members would be unwilling or unable to make. Therefore, peer ratings based primarily on classroom observation are not sufficiently reliable for making tenure, promotion, and salary decisions. They can be used, however, for faculty development (Centra, 1982; Bell et al., 1984).
Other Evidence Substantiating Teaching Effectiveness
In addition to evaluation of teaching by students and colleagues, faculty members could submit, whenever possible, the following items to make the best possible case for teaching effectiveness and to support self-evaluation and improvement (Shore et al., 1986). These include, but are not limited to:
* Course materials prepared for students, assignments, tests, and examinations.
* Students' performance on national exams, their publications and creative works, honours, awards, and career choices.
* Evidence of innovative teaching methods, involvement in scholarly work and research about teaching, and participation in workshops or seminars to improve teaching.
* Information from students, their parents, their employers, and others on how well students have been prepared.
* Invitations to teach for outside agencies, contributions to the teaching literature, and recognition by students and others.
* Any other teaching-related activity deemed important and evaluable by the individual and the institution.
Strategies to Decrease Resistance to Evaluation of Teaching
Evaluation of faculty teaching has been and will continue to be a sensitive issue. If student ratings of teaching competence are so well supported by research findings, why then are they so controversial and so widely criticized?
One of the reasons for the highly emotional reaction to evaluation is that university faculty have little or no formal training in teaching, in evaluation of student learning, and in curriculum development, yet they find themselves in a position where their salary or even their jobs may depend on their teaching skills. The threat is further exacerbated by the lack of clearly defined criteria of effective teaching.
To reduce resistance to evaluation, it is suggested that:
* A faculty development program should be in place to help both new and experienced faculty improve or revitalize their teaching skills.
* Consultation with capable faculty members or with instructional experts should be available.
* Faculty workload and expectations should be as clearly defined as possible.
* Faculty should be encouraged to have a systematic voice in the interpretation of their student ratings.
* Faculty should have input in the design, development, implementation, and modification of the evaluation system.
* Students should be adequately oriented to the process of evaluation, with emphasis on objectivity and responsibility.
Even though student ratings are generally supported by research, they have limitations. For example, class size and the subject area of the course may affect ratings: a small class generally rates a faculty member higher than a larger class taught by the same person. Furthermore, the numerical responses on an evaluation instrument should be interpreted with care (Centra, 1982). Because the results are quantified, it is easy to assign the numbers a precision they do not possess, and small variations between teachers should not be over-interpreted. Student ratings should neither outweigh other criteria nor be considered in isolation from the total picture when decisions are made.
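One way to see why small variations should not be over-interpreted is to place a confidence interval around each mean rating. The example below, with invented ratings on a 5-point scale, is a sketch of that idea rather than a procedure taken from the cited sources.

```python
# Illustrative point: an approximate 95% confidence interval around a
# class's mean rating is often wider than the gap between two
# instructors' means, so a small difference carries little precision.
import math

def mean_ci(ratings, z=1.96):
    """Return (lower, upper) bounds of an approximate 95% CI for the mean."""
    n = len(ratings)
    mean = sum(ratings) / n
    sd = math.sqrt(sum((r - mean) ** 2 for r in ratings) / (n - 1))
    half = z * sd / math.sqrt(n)
    return mean - half, mean + half

a = [5, 4, 4, 5, 3, 4, 5, 4, 4, 4]   # invented ratings, mean 4.2
b = [4, 4, 4, 5, 3, 4, 4, 4, 4, 4]   # invented ratings, mean 4.0
a_lo, a_hi = mean_ci(a)
b_lo, b_hi = mean_ci(b)
# The two intervals overlap, so the 0.2 difference in means is not
# meaningful evidence that one instructor is rated better than the other.
print(a_lo < b_hi and b_lo < a_hi)  # → True
```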
In conclusion, evaluation of faculty teaching effectiveness will remain an important part of overall faculty evaluation, and a sensitive, controversial issue. It is much more difficult than the evaluation of research and scholarship. However, if the various sources of evidence and the processes outlined above are used, effective evaluation of teaching is possible. These measures provide more systematic and open procedures for collecting objective information for decision making than the hearsay, rumors, and subjective observations of a few. Therefore, they are generally supported by teachers' unions (Newstead, 1989).
To reduce resistance to evaluation, several strategies were suggested. Thus, not only is the faculty member accountable for improving teaching and learning, but so are the students and the institution itself.
- Bell, D.F., Miller, R.I., & Bell, D.L. (1984). Faculty evaluation: Teaching, scholarship, and services. Nurse Educator, 18-27.
- Centra, J.A. (1982). Determining faculty effectiveness. San Francisco: Jossey-Bass.
- Coleman, E. A., & Thompson, P.J. (1987). Faculty evaluation: The process and the tool. Nurse Educator, 12(4), 27-32.
- Irby, D.M. (1978). Clinical teacher effectiveness in medicine. Journal of Medical Education, 53, 808-815.
- Marsh, H.W. (1982). SEEQ: A reliable, valid, and useful instrument for collecting students' evaluations of university teaching. British Journal of Educational Psychology, 52, 77-95.
- Marsh, H.W. (1984). Students' evaluations of university teaching: Dimensionality, reliability, validity, potential biases, and utility. Journal of Educational Psychology, 76, 707-754.
- Miller, R.I. (1987). Evaluating faculty for promotion and tenure. San Francisco: Jossey-Bass.
- Millman, J. (Ed.) (1981). Handbook of teacher evaluation. Beverly Hills, CA: Sage Publications.
- Morton, P.G. (1987). Student evaluation of teaching: Potential and limitations. Nursing Outlook, 35(2), 86-88.
- Newstead, S.E. (1989). Staff evaluation in American universities. Psychologist, 3, 95-97.
- Shore, B.M., Foster, S.F., Knapper, C.K., Nadeau, G.G., Neill, N., & Sim, V.M. (1986). The teaching dossier: A guide to its preparation and use. Canadian Association of University Teachers.
- University of Arizona Instructional Research and Development. (1985). Student myths vs. research facts. Note to the Faculty, (16), 1-4.
- Windsor, A. (1987). Nursing students' perceptions of clinical experience. Journal of Nursing Education, 26, 150-154.
- Witley, J.S. (1984). Are student evaluations constructive criticism? Community and Junior College Journal, 54, 41-42.