Journal of Nursing Education


Relationships Between Faculty Evaluations and Faculty Development

Warren E Lacefield, PhD; Richard D Kingston, PhD, DMD

Abstract

Past research indicates weaknesses in typical approaches to the utilization of faculty evaluation information to improve faculty teaching effectiveness. This paper describes an alternative faculty development method relying on workshop formats and tested training techniques to facilitate the development of specific teaching skills. An empirical study of student evaluation of teaching data obtained prior and subsequent to faculty development interventions demonstrates that significant impact on faculty performance as teachers can be produced by this approach.


Ever since the word "accountability" found its way into the world of university teaching, ways have been sought to quantify the efforts of higher education faculties. Though the tradition of the American university dictates simultaneous responsibilities in research, service, and teaching, efforts to evaluate these diverse activities have met with variable results. Research seems to cause the least trouble. The number of published articles, the quality of the publications in which they appear, and the significance of the research are all scrutinized by administrators and promotion and tenure committees who feel they have the experience to evaluate research efforts. Service, too, can be measured by the number of days spent as consultant, lecturer, committee member, or in public service roles. However, when the subject to be evaluated is "teaching," the issue becomes clouded by suspicious cries for academic freedom mixed with liberal doses of inadequate and often unvalidated evaluation methods.

While it may be tempting to link promotion, tenure, and merit decisions to the evaluation of a faculty member's teaching skills, without valid instruments capable of correlating teaching skills with measures of student learning, administrators find themselves on thin ice. Faculty evaluation, therefore, is currently justified primarily as a means of assessing teaching effectiveness, so that teachers may be provided, individually and collectively, with the information necessary to identify existing deficiencies and improve their teaching skills.

The most popular source of information for the evaluation of teaching performance has been the student body. A plethora of methods and forms has been developed to solicit information from students regarding teaching skills. Recent research, however, indicates that feeding student-generated information back to the faculty has little effect on the improvement of their teaching skills. Kulik and Kulik (1974) and Kulik and McKeachie (1975) found very little improvement in instruction as a result of such feedback. Other studies by Pambookian (1974); Marsh, Fleiner, and Thomas (1975); and Miller (1971) bear out the suspicion that the hope of effecting self-improvement through recycling student-generated evaluation has not yet been realized.

This paper discusses remediation methods which have proven effective in analyzing and utilizing evaluation data to improve faculty teaching skills. In addition, this paper reports the empirical findings of a research project, involving a college of nursing faculty, designed to assess the impact of a specific faculty development training program on subsequent faculty evaluations by students.

Faculty Development

Reviewing ongoing faculty evaluation and development activities, the state of the art with regard to evaluation appears in advance of that associated with remediation and enrichment activities. Since there is clear evidence that the simple process of sharing evaluation data with faculty members has little effect on improvement of teaching skills, alternative methods are indicated. Empirical data from needs assessment surveys, trained observers in college classrooms, and other sources have been utilized successfully to identify those observable teacher tactics, behaviors, and skills which facilitate or detract from learning environments. Efforts to assist faculty to recognize these specific behaviors in themselves, extinguish negative behaviors, and develop positive ones require substantially more than an evaluation feedback system. The method described in this article outlines one approach to bridging the gap between faculty evaluation and faculty development based on a diagnostic/prescriptive model.

The Center for Learning Resources, College of Allied Health Professions, University of Kentucky, has been involved in the preparation and further training of health care educators since 1970. In 1975, with initial support from grants provided by the W. K. Kellogg Foundation, the Center undertook a major Extramural Teaching Training project, involving the development and production of instructional materials and videotape role model dramatizations designed for use in faculty development workshop formats for the health and medical professions.

Since this beginning, an implementation system of national and international scope has been developed. The Teaching Improvement Project System (TIPS) has now established 17 permanent regional sites throughout the United States, and additional training centers are developing in Central and South America. Each site has assumed a regional responsibility for further preparation of teachers in health and medical fields and is utilizing materials and workshop formats developed by the project. The instructional modules fall into three categories: 1) preparing for instruction, 2) presentation methods, and 3) evaluation techniques.

To determine specific content and activities to cover these areas, materials developers and workshop faculty focused on day-to-day, usable skills rather than fundamentals, principles, and theory. Specifically, those skills include:

Organizational Skills: The first self-instructional packages and workshop activities that were developed were designed to assist faculty to perform task analysis and content analysis. These instructional units deal with the skills necessary to analyze content and organize it in step-by-step sequences. The content analysis process is useful for subject matters in the psychomotor, cognitive, and affective domains. Materials also point out the value of this process in preparing, presenting, and evaluating instruction.

The second instructional unit, with attendant workshop activities, was developed to explain the preparation of instructional objectives. While much information is available about writing instructional and behavioral objectives, an analysis of the project's target audience (university and hospital teachers with little or no previous formal training in education) indicated a need for a simple, jargon-free approach that places equal emphasis on how to write objectives and on how and why they are useful and beneficial.

Additional organizational materials were developed to communicate specific skills in lesson planning strategy for both demonstrations and lectures. The emphasis in these materials and activities covers the skills of establishing an instructional set, organizing the content area, and providing an effective instructional closure.

Interpersonal Communication Skills: Throughout the materials and workshop activities which deal with interpersonal skills, there is an emphasis on the value of student involvement and the techniques of establishing and maintaining two-way communications in classroom and laboratory settings. When used in workshop formats, discussions and viewing videotapes allow participants to analyze the following observable techniques and behaviors:

A. Teacher Tactics

The tactics that are discussed deal with involvement methods to provide reinforcement and feedback to students. This overview approach allows for discussion of the rationale for and benefits of student involvement in classroom activities and leads up to the following specific content areas.

B. Questioning Techniques

Specific questioning skills are covered using role model dramatizations and videotapes, workshop faculty role modeling, and discussion. These skills include: planning eliciting questions according to divergent or convergent strategies; planning eliciting questions at the appropriate cognitive level (knowledge, application, and problem solving); developing probing techniques through the use of prompting, extending, clarifying, justifying, or redirecting probes; the questioning dynamics of when and with whom to use questioning; wait time; and listening skills.

C. Teacher Behaviors

Much research has been done linking teacher enthusiasm with student learning (Collins, 1977). However, in its broadest sense, "enthusiasm" is difficult to remediate when it seems to be lacking. Therefore, researchers have attempted to identify the specific observable behaviors which manifest enthusiasm or lack of it. Again through the use of role model videotapes and large and small group discussions, the specific positive and negative behaviors most commonly identified as facilitating and detracting from the learning environment are discussed. Specifically these are: eye contact, voice, gestures, facial expression, the use of silence, and body movement in the classroom.

Evaluation Skills: As with the other modules in this series, it was considered inadequate to provide self-instructional materials alone to assist in the development of evaluation skills. Such development is facilitated significantly by large and small group activities. Specific areas discussed are performance evaluation and written test construction. For the former, emphasis is placed on the observation and objective evaluation of psychomotor skills. For the latter, a variety of methods and techniques (e.g., blueprinting exercises) are presented to assist faculty to prepare instruments that will measure learning outcomes at the appropriate cognitive levels and will cover subsets of information in appropriate proportions. Practice activities also include item writing skills and analysis of test data, as sketched below.
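The kind of test-data analysis practiced in these workshops can be made concrete with a small example. The following is a minimal sketch (the data and names are hypothetical, not drawn from the TIPS materials) of two standard statistics from classical item analysis: item difficulty and item discrimination.

```python
import numpy as np

# Hypothetical scored test data: responses[s][i] = 1 if student s
# answered item i correctly, 0 otherwise.
responses = np.array([
    [1, 1, 0, 1, 1],
    [1, 0, 0, 1, 0],
    [1, 1, 1, 1, 1],
    [0, 1, 0, 0, 1],
    [1, 1, 0, 1, 0],
])

totals = responses.sum(axis=1)        # each student's total score

# Difficulty: the proportion of students answering each item correctly.
difficulty = responses.mean(axis=0)

# Discrimination: point-biserial correlation of each item with total score
# (uncorrected; high values suggest the item separates strong from weak students).
discrimination = np.array([
    np.corrcoef(responses[:, i], totals)[0, 1]
    for i in range(responses.shape[1])
])

print("difficulty:", difficulty)
print("discrimination:", discrimination)
```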

Microteaching

The most beneficial part of the faculty development programs described above has been the use of microteaching. After two or three days in faculty development workshops, viewing videotapes, participating in small and large group discussions, listening to short lecture presentations, and evaluating the workshop faculty's role modeling, participants not only develop an awareness of appropriate teaching techniques and behaviors but also become skilled and discriminating observers and evaluators of effective teaching. Once this discrimination training has been accomplished, individuals who see themselves on videotape in carefully prepared and controlled microteaching situations are able to self-evaluate their performance in prescribed areas and often report that the microteaching activity has been the most beneficial aspect of the program. Follow-up studies indicate that these skills and techniques are being integrated by workshop participants into their day-to-day classroom activity. There is also considerable feedback indicating that participants are serving as trained observers in the classrooms of colleagues who have not been through similar workshop and microteaching activities.

The North Dakota Experience

Even though a wide variety of development workshops and self-instructional activities are available, they often are not related directly to faculty evaluation activities. During the academic year 1977-1978, the University of North Dakota College of Nursing, in cooperation with the Center for Learning Resources at the University of Kentucky, began a project which combined the utilization of student-completed faculty evaluation instruments and the TIPS faculty development program. The faculty evaluation instrument was administered to College of Nursing students at the end of the fall semester. The same instrument was also completed by the faculty to secure self-evaluation data. During the winter semester, faculty development workshops were conducted involving all members of the College of Nursing faculty. At the completion of the spring semester, the faculty evaluation was again administered.

The faculty evaluation system used was the Faculty Enrichment and Assessment of Teaching (FEAT) system developed by the College of Allied Health Professions at the University of Kentucky (Lacefield, in press). FEAT has been utilized extensively at the University of Kentucky and at several other mid-eastern colleges and universities. The current data base for normative and other research purposes includes over 6,000 student and faculty responses collected in more than 410 classroom situations.

Not all of the scales of the FEAT system were utilized for purposes of this study. The TIPS workshops are directed toward the development of specific teaching skills. The scales of FEAT, on the other hand, measure broad qualities and characteristics of an extended instructional experience. It would be unreasonable to expect much change at this level as a result of the workshop until the participants had time and opportunity to generalize their learning experiences. It is reasonable, however, to expect that instructors may change their behavior patterns quite rapidly in the specific teaching areas focused on by the developmental workshop, and that within one semester these changes would be reflected in student perceptions bearing specifically on those particular instructional phenomena. Therefore, in consultation with the workshop materials developers and the workshop faculty, each FEAT item was examined relative to the content areas covered during the workshop. The items were grouped logically into four categories: 1) items dealing directly or indirectly with aspects of instructional organization emphasized during the workshop; 2) items pertaining to aspects of instructional presentation; 3) items dealing with evaluation practices; and 4) all other items of the instrument related to the general quality of instruction in the classroom but not specifically addressed within the workshop. The structure of the first three item groupings or subscales, together with all relevant summary statistics emerging from the study, is shown in Tables 1-3.

Each FEAT item refers to some specific instructional event or process characteristic. In total, the items functionally define what the instrument developers and users consider to be high-quality instruction. Students completing FEAT were asked to indicate on a 1-5 scale their perception of the descriptiveness of each item as it related to their instructional experiences with specific courses and faculty. The item scale categories are: 1) Not at All Descriptive; 2) Descriptive to a Small Extent; 3) Descriptive to a Moderate Extent; 4) Descriptive to a Large Extent; and 5) Descriptive to an Extremely Large Extent. These categories semantically anchor the scale for the measurements in the Tables.
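To make the measurement concrete, here is a minimal sketch of how such 1-5 ratings aggregate into the classroom-level scores analyzed below. The data layout and names are illustrative assumptions only, not part of the FEAT system.

```python
# ratings[s][i] = student s's 1-5 rating of item i for one course/instructor.
ratings = [
    [4, 5, 3, 4],   # student 1
    [3, 4, 4, 4],   # student 2
    [5, 5, 4, 3],   # student 3
]

n_items = len(ratings[0])

# Classroom mean for each item: average that item's ratings over all students.
item_means = [sum(r[i] for r in ratings) / len(ratings) for i in range(n_items)]

# Classroom score on a subscale: average of the subscale's item means.
classroom_mean = sum(item_means) / n_items

print(item_means, classroom_mean)
```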

TABLE 1. ORGANIZATION

The unit of analysis for the present study is the mean classroom score on items, subscales, and the instrument as a whole. Using the FEAT data base, reliabilities (Cronbach's alpha) for classroom means on the artificial subscales were computed and were in the neighborhood of .90. Twenty-nine instructors participated in the study, and the FEAT instrument was completed by students in courses taught in the fall semester of 1977 and the spring semester of 1978. The same instructors also participated in the TIPS workshops during the spring semester. More time between the workshops and the student evaluations of instruction would have been desirable. It is felt, however, that there was adequate time and opportunity after the workshops for their effects to have noticeable impact on subsequent classroom instruction and student perceptions.
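For readers unfamiliar with the reliability coefficient cited above, the sketch below computes Cronbach's alpha from a (classrooms x items) score matrix using the standard formula. The data and names are hypothetical; this is not the FEAT computation itself.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix."""
    k = scores.shape[1]                          # number of items in the subscale
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical example: 29 classrooms scored on a 7-item subscale.
rng = np.random.default_rng(0)
quality = rng.normal(3.5, 0.5, size=(29, 1))           # classroom-level signal
subscale = quality + rng.normal(0, 0.3, size=(29, 7))  # 7 correlated item scores
print(cronbach_alpha(subscale))                        # high alpha: consistent items
```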

Tables 1-3 report the means and standard deviations of the 29 classroom means for each item on the three selected subscales (Organization, Presentation, and Evaluation) for the fall and spring administrations. The common element between fall and spring is, of course, the instructor; course contents varied. Tables 1-3 also report the mean difference between spring and fall classroom means for each item, together with its standard deviation. An examination of these means shows in every case the expected trend toward improved ratings in the spring. This trend is most apparent for items dealing with more-or-less factual content (e.g., items 14, 24, 46, 40, 59, 39, 50) and is less apparent for items whose content is more impressionistic (e.g., items 48, 16, 22, 52, 15).

TABLE 2. PRESENTATION

Tables 1-3 further report the correlation between fall and spring classroom mean ratings and the t-test for the significance of the difference scores (H₀: μ_diff = 0). In most cases, the correlations were moderately high, as expected. (In several instances a low correlation was obtained due to highly skewed distributions of classroom means and correspondingly little room for variation.) Also, in most cases the mean difference was significant at the α = .10 level for H₁: μ_diff ≠ 0 and at the α = .05 level for H₁: μ_diff > 0. Again, the most significant gains were associated with items with factual content and, in particular, items with content specifically emphasized during the TIPS workshops (e.g., items 14, 31, 10, 40, 50).
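The item-level analysis reported here amounts to a standard correlated (paired) t-test on fall versus spring classroom means, plus a fall-spring correlation. A sketch follows; the data are simulated, and scipy stands in for whatever software the authors actually used.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated fall and spring classroom means for one item, 29 classrooms.
fall = rng.normal(3.6, 0.4, size=29)
spring = fall + rng.normal(0.15, 0.2, size=29)   # simulated post-workshop gain

# Fall-spring correlation of classroom means.
r, _ = stats.pearsonr(fall, spring)

# Two-sided test of H0: mu_diff = 0, and one-sided test of H1: mu_diff > 0.
t_two, p_two = stats.ttest_rel(spring, fall)
t_one, p_one = stats.ttest_rel(spring, fall, alternative="greater")

print(f"r = {r:.2f}, t = {t_two:.2f}, "
      f"two-sided p = {p_two:.3f}, one-sided p = {p_one:.3f}")
```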

Combining the item groups into subscales, four more global variables were constructed. The fourth variable, called Other, represents the average of the classroom means for the remaining FEAT items not included in any of the first three subscales. This variable can be considered the overall student perception of the quality of instruction in the classroom. In view of the manner by which items were grouped and subscales constructed, it is not surprising that these four variables were rather highly intercorrelated. Table 4 reports the summary statistics for the classroom subscale mean scores. The set of classroom subscale mean difference scores between spring and fall was analyzed using Hotelling's multivariate T²-test. The T² statistics were converted into F statistics, and the multivariate, univariate, and step-down F-ratios are given in Table 4 (Tatsuoka, 1971).
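Hotelling's T² for a vector of paired difference scores, and its exact conversion to an F statistic, follow the standard formulas found in texts such as Tatsuoka (1971). Here is a sketch with simulated data; the function name and the data are illustrative only.

```python
import numpy as np
from scipy import stats

def hotelling_t2_one_sample(d: np.ndarray):
    """One-sample Hotelling T^2 test of H0: mean difference vector = 0.

    d is an (n x p) matrix of difference scores - here, spring minus
    fall on the four subscales for each of the n classrooms."""
    n, p = d.shape
    d_bar = d.mean(axis=0)                       # mean difference vector
    S = np.cov(d, rowvar=False)                  # sample covariance matrix
    t2 = n * d_bar @ np.linalg.solve(S, d_bar)   # T^2 = n * d_bar' S^-1 d_bar
    f = (n - p) / (p * (n - 1)) * t2             # exact F(p, n - p) conversion
    return t2, f, stats.f.sf(f, p, n - p)

# Simulated example: 29 classrooms x 4 subscale difference scores.
rng = np.random.default_rng(2)
diffs = rng.normal(0.15, 0.25, size=(29, 4))
t2, f, p_value = hotelling_t2_one_sample(diffs)
print(f"T2 = {t2:.2f}, F = {f:.2f}, p = {p_value:.4f}")
```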

TABLE 3. EVALUATION

TABLE 4

The multivariate analysis tests the hypothesis that the grand mean of the difference scores, jointly over the four variables, equals zero. Table 4 indicates that this hypothesis can easily be rejected and that the alternative - namely, that significant gains occurred in student perceptions of the quality of the instruction experienced - may reasonably be accepted. The univariate tests indicate that significant gains occurred individually for the variables Organization, Presentation, and Evaluation. Although these variables were correlated with each other and with the variable Other, the step-down F for Other, after the effects of the previous variables had been removed, was nonsignificant. These findings empirically support the contention that significant gains would be noted specifically in the teaching areas concentrated upon by the TIPS developmental workshops.

Summary and Conclusions

The review of research in this article argues that faculty evaluation activities which merely provide raw data to faculty members go perhaps only halfway toward assisting those persons to improve their teaching skills. The experience of the TIPS workshops, however, shows that significantly greater impact on teaching skills, as measured by student evaluations, can be achieved when evaluation instruments are used in a diagnostic/prescriptive manner and are linked to faculty development activities designed around practical, day-to-day teaching skills. The workshops increase faculty awareness of facilitating techniques and skills. Moreover, they provide discrimination training that prepares faculty to observe, critique, and continually upgrade themselves and their colleagues in their teaching roles. These effects can be linked directly to altered student perceptions of classroom climate.

References

  • Collins, M.L. The role of enthusiasm in quality teaching. In Proceedings of the Third International Conference on Improving University Teaching. London: The City University, London, and the University of Maryland, 1977.
  • Kulik, J.A., and Kulik, C.C. Student ratings of instruction. Teaching of Psychology, 1974, 1(2), 51-57.
  • Kulik, J.A., and McKeachie, W.J. The evaluation of teachers in higher education. In Kerlinger, F.N. (Ed.), Review of Research in Education (Vol. 3). Itasca, Illinois: Peacock, 1975.
  • Lacefield, W.E. Faculty enrichment and assessment of teaching (FEAT). Journal of Allied Health (in press).
  • Marsh, H.W., Fleiner, H., and Thomas, C.S. Validity and usefulness of student evaluations of instructional quality. Journal of Educational Psychology, 1975, 67, 833-839.
  • Miller, M.T. Instructor attitudes toward and their use of student ratings of teachers. Journal of Educational Psychology, 1971, 62, 235-239.
  • Pambookian, H.S. Initial level of student evaluation of instruction as a source of influence on instructor change after feedback. Journal of Educational Psychology, 1974, 66, 52-56.
  • Tatsuoka, M.M. Multivariate Analysis. New York: John Wiley and Sons, 1971.


DOI: 10.3928/0148-4834-19830901-04
