Journal of Nursing Education

Selecting a Model for Use in Curriculum Evaluation

Joan Ediger, RN, MS; Mariah Snyder, RN, PhD; Sheila Corcoran, RN, MEd

Introduction

When the issue of curriculum evaluation is brought up at faculty meetings, a groan is often heard throughout the group, or there is a quick change of subject. Faculty wonder "Can we do it?" or "Where do we start?" We have found that evaluation need not be an overwhelming or a negative task. There is a growing body of literature which describes models for educational evaluation. Nursing faculty are finding these models useful guides for planning and conducting curriculum evaluation. Because so many models are available, a first step is the selection of a model which is appropriate for the proposed evaluation. Our purposes in this article are to describe the value of using a model in curriculum evaluation and to present guidelines for selecting an appropriate model.

Evaluation, an essential component of all educational programs, is commonly defined as a process of identifying and collecting information to assist decision makers in choosing among available decision alternatives (Worthen & Sanders, 1973). Within nursing education there are many types of evaluation. In this article we address curriculum evaluation as distinct from program, student, and course evaluation. Curriculum evaluation - a subset of program evaluation - examines the philosophy, the educational goals, and the learning experiences for a specific course of study. Course evaluations comprise only one segment of curriculum evaluation. For the purposes of this article we confine evaluation to that which is initiated by faculty (in contrast to evaluation mandated by external bodies, e.g., NLN accreditation).

Evaluation Models

Evaluation models assist faculty in the process of curriculum evaluation. Green and Stone (1977) define an evaluation model as "an analytical plan or framework which guides thought or structures the universe comprising the field in which the evaluator functions" (p. 36). In contrast, a design is the plan for collecting the information indicated by the chosen model. The following are some of the advantages of using a model:

A. A model provides direction;

B. It indicates parameters for the evaluation;

C. It supplies a systematic approach;

D. It specifies relationships of parts.

Since evaluation includes data collection, judgments, and decisions, it is particularly important to distinguish between the concepts of judgment and decision in order to understand differences among the types of evaluation models. Judgments involve conclusive statements about the data that have been collected, such as "the observed outcomes do match objectives a, b, and c" or "most of the intended transactions and processes are actually carried out." Judgments are similar to diagnoses; they serve as input data for decision making. Decisions are final choices made from available alternatives. Examples of decisions might include "This curriculum will be continued" or "Efforts will be made to improve the curriculum in this area." Decisions are usually few in number and are the output of the evaluation process.

Worthen and Sanders (1973) propose three major classifications of curriculum evaluation models: (1) judgment-strategy models, (2) decision-management models, and (3) decision-objectives models. In judgment-strategy models the evaluator makes judgments on the collected data; those judgments are presented to the decision makers. Examples of judgment-strategy models are those of Stake and Scriven. In contrast, the role of the evaluator in decision-management models is to gather data and describe the situation to the decision makers; both the judgments and the decisions are made by the decision makers, not the evaluator. The Stufflebeam model is an example of a decision-management model. The decision-objectives models developed by Tyler, Provus, and Hammond do not specify the role of the evaluator in judgment making. The distinctive feature of models in this classification is that decisions are based solely on whether stated objectives are achieved. While the three types of models differ, all serve the primary purpose of evaluation: to assist decision makers in choosing among available decision alternatives.

Faculty knowledgeable in evaluation may elect to develop a model, to modify an existing model, or to combine features from several models. Those with less expertise or time may prefer to use a model which is already developed. It is important that time and thought be devoted to choosing or developing a model.

Although evaluation models provide convenient conceptual guides for evaluation procedures, it must be recognized that research needs to be done to provide empirical evidence on the utility of various models and the basic assumptions within the models themselves (Smith & Murray, 1975; Smith, 1980).

Selecting a Model

The evaluation model selected for use in a particular situation should be appropriate for that situation. Before selecting a model, the following questions should be answered:

A. What decisions are to be made as a result of this evaluation?

B. What data are needed to make the decisions?

C. What will be the roles of persons who are involved in various aspects of the evaluation process?

As implied in the first question, the model chosen depends upon the decisions that are to be made as a result of the evaluation process. Curriculum evaluation may be conducted for the purpose of making decisions related to continuation, improvement of the curriculum, establishment of curriculum priorities, or comparison of the curriculum with other similar curricula. Since different data are needed for each type of decision, the model selected must provide direction for collecting the appropriate data.

Second, if the decision to be made requires information only about the outcomes of the curriculum, then the model need only focus on outcomes. However, if the decision requires information about the ongoing activities as well as the outcomes of the curriculum, then the model should include both. For example, a decision about how to improve a curriculum usually requires information about both ongoing activities and outcomes. One would not choose a decision-objectives model because it would limit data collection to outcomes.

Third, it is important to consider the roles of the persons who are involved in the evaluation process. The role of the evaluator may be simply to collect data and submit them to the decision makers. Or it may be to collect data and to make judgments about the program. In the latter case, the evaluator would submit both the data and the judgments to the decision makers. For example, if the faculty want to decide whether the present component of the curriculum dealing with nursing process should be retained, they would need to know whether graduates are able to use the nursing process as defined by the faculty. An evaluator could collect data about graduates' use of nursing process; then the faculty could make a judgment about whether standards they have defined are being met by graduates. Based on this judgment, they could decide whether to retain the current component of the curriculum. This approach to evaluation follows a decision-management model. A different option would be for the evaluator to collect data about graduates' use of nursing process and to make a judgment about whether the data matched a standard identified by the faculty. This approach follows a judgment-strategy model. In this case the evaluator might make judgments about the worth of the intended outcome or the standard. The evaluation may be conducted totally by the faculty, or it may be conducted in part by persons external to the program.

The answers to the three questions may point to one or more models as the appropriate one(s) to be used. It is important that the model chosen be congruent with the philosophy, knowledge, and abilities of the persons using the model for evaluation.

Example of an Evaluation Model

The following example illustrates the selection, use, and value of a model for curriculum evaluation. We were recently asked to serve as evaluation consultants to a faculty wishing to evaluate the curriculum of a baccalaureate nursing program for the purpose of improving the curriculum. The scope of the evaluation was to be the entire curriculum and not just the educational outcomes. We were asked to serve as external evaluators who would describe and judge the curriculum; the faculty would be the decision makers.

After examining with the faculty the three questions listed above, we reviewed various models of evaluation in order to select the most appropriate one. We identified the evaluation model developed by Stake (1973) as an appropriate one in that particular situation.

To understand why this model was selected and how it contributed to the evaluation process, it is necessary to have a clear picture of the model. It is a judgment-strategy evaluation model: the evaluator's judgment is a critical feature. An essential characteristic of the model is the distinction between the description of the program and the judgments made about it. Figure 1 is a representation of the components of the model. The representation includes a Rationale Cell and two data matrices: a Description Matrix and a Judgment Matrix.

FIGURE 1: REPRESENTATION OF THE COMPONENTS OF THE STAKE MODEL

The Rationale Cell includes data about the philosophic background and basic purposes of the program. It serves as a basis for evaluating other aspects of the program.

The matrices provide a format for data collection. They do not prescribe how to collect the data; rather, they indicate what data to collect and compare. The Description Matrix identifies the areas in which data are to be collected in order to describe the program fully. The Judgment Matrix identifies the data to be collected in order to have the standards and judgments to be used in decision making.

The Description and Judgment Matrices are each divided into three rows: Antecedents, Transactions, and Outcomes (Figure 1). Antecedents are any conditions existing before teaching and learning which may relate to outcomes; they represent the "entry behaviors." Transactions, the dynamic element of the model, are the many encounters which comprise the process of education: for example, encounters between student and teacher, student and student, student and client, and student and materials. Outcomes refer to the results of education - both immediate and long range. Outcome data include measurements such as student achievement, impact of instruction on teachers, wear and tear on equipment, and costs.

The Description Matrix is further divided into two columns: Intents and Observations (Figure 1). The Intents are the expected or desired antecedents, transactions, and outcomes of the program. For example, the desired entry skills of students are intended antecedents, the planned learning experiences are intended transactions, and the stated objectives are intended outcomes. The Observations refer to empirical evidence of the actual antecedents, transactions, and outcomes. Examples of such data include student scores on a preadmission test as an observed antecedent, types of assignments given to students as an observed transaction, or employer evaluation of the performance of graduates as an observed outcome.
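To make this structure concrete, the following is a minimal sketch of the Description Matrix as a data structure, written in Python purely for illustration. The class and field names are our own assumptions rather than terminology from Stake's model, and in practice each cell would hold far richer data than short statements.

```python
# A minimal sketch of the Description Matrix: two columns (Intents,
# Observations), each divided into the three rows of the model.
from dataclasses import dataclass, field

ROWS = ("antecedents", "transactions", "outcomes")

def _empty_rows() -> dict:
    return {row: [] for row in ROWS}

@dataclass
class DescriptionMatrix:
    intents: dict = field(default_factory=_empty_rows)       # expected or desired
    observations: dict = field(default_factory=_empty_rows)  # empirical evidence

# Example entries drawn from the text above.
dm = DescriptionMatrix()
dm.intents["antecedents"].append("Desired entry skills of incoming students")
dm.observations["antecedents"].append("Student scores on a preadmission test")
dm.intents["outcomes"].append("Stated objectives of the program")
dm.observations["outcomes"].append("Employer evaluations of graduates' performance")
```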

The divisions in the Description Matrix allow data within each cell to be examined separately and the relationships between data in two cells to be described. Two principal criteria are used in examining the relationships between data in the Description Matrix: contingency and congruency (Figure 2).

Contingency is the criterion used to evaluate the vertical relationships between cells. Within the Intents column the contingency is a logical one; there should be a logical connection between intended antecedents and intended transactions and between intended transactions and intended outcomes. Within the Observations column, the contingency is an empirical one; there should be empirical evidence that the observed outcome is related to the observed transaction and the observed transaction is related to the observed antecedent.

Congruency is the criterion used to evaluate the horizontal relationships between cells in the Description Matrix. The data are considered congruent if what was intended matches the observed data; however, the presence of congruence does not necessarily indicate that the observed data are valid or reliable.
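Continuing the illustrative sketch above, a congruency check might be organized as follows. The positional pairing of intents with observations and the function name are assumptions made for the example; in the model itself, deciding whether an observation matches an intent remains a human judgment, so the result field is deliberately left open.

```python
# A hypothetical congruency report over the Description Matrix. Each
# intent in a row is paired with the observation gathered for it; the
# "congruent" field is left as None because the match is judged by
# people, not decided by a mechanical comparison.
ROWS = ("antecedents", "transactions", "outcomes")

def congruency_report(intents: dict, observations: dict) -> dict:
    report = {}
    for row in ROWS:
        report[row] = [
            {"intent": i, "observed": o, "congruent": None}  # None = awaiting judgment
            for i, o in zip(intents.get(row, []), observations.get(row, []))
        ]
    return report

# Usage with the matrix sketched earlier:
# report = congruency_report(dm.intents, dm.observations)
```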

The Judgment Matrix is also divided into two columns: Standards and Judgments (Figure 1). A standard can be defined as a desired level of performance or quality; the standard becomes the basis against which the observations are compared. The faculty may set standards at different levels (i.e., absolute or relative) depending on their purposes for evaluation. A standard may be absolute, in which case an acceptable level for antecedents, transactions, and outcomes is specified; or it may be relative, in which case the acceptable level of performance on the criterion is reflected by the characteristics of other programs. For example, if creativity is being examined, an absolute standard would specify a minimum mean score on a Torrance creativity test; a relative standard for creativity might be the mean score achieved on a Torrance creativity test in another program, against which the mean score in the program being evaluated would be compared.

FIGURE 2: A REPRESENTATION OF THE PROCESSING OF DESCRIPTION DATA

Once the standards are established, judgments can be made. In this model, the evaluator judges whether the observed data about the curriculum exceed, achieve, or fail to meet the specified standard. Because the decision makers will still have to make a decision about the curriculum, they will need to determine how to combine the judgments in order to arrive at a decision.
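As an illustration of this judgment step, the sketch below classifies a measurable observation against a standard as exceeding, meeting, or failing it. The function name, the tolerance parameter, and the numbers in the usage lines are illustrative assumptions, following the creativity-score example discussed above.

```python
# A sketch of the Judgment Matrix step for a numeric outcome measure:
# classify an observed value against a standard. The tolerance lets
# "meets" cover small, agreed-upon deviations from the standard.
def judge(observed: float, standard: float, tolerance: float = 0.0) -> str:
    if observed > standard + tolerance:
        return "exceeds standard"
    if observed >= standard - tolerance:
        return "meets standard"
    return "fails to meet standard"

# Absolute standard: a minimum mean creativity score set by the faculty.
print(judge(observed=71.4, standard=70.0))  # -> exceeds standard
# Relative standard: the mean score achieved in a comparison program.
print(judge(observed=71.4, standard=73.2))  # -> fails to meet standard
```

The separate judgments produced this way are then the inputs the decision makers must weigh and combine; the model itself does not dictate how they are aggregated into a final decision.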

We found the judgment-strategy model developed by Stake to be satisfactory for use in evaluating this specific nursing curriculum. The model provided a framework for planning the evaluation design, facilitated a comprehensive evaluation of the curriculum, promoted objectivity in the evaluation process by specifying explicit standards, and was suitable for the evaluators and faculty.

The model provided a definite framework for planning the evaluation design. The matrices and cells identified in the model clarified what data needed to be collected. For example, the Rationale Cell included data about the program's philosophy and objectives. Such data were necessary for the initial phase of the evaluation design, which was to devise means for evaluating the philosophy and objectives of the nursing program and their relationship to one another. We also included the program's conceptual framework of nursing in the Rationale Cell because it, too, served as a basis for evaluating the rest of the program.

In addition to indicating the data to be collected, the model provided guidance for examining specific relationships of the data between cells. For example, tools were developed to evaluate congruence between the Rationale Cell and Intents, and between intended and observed outcomes of the curriculum.

The model was excellent in providing for a comprehensive evaluation of the curriculum. Since we wanted to evaluate the program's philosophy and objectives, the Rationale Cell was especially important. The Description and Judgment Matrices provided the scope of evaluation desired in this situation. Also, using the Stake model allowed the faculty to plan for evaluating outcomes of the curriculum as well as ongoing activities.

Because the model required that explicit standards be identified, it facilitated objectivity in the evaluation process. The faculty had to identify specifically what was considered to be acceptable performance by the sample of students for each intended outcome. Distinguishing between description and judgments tended to eliminate evaluator bias.

A characteristic of this particular project was the degree of faculty involvement in the evaluation process. The Stake model was especially useful because faculty at all levels of involvement in the project could readily understand the model and could communicate with other faculty and with us about it. Because the model divided the total evaluation task into smaller segments, the task could be shared by faculty and completed segments could be clearly identified. This allowed the faculty to identify the progress that had been made in completing the evaluation and to feel a sense of accomplishment.

Summary

In summary, our purposes have been to emphasize the importance of using a model in curriculum evaluation and to present guidelines for selecting an appropriate model. We have discussed our experience in selecting and using the Stake model only as an example. While a model does not eliminate all of the problems and frustrations of curriculum evaluation, it does make the task more manageable. It can also improve the quality of the evaluation and can even make curriculum evaluation enjoyable.

References

  • Green, J.L., & Stone, J.C. Curriculum evaluation. New York: Springer Publishing Co., 1977.
  • Smith, N. President's corner: Studying evaluation assumptions. Evaluation News, 1980, 14, 39-40.
  • Smith, N., & Murray, S. The status of research on models of product development and evaluation. Educational Technology, 1975, 15(3), 13-17.
  • Stake, R. The countenance of educational evaluation. In B. Worthen & J. Sanders (Eds.), Educational evaluation: Theory and practice. Worthington, Ohio: Charles A. Jones, 1973.
  • Worthen, B., & Sanders, J. (Eds.). Educational evaluation: Theory and practice. Worthington, Ohio: Charles A. Jones, 1973.

doi: 10.3928/0148-4834-19830501-04
