The Journal of Continuing Education in Nursing

Evaluation in Continuing Education: Is it Practical?

Betty Mitsunaga; Louise Shores

The increased importance of continuing nursing education today is directing greater attention to its evaluation. As in other forms of nursing education, evaluation has long had a place in continuing education. Some features unique to continuing education, however, pose difficulties in appraisal, evoking questions about the practicality of evaluation or, more specifically, the practicality of comprehensive evaluation.

In this article, a description of the nature, purposes and structure of continuing education provides the context for consideration of evaluation. What are the purposes of evaluation? What are some of the techniques especially useful for assessment of continuing education offerings and programs? What are the obstacles to evaluation? These are among the questions elaborated here. A framework for evaluation of continuing education is proposed as one means for overcoming the recognized obstacles.

EVALUATION IN THE CONTEXT OF CONTINUING EDUCATION

Continuing education is increasingly a reality of life for nurses. The rapidly expanding technology and knowledge base demand that each nurse continue to learn throughout her professional lifetime. In the broadest sense, continuing education includes all experiences that contribute to the on-going competency of nurses whether or not planned for the specific aim of learning.

In order to explore evaluation of continuing education, it is necessary to use a more limited definition. For purposes of this discussion, the definition of continuing education includes planned learning experiences beyond a basic nursing educational program and excludes those offered for academic credit. The focus of experience includes the transmission of new knowledge and skills as well as the reinforcement or restoration of knowledge and skills acquired in previous experience. Change in learner behavior is the immediate goal of continuing education, while the ultimate goal is to promote a higher quality of service to the nurses' clients through improved practice. It follows that evaluation of continuing education will increasingly address both those goals. While there has been progress in the development of measures for behavior change, the measurement of quality of service remains a largely unresolved problem.

Complexity of evaluation is inherent in the form and process of continuing education. The potential learner population for continuing education is made up of all nurses. Even those not practicing must maintain or obtain current knowledge and skill before returning to practice. The potential subject matter for continuing education is the entire knowledge and skill base of nursing. Format runs the gamut from a one-hour inservice session to a program of study several months in length. The obvious result is a wide range in educational offerings.

Continuing education is by nature markedly different from other forms of education. Its major differentiating characteristic is diversity, given that all nurses comprise the target population and that learners have diverse interests, educational backgrounds, and types as well as years of experience. To meet their needs, the content of programs offered ranges from the more generally applicable, such as group dynamics, to the more specialized, such as gerontological nursing or child abuse. The offerings are organized for presentation as courses, seminars, workshops, or in other formats. The time allotted to any given offering takes into account the objectives to be attained but is constrained by the time learners can afford for travel to the program site and away from other responsibilities. While the same variety of teaching methods used in other forms of education may be found, the structure of continuing education offerings frequently requires flexibility in their use. In the course of a workshop, plans for the use of a particular method may be abandoned in favor of another when such a change is indicated by ongoing events. Diversity, then, characterizes both the form and process of continuing education.

The organizational framework through which continuing education is offered has implications for evaluation. Some offerings¹ are occasional activities of an organization that exists for some other purpose. Other offerings are part of an overall continuing education program² conducted by an organization whose mission is or includes on-going responsibility for continuing education. Single offerings as well as total programs are subject to the evaluation process. The nature of the evaluative questions to be answered and the decisions to be made may vary for each educational project.

The nature of continuing education influences the feasibility of evaluation efforts, but so also does the nature of the adult learner. Adults function as autonomous, self-directed beings. It is understandable that they may resist being evaluated by other adults, especially when such evaluation carries with it a connotation of grading. Adult motivation is linked with current problems or tasks in which the individual is involved. The goals and objectives listed by the educator may reflect the objectives of the learner only in part. It can safely be assumed that a group of adult learners will include a variety of individuals with differing objectives. An evaluation of learning based on the educator's objectives alone will fail to account for this diversity.

PURPOSES OF EVALUATION

Evaluation in continuing education has a practical orientation. Its purpose is to gather and analyze information which will be used for decision making. Before designing evaluation, the evaluator needs to identify the decision makers for whom the results are intended and the questions that are relevant to them. The type of information desired will depend upon the position of the decision maker and the nature of the decisions to be made.

The evaluation results of continuing education offerings and programs have potential utility for five sets of users: the learners, program faculty, program administrators, funding agencies, and credentialing boards or agencies. What each expects in evaluation and would find useful for decision making differs.

Learners are concerned with optimal use of finite personal resources, including time and energy, for continued learning. Evaluation may reveal that certain objectives have not been achieved satisfactorily, but whether additional resources will be invested for the achievement of these or alternative objectives is a decision situation. Evaluation enables comparison of actual outcomes of educational activity with outcomes that were expected by the learner and with outcomes set by others in the form of external standards. If there are gaps among these, the learner may decide that his own expectations or those formulated as external standards require change. Also, he may weigh the achievement of these outcomes against the resource investment required for achievement. Additionally, the learner may decide to focus future efforts on certain gaps and not on others, depending on their importance to his current practice or career enrichment.

Decisions which are made by the faculty concern the design of learning experiences in relation to achievement of their stated objectives by the learner. To improve the offering, the faculty want to know if the format and setting facilitated learning, if the content selected was relevant, if the instructional strategies employed were effective, and if the emphasis and time given to the objectives were appropriate. Disadvantages or deficiencies in any of these elements made apparent by evaluation results require decisions about change with the aim of increasing learning.

Although the program administrator also has concerns about evaluation, these focus more on the offering or program as a whole than on its specific elements. The administrator wants to know about the relative success of an offering in relation to the goals of the total program and its resources. Whether an offering will be continued or discontinued is a policy decision guided by evaluation. If evaluation outcomes show that improvements are needed within an offering, and if improvements mean additional resources, program goals are necessarily considered. The decision might be that the goals can be met as well or better by discontinuing the offering and allocating the resources to other program components. Evaluation has equal, if not greater, importance when the offering is a major program component since its continuation with improvements may be critical.

Funding agencies require, more frequently than not, evaluation of offerings or programs which have been financed. The evaluative information may be utilized for decisions such as continuation of funds for the programs and the level of funds to be provided. Other decisions, however, may be related to the goals of the agency, such as the feasibility of financing similar efforts at other sites. Further, evaluative information serves as data for agency planning and for evaluation of its goals, policies, and operations.

Thus, whether the results of an offering or program are commensurate with the resources expended is of interest to learners, faculty, and program administrators, but for different reasons. Each evaluates for his own purposes, and those purposes may be quite diverse.

Credentialing boards also may evaluate continuing education offerings and programs. These boards or agencies decide on the initial and continuing qualifications of professional nurses. The criteria for recognition vary from group to group, dependent upon the nature of the certification or licensure: such criteria may include evidence of successful completion of an approved continuing education course(s). Successful completion refers to achievement of course objectives as demonstrated by evaluation of the learner. Increasingly, certification is not warranted on the basis of unevaluated learning experiences.

There may be other users of evaluation results besides those mentioned above, each seeking information relevant to his own decision-making functions. It is incumbent upon the evaluator, therefore, to ascertain as a first step who the users will be in order that the evaluation design yields the necessary information.

A FRAMEWORK FOR EVALUATION OF CONTINUING EDUCATION

One organizational framework for evaluation of continuing education may be presented as a spiral, with each loop representing potentially increased complexity in the evaluation process. At each loop, the type of information which may be useful for decision making is specified. Learner satisfaction, knowledge, skill, and attitude change, change in practice/performance, and the relationship of the practice change to quality of service suggest sets of questions of increasing complexity in the evaluation process. The questions of cost effectiveness may be represented as a core threaded through the entire spiral (Fig. 1).

An evaluation design for a single offering may be located within the spiral and thus viewed in the larger context of evaluation. For example, a pre- and post-test designed to measure knowledge gain in a three-hour didactic course could readily be identified with the second loop of the spiral. The information not provided by the approach may be clearly identified. The degree of learner satisfaction, skill and attitude change, change in practice, and the relationship to quality of service are untouched, yet that approach may do a superb job for measuring knowledge gain.

If an evaluator discovers that all approaches used for program evaluation address the lower loops of the spiral, the framework may provide the impetus for goal setting, e.g., to identify one offering within the next year in which actual practice change will be measured, or to develop at least one evaluation design to measure the impact on patient care of practice change resulting from one or more offerings.

It is clearly impractical to gather information in all components for every single offering. The evaluation process would be too cumbersome to be useful at all. Rather, a plan for evaluation of a total program of continuing education may be established using some combination of the components so that, within a specified period of time, there would be information within each component and concerning the cost effectiveness of the approaches used.
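By way of illustration only, this planning idea can be expressed as a simple record-keeping sketch: each offering in the plan is mapped to the spiral components its evaluation design addresses, and the plan is then checked for components left uncovered during the planning period. The offering names and component assignments below are hypothetical and are not drawn from any particular program.

    # Illustrative sketch (hypothetical data): map offerings to the spiral components
    # their evaluation designs address, then flag components left uncovered this period.

    SPIRAL_COMPONENTS = [
        "learner satisfaction",
        "knowledge, skill, and attitude change",
        "change in practice/performance",
        "relationship of practice change to quality of service",
        "cost effectiveness",   # the core threaded through the spiral
    ]

    # Hypothetical evaluation plan for one planning period.
    plan = {
        "coronary care update": ["learner satisfaction",
                                 "knowledge, skill, and attitude change"],
        "bladder training workshop": ["learner satisfaction",
                                      "change in practice/performance",
                                      "cost effectiveness"],
    }

    covered = {component for components in plan.values() for component in components}
    uncovered = [c for c in SPIRAL_COMPONENTS if c not in covered]

    print("Components not yet addressed this period:", uncovered)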

Several factors may be considered in the development of the plans for evaluation. Decisions about the type of evaluation for each specific offering will take into account the nature and objectives of the offering, as well as the length of time devoted to the learning experience. Obviously, one may not measure outcomes that are not attributable to the offering, given its design and time frame. On the other hand, if no offering within the total scope of a program lends itself to the possibility of producing measurable behavioral change or impact on quality of service, the overall goals of that program need careful consideration.

Increasing emphasis on accountability and relatively limited resources for continuing education, combined with concern for rising health care costs, set the stage for a cost effectiveness imperative. It is not enough that continuing education produces results; it is necessary that those results be accomplished at a reasonable cost. It follows that cost-benefit questions are an important ingredient in program evaluation and may be viewed in terms of any or all of the loops in the evaluation spiral.

APPROACHES USED IN EVALUATION OF CONTINUING EDUCATION

The Reaction Form is frequently used for continuing education. Such a tool provides information about the learner's self-perception of learning and satisfaction with the offering or its specific components. The respondent is asked to identify such items as: most and least helpful parts of the experience, satisfaction and dissatisfaction with the facilities, and self-perception of knowledge, attitude, or skill change. The data thus obtained clearly do not measure actual learning or application of learning. However, they may provide information for some decision making, provided several basic assumptions are accepted. Those assumptions include: (1) that high learner satisfaction is directly related to the tendency to seek additional related learning experiences; (2) that high learner satisfaction is related to actual learning; (3) that an acceptably comfortable environment is conducive to learning; and (4) that self-perception of having learned is directly related to actual learning. Based on one or more of those assumptions, the educational planner may choose to repeat those approaches which produce the highest learner satisfaction. Facilities identified as deficient for the learner may be modified or avoided for future offerings.

Research is needed to support or reject the assumptions on which the use of reaction forms is based. Reaction forms have an advantage in that they are inexpensive to develop and administer, and that any educational experience regardless of form or process will produce some learner reaction. However, there are severe limitations for drawing any conclusions about the higher levels of the evaluation spiral from Reaction Forms.

Learner attainment of the stated objectives of an offering is measured in a variety of ways, including paper and pencil tests, structured interviews, attitude scales, and direct observation of skill performance. The specific tools to be selected are based on the types of learner objectives, whether cognitive, affective, or psychomotor. The nature of the decisions to be made with the evaluation data will determine the methodology. For example, if the evaluation results will be used to determine which learners have achieved a predetermined level of knowledge, skill, or attitude required for a promotion, a post-test situation is sufficient to provide that information. It may be that some were at the desired level prior to the learning experience (and thus the program was not cost effective); nevertheless, a post-test would provide the needed information. On the other hand, the user's question may be in terms of the effectiveness of the offering to attain stated objectives. In that situation, it is necessary to know the level of the learners' understanding or behavior for each of those objectives prior to the learning experience so comparison may be made with the level following the experience.
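To make the distinction concrete, the sketch below works through the two questions with invented scores; the passing level, the scores, and the variable names are hypothetical and serve only to show why the pre-test is dispensable for the first question but essential for the second.

    # Illustrative sketch with invented scores; not data from any actual offering.
    passing_level = 80

    # Paired pre- and post-test scores for the same five learners.
    pre_scores  = [82, 55, 62, 81, 49]
    post_scores = [85, 72, 80, 83, 70]

    # Question 1 (post-test alone suffices): which learners reached the required level?
    reached_level = [post >= passing_level for post in post_scores]
    print("Learners at or above the required level:", sum(reached_level))

    # Question 2 (pre- and post-test needed): how effective was the offering itself?
    gains = [post - pre for pre, post in zip(pre_scores, post_scores)]
    mean_gain = sum(gains) / len(gains)
    print("Mean gain across learners:", mean_gain)

    # Note that two learners (pre-test scores of 82 and 81) were already at the required
    # level; a post-test pass count alone would credit the offering with their standing.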

Behavior change consequent to learning poses some logistical problems for many continuing education programs. If learners are dispersed throughout a large geographical area, such evaluation will require mailed responses or travel. The resulting expense may limit the frequency with which questions of behavior change can be addressed. Data about behavior change may be obtained from such sources as the learner's own self-perception, from the perception of others, i.e., subordinates, peers, super-ordinates, from direct observation, or from chart audit. Specific criteria assist the rater, observer, or auditor in determining the attainment of desired behaviors. Need for pre-offering ratings or observations may be determined by considering the decisions which will be dependent on the evaluative data. Whenever feasible, pre- and post-offering data will assist the evaluator to make some inferences, however guarded, about the effectiveness of the offering for producing specified behavior change.

The ultimate purpose of continuing education is to produce a change in the quality of service and the outcome of that service for the client. Thus, in order to demonstrate that continuing education is truly effective, it is necessary to find ways to measure that outcome for the patient. Therein lies one of the major challenges in the design of evaluation of continuing education.

Where the learner population is drawn from a wide geographic area and a variety of practice settings, the barriers to evaluation of patient outcome are obvious.

Access to the patient populations for measurement of outcomes is hampered by distance, varied practice and record keeping systems, as well as the sheer number of agencies represented. A second consideration is that measurable change in patient outcome may be dependent on a combination of favorable factors including a system ready to incorporate the change and the learner being a person with power or support sufficient to accomplish the change.

One approach is to periodically conduct an evaluation of the effectiveness of a specific offering using a relatively self-contained population. For example, if a course including bladder training is offered to the staff in five local nursing homes, the logistics of gathering pre- and post-observations of patient continence are simplified. Thus it may be possible to demonstrate change in patient outcome which seems to result from that offering. It is interesting to note that this evaluation still does not answer the question of whether the same course, offered to one or two persons from each of 25 nursing homes, would produce any measurable results.

Nursing audit of patient outcome offers one source of data, based on the assumption that a change in the record represents an actual change in patient outcome rather than a change in documentation. Pre- and post-offering audit of patient outcomes which may have been influenced by the offering provides the data needed to determine the offering's effectiveness.

EVALUATION ISSUES IN CONTINUING EDUCATION

The foregoing discussion may have conveyed the impression that evaluation of continuing education offerings or programs is widely valued as an activity, given the potential utility of its outcomes for decision makers. Further, it may have appeared that when an array of suitable methods and techniques has evolved, most of the difficulties in evaluation will have been removed. If such a sanguine view has been communicated, then it becomes obligatory that existing issues be considered. Some of these issues represent obstacles to the evaluation process, while other issues are inherent characteristics of continuing education which affect the efficiency of evaluation.

Among the obstacles to evaluation activity, not the least in terms of impact are the limitations of resources, more specifically, time, personnel, and finances. A thriving continuing education program inevitably encounters many pressures to produce more offerings. Productivity as measured by number of offerings and enrollment has high, indeed critical, importance for a self-supporting program whose principal source of income derives from the fees or tuition generated. In addition, a successful program is subject to demands from the target population for a multiplicity of offerings. Consequently, program maintenance, or expansion, may take precedence over evaluation in the allocation of limited resources. Under these circumstances, the argument might be that "something is better than nothing," resulting in evaluation which is inadequate by design. If the evaluation compromise does not occur at the outset of the process, it may occur at the end, in that the analysis of results provided to the decision maker may be less than comprehensive. In either case, evaluation receives short shrift because of other demands on the available resources.

Another obstacle is the response of adult learners to evaluation which involves appraisal of their performance. Resistance is not infrequent, especially when the learner's goals in the situation have little relationship to the performance dimensions being judged. Moreover, performance appraisal is sometimes perceived to negate competence demonstrated in professional experience, thus threatening self-esteem. While self-assessment may evoke less resistance, it is neither possible nor appropriate in every situation. The learning environment, however, probably influences attitudes toward evaluation more than the particular techniques utilized. An environment which is conducive to self-directed learning also minimizes resistance to appraisal of deficits and gains.

The efficiency and efficacy of evaluation is further affected by factors which characterize continuing education. Not uncommonly, the enrollment in a popular offering markedly exceeds the number of students for whom the course was designed. Concession to the demand assures financial success but also modifies the conditions for learning. In so doing, the validity of assumptions for the evaluation design is changed. When certain events do not or cannot occur as was assumed, it is possible that learning will not take place in the expected ways. The accuracy of evaluation results can thus be questioned. Inaccurate data may lead to inappropriate decisions and thus be more dangerous than no data at all.

The diversity of the learners in educational preparation, experience, and interests has implications not only for the assessment techniques utilized but also for the interpretation and relative utility of outcomes. For the self-directed adult learner, individualized assessment would be indicated to identify gains made in extent and kind which, in turn, motivates continuing learning. Individualized assessment, however, may or may not be indicated by course objectives; rather, the objectives may call for the application of a single standard to the aggregate of learners. Particular qualities of the individual are not relevant then in appraising performance; instead, performance is measured and interpreted in relation to the standard. This is often dissatisfying to adult learners who perceive the feedback to have limited utility for their own decision making.

A different issue arises in relation to course assessment where faculty-learner contact time is minimal. Achievements expected in a course two hours in length, for example, would be less ambitious than those in an eight-hour course which allows more time for contact between faculty and learners. In the former, some increment in knowledge is the only outcome usually anticipated, and evaluation of such short courses yields only a narrow range of information. Thus, the nature of the anticipated evaluation data may not warrant excessive use of time and money. Similarly, when the pressures to produce more offerings and scarce resources are considered, the feasibility of evaluation becomes an issue.

At the same time, requirements of credentialing programs may be a source of counterpressure. Some programs credit participation in "approved" continuing education offerings. If approval of the offerings is contingent upon a plan for evaluation of learner performance, and moreover, if the resulting information must be provided to the credentialing agency, the issue assumes another dimension. The importance of continuing education credits earned by the learner is a reality that cannot be ignored.

Aside from the questions regarding the feasibility of evaluation in general, efficiency of the process is influenced by factors characteristic of continuing education. Geographical dispersion of learners, for instance, poses special difficulties for follow-up procedures, particularly if assessment of clinical practice in the work setting is indicated. The evaluator can employ various techniques other than direct observation of clinical performance. While these techniques may be more efficient, the data produced are likely to be less reliable and less valid reflections of practice. The results are at best only indirect evidence of performance level.

Flexibility within continuing education offerings is yet another factor that detracts from efficiency. Departures from the intended design of a course while it is on-going may be a virtue on one hand in terms of adapting to the learning needs of participants. On the other hand, they may be disruptive to the evaluation process by necessitating quick changes in the methodological protocol, with insufficient planning time.

Finally, an issue increasing in salience is the relationship of continuing education to the professional competency of participants. Credentialing boards and agencies have given prominence to the role of continuing education but have also made its limitations, including evaluation, more apparent. For the most part, continuing education does not assure continued competence but, rather, educational achievement. Therefore, care must be taken that evaluation results are interpreted in the appropriate context.

COMMUNICATION OF EVALUATION RESULTS TO USERS

If the purpose of evaluation in continuing education is to gather and analyze information for decision making, then consideration should be given to communication of that information to the decision maker. The type and timing of communication will be determined in part by the nature of the decision to be made. For example, if pre-test information is to be used for decisions about content and course structure, it logically must be communicated to the teacher before the offering. On the other hand, if it is to be used on a comparative basis with post-test data to determine the course effectiveness, it may be inappropriate for the teacher to analyze the pre-test data until the course is over, thus avoiding "teaching to" the gaps in the pre-test and significantly influencing the subsequent evaluation.

Communication of evaluative data to the learner has a direct effect on the learning process itself. Immediate feedback, written or oral, during the course of learning will aid the learner in decisions about the amount and direction of energy needed for goal achievement. Such feedback tends to be rewarding to the learner and may increase motivation toward the goals of the offering.

The learner retains the right to the evaluation data about his own progress. Opportunity to view those data should be provided or available on request. Data identifiable to the individual are kept confidential except with his explicit or implicit approval.

Following the completion of an offering and periodically in evaluation of an on-going program, an analytic and summative report of the available evaluation data will be useful as a means of communication. Such reports will aid program developers in improvement of program design and strategies for learning and will provide a mechanism for accounting to policy makers and funding agencies.

An evaluation cycle which began with determination of the decisions to be based on the data gathered is completed when those data are analyzed, reported to the appropriate decision makers, and actually used in their decisions.

REFERENCES

  • 1. Offering or Course: One segment of a continuing education program or a series of learning experiences dealing with specific content. In Guidelines for State Voluntary and Mandatory Programs. ANA, 1975, p. 23.
  • 2. Program: Planned organized effort directed toward accomplishing major objectives. A program includes many segments which are described as educational offerings. In Guidelines for State Voluntary and Mandatory Programs. ANA, 1975, p. 24.

10.3928/0022-0124-19771101-04
