Since the invention of the World Wide Web in 1989 (Jackson, 2000), the Internet has transformed adult distance education activities. It has certainly become an innovative tool in nursing education programs over the past decade, with many schools of nursing integrating Internet technology into their programs. Research has also helped to validate and legitimize distance education. Russell’s (1999) landmark “no significant difference phenomenon” was a major impetus for distance education. The power and reach of the Internet have moved distance education and online learning from the margins to the mainstream.
Despite the magnitude of this pedagogical innovation, reports of systematic program evaluation of online education are scarce in the literature. Thus, a key objective of this literature review is to ascertain the extent of systematic evaluation of such programs. Although online course offerings provide greater access to adult education, educators have expressed concern that online course availability has outpaced the evaluation of these courses for quality, particularly at the program level (Avery, Cohen, & Walker, 2008; Bangert, 2006; Billings, 2000). Nurse educators note that online education of adults has been adopted rapidly, but it has received less evaluation of quality, efficacy, and cost than have traditional educational practices (Leners, Wilson, & Sitzman, 2007; Mills, 2007). To what extent are schools of nursing systematically evaluating online education at the program level? What evaluation tools are used? What are they finding? How are the results used? This article seeks to provide an integrative review of the nursing and adult education literature to ascertain what systematic evaluation of graduate-level, online education programs in schools of nursing is being performed.
Why Evaluate Online Nursing Education?
Program evaluation is a component of every major program planning theory or framework used in adult education (Caffarella, 2002; Cervero & Wilson, 2006; Forester, 1989; Tyler, 1949). Evaluation is defined as “the identification, clarification, and application of defensible criteria to determine an evaluation object’s value (worth or merit) in relation to those criteria” (Fitzpatrick, Sanders, & Worthen, 2011, p. 7). Fitzpatrick et al. (2011) defined program as “an ongoing planned intervention that seeks to achieve some particular outcome(s), in response to some perceived educational, social, or commercial problem” (p. 8). Therefore, program evaluation is the systematic collection of information about the activities, characteristics, and outcomes of programs to make judgments about the program, improve or further program effectiveness, increase understanding, and inform decisions about future programming (Patton, 2008). The purpose of program evaluation of educational initiatives that include online activities will vary with the informational needs of each stakeholder or stakeholder group. These may include (a) justification of investment, (b) measuring progress toward program objectives, (c) measuring quality or effectiveness, (d) providing a basis for improvement, and (e) informing decision making (Thompson & Irele, 2007). In addition, program evaluation of online nursing education is necessary to inform stakeholders, who may include students, faculty, staff, administrators, policy makers, boards, consultants, community groups, accrediting agencies, government organizations, businesses, and employers (Rovai, 2003).
Two basic types of evaluation are available. Formative evaluation focuses on program improvement and often provides information for judging the merit or worth of a part of a program. Summative evaluation focuses on providing information for decision making (e.g., for program adoption, continuation, or expansion; Fitzpatrick et al., 2011). Schools of nursing often use program evaluation processes to ascertain the success or failure of the programs in meeting predetermined goals and standards, rather than to achieve program improvement (Chapman, 2006). An evaluation methodology based on a theoretical model can help to ensure that the evaluation plan is both comprehensive and coherent (Bevil, 1991). Although the literature recognizes several types of evaluation models or approaches, Fitzpatrick et al. (2011) implied that no one approach is best. Therefore, choice is not empirically based but is a matter of evaluation purpose, evaluator preference, and stakeholder needs. The challenge is in deciding which approach or combination of approaches is most relevant.
General components and guidelines for systematic program evaluation (Fitzpatrick et al., 2011; Patton, 2008; Rossi, Lipsey, & Freeman, 2004; Rovai, 2003) identified in the evaluation literature include:
- Identify stakeholders and clarify the purposes of the evaluation.
- Analyze the context of the evaluation and set boundaries on what is to be evaluated.
- Determine the evaluation approach or approaches.
- Identify and select the evaluation questions and criteria.
- Conduct the evaluation, using identified methods for data collection and analysis.
- Interpret, report, and use the results.
Anecdotal evidence suggests that higher education institutions, such as schools of nursing, are familiar with program evaluation of the more traditional educational offerings to ensure quality and to maintain accreditation from their state, region, and either of the two professional accrediting bodies—the Commission on Collegiate Nursing Education or the National League for Nursing Accrediting Commission (Story et al., 2010). However, the Internet has rapidly changed education’s context (Thompson & Irele, 2007). Thompson and Irele (2007) stated that “programs, institutions, and societies must make significant decisions as to how they wish to influence or shape these changes, and/or be shaped by them” (p. 419). With so much at stake, schools of nursing should be concerned with the evaluation practices that may result in program improvement. Therefore, program evaluation plans must incorporate distance education activities to reflect the differences in education delivery. Evaluation standards and criteria from the various accrediting or reviewing organizations, such as the Commission on Collegiate Nursing Education, the National League for Nursing Accrediting Commission, and specialty and regional organizations, can be used as guidelines to create a single, integrated, comprehensive evaluation plan encompassing both traditional and distance education course offerings (Suhayda & Miller, 2006).
Definition of Terms
In this article, the term Web-enhanced education applies to a combination of face-to-face and online (synchronous and asynchronous) delivery of courses and programs. The term Web-based applies to courses and programs that are delivered online only (synchronous and asynchronous). The term online education refers to both Web-enhanced and Web-based education. Distance education technologies apply only to the use of the Internet and the World Wide Web.
This literature review focuses on systematic program evaluation of online nursing education at the graduate level, including master’s, post-master’s, and PhD education. Graduate-level programs were chosen because more distance education courses, including nursing courses, are offered at this level (Aud et al., 2011; Potempa et al., 2001), and the longevity of these offerings has allowed schools of nursing more opportunity for data collection and experience in program evaluation.
The primary criterion for inclusion in this review was a focus on evaluation, based on research and theory development in systematic program evaluation of online nursing education. Dissertations, research presentations, and technical articles were also included. Teaching strategies without a systematic approach to evaluation, practice-oriented literature, and anecdotal literature were omitted. The databases searched for this literature review were Web of KnowledgeSM, CINAHL®, PubMed®, Academic Search™ Complete, JSTOR®, Education Research Complete™, and ERIC™. Terms used in the search began with program evaluation (summative and formative), which garnered evaluations of entire programs and specific outcomes and constructs measured for formative evaluation and program improvement. After the initial search protocol, the additional phrase graduate nursing programs was used to focus and narrow the results. Finally, the following key words were also used in the search: distance education/learning, e-learning, Web-based education, Web-enhanced education, Internet-based education/learning, and online education/learning because these terms are used interchangeably in the literature. Because the World Wide Web was not invented until 1989 (Jackson, 2000), only research articles from 1989 to the present were included.
The initial search criteria yielded 156 articles. The articles were briefly reviewed for appropriate terminology to indicate a potential match with the research topic. Most of these articles were descriptive or practice oriented, or described teaching strategies without an empirical approach to evaluation research. For example, an informative article by Suhayda and Miller (2006) described the steps used in developing an evaluation plan based on a management-oriented approach to program evaluation.
Twenty-seven articles were considered to focus on evaluation of online graduate-level nursing education. Three of the articles included mixed undergraduate and graduate samples or combined graduate nursing students with students from another discipline (Billings, Skiba, & Connors, 2005; Johnson, Posey, & Simmens, 2005; Seiler & Billings, 2004). Nineteen of the articles were focused at the individual-course level and did not indicate how the results could contribute to total program evaluation. The topics of these 27 articles were sorted into five broad categories: (a) the success, failure, and use of a variety of technology software (Little, Passmore, & Schullo, 2006), (b) comparison between online and traditional courses (Beta-Jones & Avery, 2004; Cragg, Dunning, & Ellis, 2008; Woo & Kimmick, 2000), (c) teaching and learning effectiveness and strategies (Daroszewski, Kinser, & Lloyd, 2004; Huckstadt & Hayes, 2005), (d) teaching and learning outcomes (Edwards, 2005), and (e) measurement of student perceptions and experiences of online nursing education (Billings et al., 2005; Fearing & Riley, 2005; Seiler & Billings, 2004; Wills & Stommel, 2002).
Although these studies on online education have informed and significantly contributed to nursing and higher education, they did not indicate how the results fit into a larger program evaluation plan, how the research results could be used for program improvement, or how the evaluation was used by stakeholders in decision making, nor were these elements of their focus. Searches for research articles on cost effectiveness in online nursing education delivery yielded no results.
Five articles potentially met criteria for systematic program evaluation and were included in the analysis. The Table (available as supplemental material in the online version of this article) provides a summary of the five articles, including authors, approaches to evaluation, methodology used, findings, and use of evaluation results. Two of the articles (Lindsay, Jeffrey, & Singh, 2009; Singh, Jeffrey, & Lindsay, 2008) used a method of evaluation found in the evaluation literature and will be presented first. These two articles were also more inclusive of components and guidelines used to evaluate programs. One article (Avery et al., 2008) used a method of evaluation found in the evaluation literature and distance learning (DL) evaluation frameworks and will be presented third. The remaining two articles (Ali, Hodson-Carlton, & Ryan, 2002; Mills, 2007) are based on DL evaluation frameworks and will be presented last.
Table: Summary of Evaluation Frameworks and Models used in Online Education Programs
Lindsay et al. (2009) evaluated the experience of nine nursing faculty who developed and taught a Web-based master’s degree nursing program. Stakeholders were identified as students, faculty, administrative partners, and employers. Lindsay et al. (2009) noted that all stakeholders participated in the evaluation, but the article focused on faculty experience. The article sought to evaluate graduate nursing faculty experience with developing and teaching in the master’s online program. Summative, formative, and accountability evaluations were performed to assess the implementation of the program and to detect or predict defects in program design. Portions of Stufflebeam’s CIPP (Context, Input, Process, Product) program evaluation model (considered a management approach to program evaluation; Fitzpatrick et al., 2011) provided accountability indicators for the process related to program implementation and the product related to program outcomes on which to focus the evaluation. The stakeholders identified four evaluation questions that probed the faculty’s experience in implementing this online graduate program.
Qualitative data were collected through journaling and teaching focus groups; the journal entries and focus group transcripts were analyzed to identify achievements and challenges related to online education. Achievements included “establishing congruence between learning outcomes and content with course processes” through online program delivery (Lindsay et al., 2009, p. 183) and satisfaction with managing the workload of transitioning to online education methods. The satisfaction of bringing faculty research and experience to the preparation process was also highlighted. Challenges included pedagogical difficulties and the time commitment required for course development, course management, and student engagement. The evaluation results led to plans to include employers as stakeholders, to review the curriculum to possibly add a thesis option, and to increase faculty development in technology. This program evaluation included all components of the process; however, a small convenience sample was used.
Singh et al. (2008) evaluated student perspectives and experiences with a Web-based master’s in nursing program. The stakeholders were identified as students, faculty, administrative partners, and employers. The study sought to perform summative and formative evaluation for accountability of the program through documenting graduate student experiences from orientation to graduation during the period of 2005 to 2007. This comprehensive evaluation used nine questions focused on processes and outcomes of teaching and learning from the students’ perspective. The researchers adopted a participatory evaluation approach aimed at utilization of results.
The sample consisted of the first cohort of students who entered and completed the program (n = 11) over a period of 2 to 3 years. Data collection methods included three questionnaires, journaling, focus groups, and individual interviews. Qualitative data analysis from journaling led to a list of achievements, challenges, and recommendations. Achievements included staying in the course and supporting peers. Student challenges included balancing the demands of the program against other priorities and adjusting to shared methods of assessing students’ work. Singh et al. (2008) reported using the results in three areas. Formative evaluation yielded changes in program orientation and alignment of online course goals with goals of the program. Through summative evaluation, student demand for more face-to-face interaction was addressed via adoption of Web-enhanced and Web-based program delivery.
Singh et al. (2008) offered narrative results for the questionnaires but did not share quantitative descriptive results or offer results from the interviews or focus groups. A convenience sample was used and the sample size was small. However, this study contained all components of the evaluation process.
Avery et al. (2008) evaluated a Web-enhanced, 16-course program for three master’s specialty areas of nursing at a large midwestern U.S. university. Evaluation stakeholders were not specified, but nursing faculty seemed to be the focus. The project sought to evaluate the quality of the 16 Web-enhanced courses. The faculty’s shared beliefs on evaluation areas were well grounded in the literature and resulted in the selection of four quality standards: (a) course mechanics, (b) course organization, (c) student support, and (d) communication and interaction in online education.
These researchers used methods from existing instruments in the literature to develop an appropriate evaluation instrument. Instrument reliability and validity were determined by a pilot test focused on a graduate ethics course that had been taught several times by one faculty member. The final instrument consisted of 20 items and one comment question. Two faculty members with the most experience in online teaching and one educational technology specialist collected data through peer review, interviews, and the developed instrument.
To promote utilization of evaluation results, Avery et al. (2008) used Patton’s (2008) utilization-focused approach to evaluation for data analysis. Utilization-focused evaluation is often subcategorized as a participant approach to evaluation. Researchers reported results through descriptive statistics. Qualitative analysis was accomplished by reviewing each item in follow-up faculty interviews. The presence of goals and objectives appropriate to course level was rated the highest at 4.51 of 5 points. The presence of a written connection between the course objectives and learning activities was scored lowest at 2.88 of 5. Themes derived from qualitative data included support for technology, support for different learning styles, importance of student–student interaction, and the need for course objectives to match learning activities. Findings were reported to the faculty as a whole for quality improvement of online courses. The evaluation received positive faculty response and led to program-specific decisions regarding program improvement and instrument revision.
Avery et al. (2008) specified the purpose and goals of the evaluation and developed quality standards using appropriate literature. The resulting evaluation instrument was specific to online and blended delivery. Data collection and analysis were consistent with qualitative and quantitative methods and a utilization-focused approach to program evaluation. Results were disseminated with utilization in mind. This study contributed to the development of best practices in program and evaluation instrument quality.
The researchers did not share the instrument or questions used for data collection. Not knowing the scale used or the interpretation assigned to the questions created difficulty with attaching exact meaning to the statistical results. Input from students, alumni, and administrators would have added to the evaluation plan. Cost was omitted from the study, and the researchers noted that broader overall program evaluation was beyond the study’s intended scope.
Ali et al. (2002) used a scenario approach to research how a midwestern nursing school developed, implemented, and continually evaluated Web-based learning in a master’s program. Stakeholders were not specifically identified, but students and faculty seemed to be the focus. The purpose was to evaluate the development and implementation of a Web-based master’s program over a 2-year period from 1998 to 2000, with a focus on assessing student satisfaction with the program.
The instrument was based on Chickering and Ehrmann’s (1996) principles of good educational practice, restructured for distance education technologies. From these principles, criteria for evaluation were developed, including course content, interaction, participation, critical thinking, faculty preparation, communication skills, and technical skills. The final nine-item survey tool used 5-point Likert-type agreement response choices and included three open-ended questions. Results were calculated using descriptive statistics. Specific areas of satisfaction included content and the currency of content, critical-thinking exercises, faculty–student interaction, student–student interaction, time allotted for assignments, and opportunities to apply theory through case study. Negative responses included lack of timely feedback and too many assignments. The authors did not specifically note the improvement activities or decisions that resulted from this study.
Although program specific, the study by Ali et al. (2002) yielded results generalizable to Web-based courses. The specific focus was on students’ perspectives of online courses. The context was identified. Data collection methods and results were appropriate for the study. On the other hand, the researchers did not report the questions used or share the evaluation tool. As a result of the study and literature review, Ali et al. presented a consumer guide for Web-based program selection. However, the article did not directly address utilization because it did not include specific program improvements or decisions prompted by the evaluation results.
Mills (2007) used a comparative study approach in conducting a program evaluation of an online versus a traditional master’s degree and post-master’s certificate program. Stakeholders were identified as students, faculty, and administrators at a midwestern school. This summative evaluation was performed to determine whether to aggressively continue the distance learning program as part of the strategic initiatives for the school of nursing. Mills hypothesized that “student socio-demographic and admissions data and all student outcome measures and program performance would be comparable between distance learning and on-site students, with two exceptions” (p. 74): increasing student access (a goal of the online program) and demonstrating marketability of the post-master’s program for online delivery. The theoretical approach was based on an online evaluation framework from the EDUCAUSE Center for Applied Research (Newman, 2003) that was modified to evaluate only three of the six constructs: student outcomes, program effectiveness, and organizational effectiveness.
Archival or secondary data were collected from 17 courses and from all students within those courses (N = 270). Mills (2007) examined the program for sociodemographic and student-related outcomes (cumulative grade point averages and certification pass rates), program effectiveness (enrollment, retention, and completion rates), and organizational effectiveness (cost). Data were analyzed using both descriptive and qualitative measures. Master’s DL students tended to be approximately 6 years older than on-site students. No differences existed in other demographics. No significant differences (p = 0.169) were found in cumulative grade point average between on-site and online students.
In examining program effectiveness, retention rates were significantly higher for DL groups than on-site groups (p = 0.038). Organizational effectiveness data showed a higher cost in technology and faculty workload for DL. However, enrollment had shifted from on-site to DL courses and program enrollment was increased overall. On-site and online offerings did not differ in tuition or fee structure. Evaluation data reflecting the program’s success and organizational effectiveness led to administrative decisions to continue the DL program (Mills, 2007).
Mills’ (2007) study had some limitations. Because it used archival data, some information for data collection and analysis was lacking. A convenience sample was used. The study addressed major components of the program evaluation process, including utilization of results. Three measures of the EDUCAUSE framework were noted to be outside of the scope of the evaluation: institutional transformation, institutional outcomes, and faculty-related outcomes. The article also did not address the framework’s measures of cost benefit, cost analysis, or return on investment. These omissions limited the completeness of the summative evaluation.
A review of seven discipline-appropriate databases, using 10 key terms, revealed a paucity of published systematic program evaluation research on online graduate education. Lack of a program approach to evaluation of nursing distance education activities persists in the literature, despite a history spanning more than two decades. Ali et al. (2002) reported evaluation results from as early as 1998. The majority of the literature on online nursing education has been aimed at the individual course level (Ali et al., 2002; Avery et al., 2008).
Types of evaluation, formative or summative, were sometimes noted in the review articles. Stakeholders were generally identified as students, faculty, administrators, and alumni. In those studies where all stakeholders were not indicated, the researchers did note the primary focus and purpose of the evaluation. In general, student and faculty perspectives seemed most important to the evaluators, although Mills (2007) included program performance and institutional effectiveness.
Two evaluations (Lindsay et al., 2009; Singh et al., 2008) used two different approaches: (a) a participatory approach, focused on utilization; and (b) the CIPP management approach, focused on decision making and accountability. Suhayda and Miller (2006) also described the use of a management approach using the CIPP model to frame their school of nursing program evaluation plan, which includes traditional, Web-enhanced, and Web-based delivery. One evaluation (Mills, 2007) used a framework from the EDUCAUSE Center for Applied Research (Newman, 2003) that focused on three of six measures. Ali et al. (2002) and Avery et al. (2008) adopted Chickering and Ehrmann’s (1996) principles of good education applied to DL as a theoretical approach. This wide variety emphasizes how choice of evaluation approach depends on stakeholder needs, purpose of the evaluation, and evaluator expertise.
Data collection methods included questionnaires, focus groups, interviews, journaling, and archival data collection. Data analysis methods were both quantitative and qualitative. The research, using multiple approaches to data collection, did not always present all results. Evaluation results were used for program improvement and decision making. Program improvement included processes and strategies aimed at student satisfaction and teaching effectiveness within the program. When indicated, resulting decisions included continuation of programs and modifications or additions of online courses within programs.
Four articles did not evaluate cost effectiveness or offer cost benefit analysis, reflecting a need for research in this area. This supports Green’s (2009) findings from a national survey of senior campus officials responsible for managing online and distance education programs. Green noted that almost half (45%) of the survey participants checked “unknown” when asked if their program made or lost money. The “unknown” responses ranged from 26% in private master’s program institutions to 63% in community colleges. Green (2009, para. 5) observed that many campuses lack the cost accounting methodology to assess the actual profits resulting from the revenues associated with rising enrollments in online programs.
The literature also lacked research on the cost benefit for students and how students choose between online education and traditional on-campus education. A recent survey (Green, 2009) reported that online tuition may be the same, lower, or higher for online versus traditional campus students. According to the same survey, students in online rather than traditional on-campus programs may pay 10% or more in additional fees.
Research Agenda for Online Nursing Education Program Evaluation
As stated previously, Thompson and Irele (2007) indicated several purposes for online education initiatives and activities, including: (a) justification of investment, (b) measuring progress toward program objectives, (c) measuring quality and effectiveness, (d) providing a basis for improvement, and (e) informing decision making. The current trends found in program evaluation research of online nursing education also support a research agenda in these areas. First, to justify their investment, schools of nursing need to evaluate the cost benefit of online programs. Comparison studies can assess graduation rates before and after implementation in relation to faculty time, the cost of technology software, and the cost of hiring technology support. Cost benefit studies can compare traditional programs with online programs, weighing instructional technology costs against student tuition and technology fees, as well as indirect, value-added benefits (e.g., increased employment opportunities and increased access to education). This comparison can offer a basis for students to choose online versus traditional education.
To measure progress toward program objectives, researchers must evaluate how program objectives correspond to assessed student needs. End-of-program evaluations or alumni surveys may be used to determine whether program objectives met graduates’ needs in finding employment or obtaining promotion.
Various methodological designs can be used in measuring quality and effectiveness in program evaluation research. Qualitative research studies, including case study approaches with interview and focus group methods of data collection, can provide insight into student and faculty perspectives and how students respond to new and innovative technology, and can identify strengths and weaknesses of online education programs. Program evaluation research using mixed methodologies tends to be more comprehensive. For example, studies based on mixed methods can better ascertain the values of schools of nursing faculty, staff, students, and administration that may underlie a program. By using a combination of data collection methods, researchers can investigate program reputation; assess the added value of auxiliary supports, such as library and student services; examine program expectations and outcomes; and even examine the involvement of stakeholders in the evaluation process.
Program evaluation research is needed to test theoretical evaluation models or approaches to determine which are the most useful and valuable in program planning and evaluation of online education. This contributes not only to evidence-based educational practices but also to theory development and best practices in online education.
Finally, studies are needed to assess the extent that program evaluation is used as a basis for improvement and decision making. Some future research questions might include:
- To what extent are schools of nursing systematically evaluating their online nursing education programs at the master’s degree level?
- To what extent are schools of nursing using the evaluation results in revising content?
- What are the barriers and facilitators to systematic program evaluation processes and utilization of online nursing education programs at the master’s degree level?
- What program characteristics influence the systematic program evaluation processes and utilization of online nursing education programs at the master’s degree level? Data analysis can be reported as descriptive statistics, using means, standard deviations, and simple regressions.
Practice and research support the use of the Internet for education. Growth will continue as more sophisticated technology is developed and made accessible to adult learners. Increasing demands in the nursing workforce will mandate the need for quality programs that are cost efficient, cost effective, and beneficial to students and organizations. The challenge facing online nurse educators is gathering enough data to perform systematic program evaluation to articulate distance education’s place in teaching–learning (Rovai, 2003).
- Ali, N.S., Hodson-Carlton, K. & Ryan, M. (2002). Web-based professional education for advanced practice nursing: A consumer guide for program selection. The Journal of Continuing Education in Nursing, 33, 33–38.
- Aud, S., Hussar, W., Kena, G., Bianco, K., Frohlich, L., Kemp, J. & Tahan, K. (2011). The condition of education 2011 (NCES 2011-033). Washington, DC: U.S. Department of Education, National Center for Education Statistics. Retrieved from http://nces.ed.gov/pubs2011/2011033_1.pdf
- Avery, M., Cohen, B. & Walker, J.D. (2008). Evaluation of an online graduate nursing curriculum: Examining standards of quality. International Journal of Nursing Education Scholarship, 5, 1–17. doi:10.2202/1548-923X.1538 [CrossRef]
- Bangert, A. (2006). The development of an instrument for assessing online teaching effectiveness. Journal of Educational Computing Research, 35, 227–244. doi:10.2190/B3XP-5K61-7Q07-U443 [CrossRef]
- Bata-Jones, B. & Avery, M. (2004). Teaching pharmacology to graduate nursing students: Evaluation and comparison of web-based and face-to-face methods. Journal of Nursing Education, 43, 185–189.
- Bevil, C. (1991). Program evaluation in nursing education: Creating a meaningful plan. In Garbin, M. (Ed.), Assessing educational outcomes (pp. 53–67). New York, NY: National League for Nursing.
- Billings, D. (2000). A framework for assessing outcomes and practices in Web-based courses in nursing. Journal of Nursing Education, 39, 60–67.
- Billings, D., Skiba, D. & Connors, H. (2005). Best practices in web-based courses: Generational differences across undergraduate and graduate nursing students. Journal of Professional Nursing, 21, 126–133. doi:10.1016/j.profnurs.2005.01.002 [CrossRef]
- Caffarella, R.S. (2002). Planning programs for adult learners: A practical guide for educators, trainers, and staff developers (2nd ed.). San Francisco, CA: Jossey-Bass.
- Cervero, R.M. & Wilson, A.L. (2006). Working the planning table: Negotiating democratically for adult, continuing, and workplace education. San Francisco, CA: Jossey-Bass.
- Chapman, D.D. (2006). Building an evaluation plan for fully online degree programs. Online Journal of Distance Learning Administration. Retrieved from http://www.westga.edu/~distance/ojdla/spring91/chapman91.pdf
- Chickering, A. & Ehrmann, S. (1996, October). Implementing the seven principles: Technology as lever. American Association for Higher Education Bulletin, 3–6. Retrieved from http://www.tltgroup.org/programs/seven.html
- Cragg, B., Dunning, J. & Ellis, J. (2008). Teacher and student behaviors in face-to-face and on-line courses: Dealing with complex concepts. Journal of Distance Education, 22, 115–128.
- Daroszewski, E., Kinser, A. & Lloyd, S. (2004). Online, directed journaling in community health advanced practice nursing clinical education. Journal of Nursing Education, 43, 175–180.
- Edwards, P. (2005). Impact of technology on content and nature of teaching and learning. Nursing Education Perspectives, 26, 344–347.
- Fearing, A. & Riley, M. (2005). Graduate students’ perceptions of online teaching and relationship to preferred learning styles. MEDSURG Nursing, 14, 383–389.
- Fitzpatrick, J., Sanders, J. & Worthen, B. (2011). Program evaluation: Alternative approaches and practical guidelines (4th ed.). Boston, MA: Pearson.
- Forester, J. (1989). Planning in the face of power. Berkeley, CA: University of California Press.
- Green, K. (2009). The 2009 Campus Computing Survey. The Campus Computing Project. Retrieved from http://www.campuscomputing.net/2009-campus-computing-survey
- Huckstadt, A. & Hayes, K. (2005). Evaluation of interactive online courses for advanced practice nurses. Journal of the American Academy of Nurse Practitioners, 17, 85–89. doi:10.1111/j.1041-2972.2005.0015.x [CrossRef]
- Jackson, L.A. (2000). Using the Internet for training and development. In Kossek, E.E. & Block, R.N. (Eds.), Managing human resources in the 21st century: From case concepts to strategic choice (pp. 20.7–20.27). Cincinnati, OH: Southwestern College Press.
- Johnson, J., Posey, L. & Simmens, S. (2005). Faculty and student perceptions of web-based learning: Bringing online educational programs to underserved communities. The American Journal for Nurse Practitioners, 9(4), 9–18.
- Leners, D., Wilson, V. & Sitzman, K. (2007). Twenty-first century doctoral education: Online with a focus on nursing education. Nursing Education Perspectives, 28, 332–336.
- Lindsay, G., Jeffrey, J. & Singh, M. (2009). Paradox of a graduate human science curriculum experienced online: A faculty perspective. The Journal of Continuing Education in Nursing, 40, 181–186. doi:10.3928/00220124-20090401-07 [CrossRef]
- Little, B.B., Passmore, D. & Schullo, S. (2006). Using synchronous software in web-based nursing courses. CIN: Computers, Informatics, Nursing, 24, 317–325. doi:10.1097/00024665-200611000-00005 [CrossRef]
- Mills, A. (2007). Evaluation of online and on-site options for master’s degree and post-master’s certificate programs. Nurse Educator, 32, 73–77. doi:10.1097/01.NNE.0000264326.10297.e7 [CrossRef]
- Newman, A. (2003, February). Measuring success in web-based distance learning. EDUCAUSE Center for Applied Research Bulletin, 4, 1–11.
- Patton, M.Q. (2008). Utilization-focused evaluation. Thousand Oaks, CA: Sage.
- Potempa, K., Stanley, J., Davis, B., Miller, K., Hassett, M. & Pepicello, S. (2001). Survey of distance technology used in AACN member schools. Journal of Professional Nursing, 17, 7–13. doi:10.1053/jpnu.2001.20259 [CrossRef]
- Rossi, P., Lipsey, M.W. & Freeman, H.E. (2004). Evaluation: A systematic approach. Thousand Oaks, CA: Sage.
- Rovai, A. (2003). A practical framework for evaluating online distance education programs. The Internet and Higher Education, 6, 109–124. doi:10.1016/S1096-7516(03)00019-8 [CrossRef]
- Russell, T.L. (1999). The no significant difference phenomenon. Raleigh, NC: North Carolina State University Press.
- Seiler, K. & Billings, D. (2004). Student experiences in web-based nursing courses: Benchmarking best practices. International Journal of Nursing Education Scholarship, 1, 1–12. doi:10.2202/1548-923X.1061 [CrossRef]
- Singh, M., Jeffrey, J. & Lindsay, G. (2008). Isolated learning for caring professionals: Advantages and challenges. The International Journal of Learning, 15, 179–186.
- Story, L., Butts, J.B., Bishop, S.B., Green, L., Johnson, K. & Mattison, H. (2010). Innovative strategies for nursing education program evaluation. Journal of Nursing Education, 49, 351–354. doi:10.3928/01484834-20100217-07 [CrossRef]
- Suhayda, R. & Miller, J. (2006). Optimizing evaluation of nursing education programs. Nurse Educator, 31, 200–206. doi:10.1097/00006223-200609000-00005 [CrossRef]
- Thompson, M. & Irele, M. (2007). Evaluating distance education programs. In Moore, M. (Ed.), Handbook of distance education (pp. 419–450). Mahwah, NJ: Lawrence Erlbaum Associates.
- Tyler, R.W. (1949). Basic principles of curriculum and instruction. Chicago, IL: University of Chicago Press.
- Wills, C. & Stommel, M. (2002). Graduate nursing students’ precourse and postcourse perceptions and preferences concerning completely web-based courses. Journal of Nursing Education, 41, 193–201.
- Woo, M.A. & Kimmick, J.V. (2000). Comparison of internet versus lecture instructional methods for teaching nursing research. Journal of Professional Nursing, 16, 132–139. doi:10.1053/PN.2000.5919 [CrossRef]
Summary of Evaluation Frameworks and Models Used in Online Education Programs
| Author/Title | Stakeholders, Purpose(s), and Evaluation Questions | Context and Boundaries | Approach | Sample and Data Collection Method(s) | Findings | Utilization of Evaluation Results |
|---|---|---|---|---|---|---|
| Lindsay, Jeffrey, & Singh (2009), “Paradox of a Graduate Human Science Curriculum Experienced Online: A Faculty Perspective” | Faculty perspectives; summative, formative, program accountability; questions identified | Web-based graduate MScN program, York University, Ontario, Canada; 2005–2007 | Stufflebeam’s CIPP model; mixed methods | Sample: 9 faculty members; survey, journaling, focus groups | Challenges included how to “know” students in the online environment, increased faculty workload, increased time needed for implementation, need for formal technology training, and ongoing technical support. The article did not share how or whether results were reported to faculty. | Evaluation data resulted in a thesis option under review; offering the same courses face-to-face; Web-based courses meeting face-to-face during the semester; allocating resources for faculty development in technology; and ongoing discussion of online confidentiality, anonymity, and inherent ethical issues. |
| Singh, Jeffrey, & Lindsay (2008), “Isolated Learning for Caring Professionals: Advantages and Challenges” | Students and their perspectives and experiences; summative, formative, program accountability; questions identified | Web-based graduate MScN program, York University, Ontario, Canada; 2005–2007 | Participatory evaluation approach to focus this portion of the evaluation; mixed methods | Sample: 11 graduate nursing students; survey, focus groups, journaling, interviews | Challenges included time management, inconsistent access to the online library, “silence” of peers and teachers, heavy workload, accessing online software, and a disconnect between program and course objectives. The article did not share how or whether results were reported to students. | Evaluation results have been used to further develop a program orientation process, align online course goals with program goals, establish student technologic support, and improve access to online resources, such as the library. |
| Avery, Cohen, & Walker (2008), “Evaluation of an Online Graduate Nursing Curriculum: Examining Standards of Quality” | Faculty implied; formative, to evaluate program quality; no specific questions identified | Entire graduate nursing specialties at a large midwestern U.S. university; 2000–2004 | No evaluation approach identified; theoretically based on Chickering and Ehrmann’s seven principles of good education adapted to technology, nursing and other discipline literature on quality (including Billings’ model), and faculty expertise; mixed methods | Sample: 16 graduate nursing courses/course faculty; peer review, survey, interview, extant data (course evaluations) | Course objectives did not always match learning activities; faculty and students need more technical support; diverse learning styles must be supported; interaction is critical in online courses. Results were shared with faculty individually and collectively. | Patton’s utilization-focused approach was used to promote use of the findings. Best practices for online nursing education were identified, and a quality-in-online-education checklist was developed. Policies were developed and implemented for student technology needs for specific courses and to recommend or require that faculty make explicit connections between course objectives and activities. Revisions were made to the evaluation tool and procedures. |
| Ali, Hodson-Carlton, & Ryan (2002), “Web-based Professional Education for Advanced Practice Nursing: A Consumer Guide for Program Selection” | Students and faculty implied; student perceptions and satisfaction; no specific questions identified | Ball State University graduate program at master’s, post-master’s, and post-RN levels; 1998–2000 | No evaluation approach identified; theoretically based on Chickering and Ehrmann’s seven principles of good education adapted to technology; quantitative method | Sample: 417 nursing students; survey | Course outcomes were being met; participants were satisfied with delivery methods, course design, faculty participation, and feedback. Content was current, challenging, and stimulated critical thinking. The article did not share how or whether results were reported to stakeholders. | Online education was a viable option for nurses seeking graduate education and would be continued. From the evaluation research, a consumer guide for selecting Web-based higher education programs was developed. |
| Mills (2007), “Evaluation of Online and On-site Options for Master’s Degree and Post-Master’s Certificate Programs” | Students, faculty, administrators; summative evaluation; questions implied | MScN and post-master’s certificate programs; 1997–2003 | No evaluation approach identified; theoretical approach based in part on a framework from the EDUCAUSE Center for Applied Research; mixed methods | Sample: 17 courses within the MScN and post-master’s programs; comparative study using archival (secondary) data, end-of-course evaluations, and interviews | Results revealed some differences in student outcomes related to course grades; program effectiveness was positive for Web-based courses. Cost effectiveness of Web-based learning was noted to be difficult to measure precisely. The article did not share how or whether results were reported to stakeholders. | Continue the Web-based program on the basis of program success and organizational effectiveness. |