Developments in technology, combined with the explosion of online course offerings, have changed the face of higher education. One area affected by these changes is instructor feedback. Historically, providing students with feedback on writing assignments was limited to the instructor’s written comments. However, with courses relying more heavily on online technologies, along with increases in class size and faculty workload, it is essential to find efficient alternatives to written feedback that integrate technology, while offering learners sufficient detail to facilitate improvement.
Integrating technology with feedback practices provides students with the opportunity to gain exposure and develop competencies related to varied technology, which is congruent with the baccalaureate Essentials document (American Association of Colleges of Nursing, 2008). The Essentials emphasizes that nursing graduates should have basic competence in the use of computers and information systems (American Association of Colleges of Nursing, 2008).
Individual learning style, or the manner in which learners receive, process, understand, store, and recall new information, may influence how well students understand feedback (DeYoung, 2008). Fleming (1995) identified four learning style preferences: visual, aural, read/write, and kinesthetic. Written annotated feedback provided by writing notes in the margins of papers or in the comments section of an electronic document fits within the read/write preference. In contrast, the aural mode denotes a preference for hearing the information. Leite, Svinicki, and Shi (2010) found that students were nearly as likely to report a preference for the aural mode (24.9%) as for the read/write mode (26.9%), suggesting that nearly equal numbers of students prefer to hear information as to read it.
The current pilot study was conducted to explore the use of embedded audio feedback (EAF) for written assignments. The goal was to provide student feedback that met their learning needs, while incorporating innovative technology and maximizing efficiency for faculty. This study compared student and faculty perceptions of EAF with traditional written feedback (WF) on a series of writing assignments in a required nursing informatics course.
Although instructor feedback is a key element in student learning, particularly the development of writing skills, the role and efficacy of feedback at the university level has not been extensively researched (Bailey & Garner, 2010; Ball, 2010; Carless, 2006). A review of the literature related to written feedback (Ball, 2010) found little evidence to support that it facilitates learning. Further, faculty and students differ in their perceptions about feedback. Faculty often believe that the feedback they provide is more detailed and useful than do the students (Carless, 2006). In addition, faculty perceive that students value the grade on an assignment over the potential learning from considering the recommendations for improvement (Bailey & Garner, 2010; Carless, 2006).
Audio feedback may foster connectedness and provide better guidance. Wood, Moskovitz, and Valiga (2011) found that students receiving audio feedback understood the grader’s comments better, were more motivated to improve, and felt more connected to the course. Rodway-Dyer, Knight, and Dunne (2011) found that when they paired audio feedback with traditional written comments, students understood the feedback and felt they would incorporate it into future assignments. Similarly, Silva (2012) provided audio feedback while highlighting sections of the written work and found that students viewed this method positively, considering it more personable than written comments alone.
The pilot study was conducted using a quasi-experimental, cross-over, posttest only design. The authors obtained institutional review board exemption and maintained student confidentiality throughout the project. The graded assignments in this pilot study included a series of three short literature review papers and a fourth paper that synthesized the previous papers and incorporated feedback from the first three. Four members of the research team (J.K.G., M.H., N.M.P., N.S.), subsequently referred to as “graders,” provided feedback, using a rubric that evaluated content, writing style, adequacy of paraphrasing, American Psychological Association formatting, and grammar.
Next, each of the 87 students in the course was assigned to one of the four graders, who graded all four written assignments for that student. The graders randomly assigned the students to receive WF or EAF on the first assignment, and then alternated between the two methods for the remaining assignments. These graders provided WF by inserting comments in the students’ papers electronically and provided EAF by using the iPad® application, iAnnotate® PDF. This application allowed the graders to record audio comments of up to 1 minute in length and to insert speaker icons indicating the specific content in the paper to which each comment referred.
Interrater reliability was established by having all graders evaluate one randomly selected paper and provide feedback. They assigned comparable grades and made similar comments, which supported interrater reliability.
The graders used a consistent approach for grading papers. Students submitted all papers in Microsoft® Word format. If providing EAF, graders converted the papers to PDF format, transferred them to Dropbox, downloaded them to an iPad, and used iAnnotate PDF to record comments. The graders then returned the papers to the students.
The graders instructed students on how to access audio comments in an iAnnotate PDF document prior to grading with EAF and directed the students to ask for help if they could not hear the graders’ comments. No students asked for help with the application at this point.
At the end of the course, the students completed a Feedback Preference Survey, which the authors created for this study. The 13-question survey incorporated a variety of questions, including multiple choice, Likert scale, and short answer. Questions addressed students’ previous experience with EAF and their perceptions of WF and EAF. The students rated the extent to which they agreed with the statement, “I prefer audiovisual feedback to written feedback on my assignments.”
In addition, the authors used partial results from the Assessment Technologies Institute (ATI) Self-Assessment Inventory, which 60 of the participants had taken as freshmen. This 195-question inventory includes seven questions on auditory learning style and eight questions on visual learning style. Students receive a score for each style subscale. For the purposes of the current study, the authors determined each student’s learning style preference to be either auditory or visual according to the higher subscale score on the Self-Assessment Inventory. The ATI developers reported that validity rests on the learning style subscales paralleling elements of previous surveys. Furthermore, the coefficient alpha for the whole inventory is 0.9144; however, the values for the auditory and visual subsections were lower (auditory, 0.31; visual, 0.47). ATI used the Spearman-Brown Prophecy Formula to predict the reliability values that would result if more items were added to the inventory (auditory, 0.74; visual, 0.84) (M. Dunham, personal communication, April 29, 2014).
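As a side note on the Spearman-Brown prediction mentioned above, the formula can be applied directly to the reported alphas. The following Python sketch is an illustration added here, not part of the study; the personal communication did not report the lengthening factor ATI assumed, so the sketch simply computes the factor implied by the reported and predicted reliabilities.

```python
def spearman_brown(rho, k):
    """Predicted reliability of a scale lengthened by factor k (Spearman-Brown)."""
    return k * rho / (1 + (k - 1) * rho)

def lengthening_factor(rho, rho_target):
    """Factor k by which a scale must grow for reliability rho to reach rho_target."""
    return rho_target * (1 - rho) / (rho * (1 - rho_target))

# Subscale alphas and ATI's predicted values, as reported in the text
for name, rho, target in [("auditory", 0.31, 0.74), ("visual", 0.47, 0.84)]:
    k = lengthening_factor(rho, target)
    print(f"{name}: about {k:.1f} times the items would be needed "
          f"to reach alpha = {target}")
```

For both subscales the implied factor is roughly sixfold, which gives a sense of how far the seven- and eight-item subscales fall short of the predicted values.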
The authors also used data about students’ level of experience working with computers and their stress from computer tasks collected from a technology preference survey created by the informatics faculty. Students completed this survey at the start of the informatics course. This survey also collected information on whether students used PC or Mac computers, which could have affected the ease of accessing EAF. No information about the validity and reliability of this survey is available.
After grading was completed, the graders met with the rest of the research team to discuss the seven survey questions in focus-group style. These questions addressed graders’ perceptions of EAF and WF, including the ease of use, timeliness, and ability to meet learner needs.
Of the 87 students who completed the informatics course, 85 (98%) completed the Feedback Preference Survey. All of the students were traditional college sophomores or juniors, aged 19 to 22 years. Five were male and 82 were female. The Technology Preference Survey showed that 51 (60%) students were Mac users and 34 (40%) were PC users. Experience with computers was rated as “lots” by 28 (33%), “moderate” by 56 (66%), and “very little” by 1 (1%). Also, 24 (28%) indicated that they were “very stressed” by computer tasks, 56 (66%) indicated they were “mildly stressed,” and 5 (6%) indicated they were “not at all stressed.”
The Feedback Preference Survey revealed that 10 students (12%) received audio feedback in a previous course. Only five students (6%) indicated that they had learned best in the past by hearing instructions. They rated their ability to understand WF as poor (0%), fair (5%), good (33%), very good (45%), or excellent (18%) and their ability to understand EAF as poor (9%), fair (11%), good (25%), very good (40%), or excellent (15%). When asked which type of feedback they were better able to understand, 39 (46%) chose EAF and 46 (54%) chose WF. Fifty-one students (60%) indicated that WF provided the most useful guidance for learning, whereas 34 students (40%) indicated EAF as being the most useful. Similarly, 55 students (65%) reported that they were more likely to incorporate WF in future assignments, compared with 30 students (35%) who identified EAF.
Responses to the statement “I prefer audiovisual feedback [EAF] to written feedback on my assignments” were: strongly agree (n = 6, 7%), agree (n = 24, 28%), undecided (n = 24, 28%), disagree (n = 18, 21%), strongly disagree (n = 12, 14%), and unanswered (n = 1, 1%). To minimize the number of categories required for chi-square analysis, the authors collapsed strongly agree and agree into agree, and strongly disagree and disagree into disagree; the resulting variable (preference for EAF) was the basis for exploring relationships with the other variables.
Analysis revealed no statistically significant relationships between preference for EAF and learning style (χ2 = 1.4, p = 0.49), grader (χ2 = 7.71, p = 0.26), or the type of feedback students received first (χ2 = 3.3, p = 0.19). In addition, no statistically significant relationships were noted between preference for EAF and the students’ operating system (χ2 = 0.95, p = 0.62), experience with computers (χ2 = 2.12, p = 0.35), and stress from computers (χ2 = 3.57, p = 0.47).
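The collapse-then-test procedure described above can be sketched in a few lines of Python. The counts below are invented for illustration only (the article does not report its contingency tables), and the p value computation is omitted; only the Pearson statistic and degrees of freedom are shown.

```python
# Hypothetical sketch: collapse five-point Likert responses into
# agree / undecided / disagree, then compute a Pearson chi-square
# statistic for a cross-tabulation with another variable.
from collections import Counter

COLLAPSE = {
    "strongly agree": "agree", "agree": "agree",
    "undecided": "undecided",
    "disagree": "disagree", "strongly disagree": "disagree",
}

responses = ["strongly agree", "agree", "undecided",
             "disagree", "strongly disagree", "agree"]
collapsed = Counter(COLLAPSE[r] for r in responses)

def chi_square_statistic(table):
    """Pearson chi-square statistic for an r x c contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat

# Invented 3 x 2 table: EAF preference (rows) by operating system (columns)
table = [[14, 16],   # agree:     Mac, PC
         [15, 9],    # undecided: Mac, PC
         [22, 8]]    # disagree:  Mac, PC
chi2 = chi_square_statistic(table)
dof = (len(table) - 1) * (len(table[0]) - 1)   # 2 for a 3 x 2 table
```

Collapsing keeps the expected cell counts large enough for the chi-square test to be appropriate, which is the rationale the study gives for reducing the five response options to three.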
Qualitative analysis of students’ comments on the survey revealed three positive themes regarding EAF. One theme was more depth in feedback, as evidenced by comments such as, “I feel that I understood what was being explained way better than just reading it.” Other comments, such as, “It was a lot more personal and gave me motivation to go and change things about my paper,” supported the theme of more personal feedback. Finally, students expressed better understanding of the needed changes by comments such as, “I could go along with what the instructions were just like I would if we were in class, and the grader giving me actual examples on how to make edits allowed feedback to be more specific.”
Conversely, some students’ comments indicated negative themes: access issues and technological difficulties, time commitment, lack of ease of use, and lack of preparation for using this technology. Students’ comments included “I did not like it…because if you didn’t understand part of it the first time you listened to it, you had to listen to the whole thing all over again,” which was indicative of students’ concern for the time involved. The technological issues were evident in comments such as “Confusing to learn how to access at the beginning.”
As noted, more than 25% of the students expressed no preference for either method, and some indicated wanting a combination of methods. One student commented, “It would be nice to have both audiovisual feedback as well as written feedback.”
During the focus group, graders voiced mixed feelings about using EAF. A positive theme that emerged was that EAF increased their ability to provide more personal, detailed feedback. As one grader said, “I felt that I could, with intonation, say some things that softened my comments.” However, three negative themes emerged from this discussion: the technology was cumbersome to learn, the grading process was less flexible, and EAF failed to increase interaction with students. Graders’ comments related to these themes included “Very steep learning curve related to all of the steps with using the technology”; “I found that it was a problem when I got interrupted in the middle, which frequently happened, and then had to go back and see what it was I’d said already”; and “I don’t think I got any more questions or responses.” Graders indicated that they did not feel using EAF saved time over using WF.
The current study provided some insight into the feedback preferences of students and faculty by comparing WF with EAF. The preferences expressed by the students did not indicate a significant difference between the two grading modalities as had been anticipated. However, of the 70% of the students who indicated a preference, half preferred EAF. This may indicate a need to give students a choice of feedback modality and to study how individualizing the feedback method affects learning outcomes.
The students commented positively about hearing the instructor’s voice during the feedback and about feeling that EAF promoted a more personal approach, which was consistent with findings in previous studies (Rodway-Dyer et al., 2011; Silva, 2012; Wood et al., 2011). The students’ positive comments also indicated that they felt EAF helped to promote connectedness between themselves and their graders, although the graders did not share this perception. Many of the negative comments about EAF related to problems with the technology, as opposed to the pedagogy. Both students and graders indicated a need for more training on the use of EAF. In addition, students using Mac computers had to download additional software (Adobe® Reader®) to access the EAF comments. Without installing this program, students with Macs could not hear the EAF. However, some students waited until the final survey to inform their grader of this problem. In future studies, a more interactive process in which students reply to comments may prevent this problem. This may also promote a sense of connectedness. Future research should consider EAF using a similar, but more user-friendly, technology.
Although the literature suggested that learning style would predict preference for feedback methods (Leite et al., 2010), this pilot study did not support that relationship. One factor might have been that the students completed the learning style assessment in their freshman year, rather than specifically for this study. Consistent with this concern, Fleming, McKee, and Huntley-Moore (2011) found that nursing students’ learning styles varied significantly over time. In addition, the ATI instrument used in the current study was intended for self-assessment, rather than research, and its psychometrics were uncertain. Future studies should include more rigor in the selection and timing of the learning style assessment.
The current study did not measure student learning or analyze WF and EAF for improved student performance on the assignments. Because of the study design, all students received alternating EAF and WF, making it difficult to determine how each method may have influenced performance. Analyzing student performance in a future study would provide additional information that may support one method over the other.
This study compared faculty and student perceptions of WF and EAF. Although the students’ preference for EAF was not statistically significant, half of the students who indicated a preference did prefer EAF. As students and faculty become more familiar with EAF, it may become a preferred method. Faculty should continue to explore more effective and individualized methods for feedback.
- American Association of Colleges of Nursing. (2008). The essentials of baccalaureate education for professional nursing practice. Retrieved from http://www.aacn.nche.edu/education-resources/baccessentials08.pdf
- Bailey, R. & Garner, M. (2010). Is the feedback in higher education assessment worth the paper it is written on? Teachers’ reflections on their practices. Teaching in Higher Education, 15, 187–198. doi:10.1080/13562511003620019 [CrossRef]
- Ball, E.C. (2010). Annotation an effective device for student feedback: A critical review of the literature. Nurse Education in Practice, 10, 138–143. doi:10.1016/j.nepr.2009.05.003 [CrossRef]
- Carless, D. (2006). Differing perceptions in the feedback process. Studies in Higher Education, 31, 219–233. doi:10.1080/03075070600572132 [CrossRef]
- DeYoung, S. (2008). Teaching strategies for nurse educators (2nd ed.). Upper Saddle River, NJ: Prentice Hall.
- Fleming, N.D. (1995). I’m different; not dumb. Modes of presentation (VARK) in the tertiary classroom. In Zelmer, A. (Ed.), Research and development in higher education, proceedings of the 1995 Annual Conference of the Higher Education and Research Development Society of Australasia (HERDSA), HERDSA, 18, 308–313.
- Fleming, S., McKee, G. & Huntley-Moore, S. (2011). Undergraduate nursing students’ learning styles: A longitudinal study. Nurse Education Today, 31, 444–449. doi:10.1016/j.nedt.2010.08.005 [CrossRef]
- Leite, W.L., Svinicki, M. & Shi, Y. (2010). Attempted validation of the scores of the VARK: Learning styles inventory with multitrait-multimethod confirmatory factor analysis models. Educational and Psychological Measurement, 70, 323–339. doi:10.1177/0013164409344507 [CrossRef]
- Rodway-Dyer, S., Knight, J. & Dunne, E. (2011). A case study on audio feedback with geography undergraduates. Journal of Geography in Higher Education, 35, 217–231. doi:10.1080/03098265.2010.524197 [CrossRef]
- Silva, M.L. (2012). Camtasia in the classroom: Student attitudes and preferences for video commentary or Microsoft word comments during the revision process. Computers & Composition, 29, 1–22. doi:10.1016/j.compcom.2011.12.001 [CrossRef]
- Wood, K.A., Moskovitz, C. & Valiga, T.M. (2011). Audio feedback for student writing in online nursing courses: Exploring student and instructor reactions. Journal of Nursing Education, 50, 540–543. doi:10.3928/01484834-20110616-04 [CrossRef]