Journal of Nursing Education

Research Briefs 

Development and Pilot Testing of the Faculty Advisor Evaluation Questionnaire

Elizabeth Harrison, PhD, RN

Abstract

The importance of academic advising has been established in part by its designation as an element in the American Association of Colleges of Nursing’s Standards for Accreditation. In addition, academic advising plays an essential role in students’ development, academic success, satisfaction, recruitment, and retention; therefore, access to valid and reliable evaluation tools is of considerable importance. The purpose of this study was to develop and pilot test the Faculty Advisor Evaluation Questionnaire (FAEQ), which is an instrument developed from qualitative nursing research. The psychometric properties were explored using face and content validity, internal consistency reliability, and principal components factor analysis. The four-factor solution of the resulting 30-item questionnaire accounted for 81% of the variability. Cronbach’s alpha values of the four factors ranged from 0.885 to 0.974. The FAEQ should elicit valid and reliable results, although further testing is needed to validate the findings in a larger and more diverse sample.

Dr. Harrison is Associate Professor, School of Nursing, College of Health, The University of Southern Mississippi, Hattiesburg, Mississippi.

This pilot study was partially funded by a grant from The University of Southern Mississippi. The author thanks Dr. Kolotylo for her assistance with construction of the Faculty Advisor Evaluation Questionnaire, Dr. Harbaugh for editorial assistance with the preparation of the manuscript, and Dr. Johnson for editorial assistance and guidance in the evaluation of the data.

The author received support for travel by Winona State University-MN to present study findings for faculty development purposes.

Address correspondence to Elizabeth Harrison, PhD, RN, Associate Professor, School of Nursing, College of Health, The University of Southern Mississippi, 118 College Drive #5095, Hattiesburg, MS 39406; e-mail: e.harrison@usm.edu.

Received: June 29, 2011
Accepted: November 16, 2011
Posted Online: December 30, 2011

The importance of academic support services, including academic advising services, has been established, in part, by its designation as a key element in the American Association of Colleges of Nursing’s Standards for Accreditation (2009). In addition to this designation, research and scholarly commentary indicate that effective academic advisors play a role in student development (Light, 2001; Pizzolato, 2008; Reinarz & Ehrlich, 2002). Good advising is tied also to students’ academic success (Bahr, 2008; Campbell & Nutt, 2008; Museus & Ravello, 2010) and may influence decisions about and attitudes toward lifelong learning (Hunter & White, 2004; Smith & Allen, 2006; Stickle, 1982). Finally, effective academic advising has been associated with student satisfaction, recruitment, and retention (Elliott & Healy, 2001; Freeman, 2008; Peterson, Wagner, & Lamb, 2001). Therefore, it is important to understand what constitutes good academic advising and to have access to tools that enable students to evaluate the quality of advising.

Advancements in academic advising research have, for the most part, been under the aegis of education and pedagogy; nursing’s contribution to the study of academic advising has been minimal. The purpose of this study was to develop and pilot test the Faculty Advisor Evaluation Questionnaire (FAEQ).

The results of the studies by Harrison (2009a, 2009b), who investigated nursing students’ and nurse faculty’s perceptions of the characteristics of effective academic advisors, were the basis for the development of the FAEQ. Data from these studies were consistent with extant advising research, but the nursing studies revealed additional characteristics of the effective advisor that were unique to nursing (Table 1). The initial 63-item FAEQ was developed based on 10 characteristics of an effective academic advisor as identified in the aforementioned studies.

Table 1: Questions Based on the Characteristics of the Effective Academic Advisor

Development

Five to nine questions were developed to measure each characteristic (Table 1). Items were revised multiple times to ensure that they met criteria for structurally sound and appropriately worded queries. According to Dillman (2007), survey questions should be written in a way that potential respondents would interpret them in the same manner, and they should be willing and able to answer each question accurately. Other principles for writing survey questions include, but are not limited to, avoiding bias, using simple words, being precise, and avoiding hypothetical and double-barreled questions.

Questions were formatted on a six-point Likert-type scale (completely agree, generally agree, neither agree nor disagree, generally disagree, completely disagree, and not applicable), and 12 demographic items were developed. The demographic data included, but were not limited to, questions about the student’s major area of study, type of nursing program, gender, ethnicity, status of advisor (faculty or nonfaculty), and frequency of advising meetings. Finally, sets of instructions for completing the FAEQ and the demographic items were constructed.
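A minimal sketch of how the six response options might be coded for analysis; the coding scheme shown here is an assumption for illustration, not one specified in the article. Treating "not applicable" as missing rather than as a scale point keeps it from distorting item means and reliability estimates.

```python
# Hypothetical numeric coding for the FAEQ's six-point Likert-type scale.
# The 5-to-1 mapping and the handling of "not applicable" are assumptions,
# not taken from the article.
RESPONSE_CODES = {
    "completely agree": 5,
    "generally agree": 4,
    "neither agree nor disagree": 3,
    "generally disagree": 2,
    "completely disagree": 1,
    "not applicable": None,  # excluded from item statistics as missing data
}

def code_responses(responses):
    """Map raw response labels to numeric codes; None marks missing data."""
    return [RESPONSE_CODES[r.lower()] for r in responses]

coded = code_responses(["Completely agree", "Not applicable", "Generally disagree"])
```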

Validity Testing

Validity testing and a reliability pilot study of the FAEQ were conducted in two phases at different universities (referred to here as university A and university B) due to a change in employment by the principal investigator (PI). Institutional review board approval was obtained at university A for validity testing and at university B for the reliability pilot study.

Two nursing students confirmed face validity. Face validity is the extent to which the study participant feels that the questionnaire measures what it is intended to measure (Nunnally & Bernstein, 1994). Although face validity cannot be quantified and is often confused with content validity, it provides helpful evidence when used in conjunction with other forms of evidence (Lynn, 1986; Nunnally & Bernstein, 1994; Rattray & Jones, 2007). Face validity was deemed valuable to the development of the FAEQ because both student and faculty perceptions of effective academic advising were used in its development.

Content validity is concerned with the adequacy of questions to measure the concept of interest, in this case the characteristics of an effective academic advisor. Content validity may be established through literature review, personal reflection, or analytical critique (Higgins & Straub, 2006). For this study, analytical critique was used: seven nurse experts, chosen for their experience as faculty advisors and their willingness to serve, confirmed content validity, and revisions were made to the FAEQ based on their evaluations. All seven panel members were female; one was Black and six were White. Their experience in nursing ranged from 16 to more than 25 years; four members had been academic advisors for 16 to 25 years, two for 2 to 5 years, and one for less than 2 years. Five panel members were master’s prepared, and two held doctorates in nursing. The members were solicited from public and private universities in Florida, Minnesota, Pennsylvania, Mississippi, California, Iowa, and Wisconsin.

The panel members were asked to link each question to an objective or characteristic and evaluate each using a four-point Likert-type scale (1 = totally irrelevant, 2 = somewhat relevant, 3 = quite relevant, 4 = very relevant). Members were provided with instructions that included the study objectives and definitions of major concepts. For example, objective one, “To determine the student’s perception of the advisor’s knowledge of program and university requirements,” reflects the first major concept or characteristic: knowledge. Knowledge was defined as being familiar with facts and a range of information, including, but not limited to, knowledge of students and their development. In addition, panel members were asked to evaluate the adequacy and redundancy of the questionnaire as a whole in a series of yes or no questions.

Packets containing an invitation to participate, the questionnaire, study purpose, objectives, and major concepts, as well as a demographic data collection sheet for the validity experts, were mailed to the participants with a postage-paid return envelope. A reminder letter was mailed to participants several weeks later.

The assessment of the analytical critique, or judgment-quantification phase, in establishing content validity included calculation of the content validity index (CVI) and evaluation of qualitative responses to questions about the instrument’s adequacy and redundancy. The CVI is the proportion of items or questions that receive a rating of 3 = quite relevant or 4 = very relevant (Lynn, 1986). Across the 60 questions, the CVI ranged from 0.43 to 1.00: 29 questions had a CVI of 1.00, and the CVI of the remaining 31 questions ranged from 0.43 to 0.86.
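The item-level CVI described above is a simple proportion and can be sketched directly; the panel ratings below are hypothetical, not the study's data.

```python
# Illustrative sketch (not the article's data): the item-level content
# validity index (CVI) is the proportion of expert ratings of 3 (quite
# relevant) or 4 (very relevant), per Lynn (1986).

def item_cvi(ratings):
    """ratings: one item's expert ratings on the 1-4 relevance scale."""
    relevant = sum(1 for r in ratings if r >= 3)
    return relevant / len(ratings)

# Hypothetical ratings from a seven-member panel for a single item:
panel_ratings = [4, 3, 4, 2, 3, 4, 1]
cvi = item_cvi(panel_ratings)  # 5 of 7 experts rated the item 3 or 4
```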

According to Davis (1992), new instruments should achieve 80% or higher agreement among reviewers. To achieve this level of agreement and reduce the number of items in the FAEQ, 27 questions rated below 0.85 were considered for elimination. Four of the 27 were reworded or retained based on their conceptual or theoretical significance. For example, the item, “My advisor helps me with my career goals,” was retained despite a CVI of 0.57 because academic advising research and scholarly commentary consistently cite the considerable role academic advisors play in helping students to develop career goals (Beasley-Fielstein, 1986; Hunter & White, 2004; Kelley & Lynch, 1991; O’Banion, 1972; Pizzolato, 2008; Shultz, Colton, & Colton, 2001; Walsh, 1979; Yarbrough, 2002).

Ten questions were added based on the evaluation of qualitative responses from the content judges to meet the requirement of adequacy. “I believe that my advisor is committed to promoting my success,” and “My advisor demonstrates the behavior of a good mentor” are examples of questions suggested by the panel members. Several additions and revisions were also made to the demographic data collection sheet based on the content judges’ qualitative responses. “Do you have an individual assigned to you as an academic advisor?” and “Is English your second language?” are examples of demographic questions added or revised based on feedback from panel members. Based on the aforementioned revisions, the FAEQ was reduced from 63 to 50 items.

Reliability Pilot Study

The next step in the development of the FAEQ was to conduct a pilot study to discover and solve problems before implementation of the full study. The questionnaire was recreated in SurveyMonkey (http://www.surveymonkey.com/). Students at a medium-sized public university in the southern United States received a brief invitation to participate in the research and a link to the questionnaire in the university’s student e-news. After accessing the site, students were able to read the formal invitation to participate, which included an explanation of the study, the estimated time to complete, the inclusion criteria, and contact information for the PI and the institutional review board. The inclusion criteria were enrollment full time or part time in a 4-year college or university; age 18 years or older; ability to read, write, and understand English; and willingness to complete the questionnaire. To estimate completion time, two students were timed while taking a 50-item hard copy of the questionnaire, and their times were averaged. The instructions and demographic data section preceded the questionnaire, and completion of the questionnaire signified consent.

Six hundred thirty-three students responded to the invitation, and the responses were evaluated for internal consistency reliability (Cronbach’s alpha) and subjected to factor analysis. Internal consistency reliability refers to the consistency with which a questionnaire measures the concept of interest, in this case the quality of academic advising (Waltz, Strickland, & Lenz, 1991). Cronbach’s alpha should exceed 0.70 for a developing questionnaire and 0.80 for an established questionnaire (Rattray & Jones, 2007). The initial overall reliability of the FAEQ was 0.81, indicating that the questionnaire consistently measures the concept.
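Cronbach's alpha can be computed from its standard formula, alpha = k/(k - 1) * (1 - sum of item variances / variance of total scores). The sketch below uses hypothetical scores, not the study's dataset.

```python
# Illustrative sketch (synthetic scores, not the article's data):
# Cronbach's alpha for k items scored by n respondents.

def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    """items: list of k item-score lists, each of length n (respondents)."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # each respondent's total
    item_var = sum(variance(item) for item in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Hypothetical five-point scores for three items from five respondents:
items = [[5, 4, 4, 3, 5],
         [5, 5, 4, 3, 4],
         [4, 5, 3, 3, 5]]
alpha = cronbach_alpha(items)
```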

In general, a 10:1 ratio (10 respondents for each question) is necessary for factor analysis (Hair, Black, Babin, Anderson, & Tatham, 2006); therefore, the number of responses in this pilot study was sufficient for factor analysis.

A principal components factor analysis with oblimin rotation (chosen because the questions were expected to correlate with one another) was conducted to explore the relationships among the variables. Principal components factor analysis enables the identification of underlying subscales and allows the researcher to identify and remove redundant or unnecessary items. The Kaiser-Meyer-Olkin measure of sampling adequacy indicates whether sufficient numbers of completed questionnaires are available to substantiate further analysis. The Kaiser-Meyer-Olkin measure for the FAEQ was 0.963; according to Kaiser (1974), values >0.90 are considered “marvelous.” Bartlett’s test of sphericity (χ2 = 8583.51, df = 406, p < 0.001) indicated significant correlations among the items and provided support for the factorability of the data.

Two criteria were used to determine how many factors should be extracted. First, the Kaiser-Guttman eigenvalue greater-than-one rule indicated that three factors should be extracted. The second criterion, the scree plot, suggested a four-factor solution. This four-factor solution accounted for 81% of the variability. Items were then assessed based on their individual factor loadings (>0.40) and theoretical significance. Based on this evaluation, the 50-item questionnaire was reduced to 30 items. Table 2 displays the factor loadings for the individual questions of the as yet unnamed factors in the 30-item questionnaire. Cronbach’s alpha values were 0.974, 0.965, 0.885, and 0.927 for factors 1 through 4, respectively, indicating homogeneity among items.
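The eigenvalue-greater-than-one rule and the proportion of variance a solution accounts for both come straight from the eigenvalues of the item correlation matrix. The sketch below uses synthetic two-factor data with assumed loadings, not the FAEQ responses.

```python
# Illustrative sketch (synthetic data): counting factors by the
# Kaiser-Guttman eigenvalue > 1 rule and computing the proportion of
# variance a retained solution explains.
import numpy as np

def kaiser_count_and_variance(data, n_keep):
    R = np.corrcoef(data, rowvar=False)
    eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]  # descending order
    n_kaiser = int((eigvals > 1.0).sum())           # eigenvalue > 1 rule
    explained = eigvals[:n_keep].sum() / eigvals.sum()
    return n_kaiser, explained

# Two latent factors with assumed loadings on eight items, plus noise
rng = np.random.default_rng(1)
factors = rng.normal(size=(500, 2))
loadings = np.array([
    [0.9, 0.8, 0.7, 0.1, 0.1, 0.2, 0.85, 0.75],
    [0.1, 0.2, 0.1, 0.9, 0.8, 0.85, 0.1, 0.2],
])
data = factors @ loadings + 0.4 * rng.normal(size=(500, 8))
n_kaiser, explained = kaiser_count_and_variance(data, n_keep=2)
```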

Table 2: Four-Factor Structure of Measurements for Questionnaire Items

Discussion

The purpose of this study was to develop and pilot test the FAEQ. The questionnaire was developed based on qualitative nursing research that identified faculty and student perceptions of effective academic advisors. The psychometric properties of the FAEQ were explored using face and content validity, internal consistency reliability, and principal components factor analysis. The panel of experts agreed that the questionnaire items were relevant (CVI = 0.43 to 1.00), and the questionnaire was reduced from 63 to 50 items. The CVI of the 50-item questionnaire was not calculated because it included unjudged questions suggested by the content experts.

Internal consistency reliability of the questionnaire was 0.81 based on a pilot study conducted at a medium-sized university using undergraduate students across disciplines. Four factors were extracted with Cronbach’s alphas ranging from 0.885 to 0.974. Pilot study data provide tentative evidence that the 30-item FAEQ should elicit valid and reliable results, adequately measuring the quality of academic advising.

The questions retained in the 30-item solution correspond to the initial 10 characteristics of the effective academic advisor (Harrison, 2009a, 2009b). Knowledge and availability questions appear stronger, whereas only one question related to accountability was retained (Table 1). Accountability was unique to the faculty study (Harrison, 2009a) and was not found in the extant advising research or scholarly commentary.

Conclusion

Further testing is needed to validate the findings of this study in a larger and more diverse sample. Of particular interest to the PI is the separation of disciplinary voices in subsequent studies. Due to a relatively low response rate from nursing majors in the pilot study (6.5%), an in-depth analysis of interdisciplinary differences has been deferred to an ongoing nationwide study. According to McGillin (2009), the academic advising “literature has failed to notice … that the academic disciplines possess language, processes, and world views that are shaped by disciplinary ways of knowing” (p. 4). She continued, “Nowhere is the exchange of cultural information more likely to happen than in mentoring and advising students” (p. 4). It appears probable that different advising perspectives and approaches will be evaluated more accurately by discipline-specific instruments, which in turn enable faculty development and improved advising outcomes. This perspective, in addition to the considerable impact of effective advising, warrants discipline-specific tools and the data they elicit.

References

  • American Association of Colleges of Nursing. (2009). Standards for accreditation of baccalaureate and graduate degree nursing programs. Retrieved from http://www.aacn.nche.edu/Accreditation/pdf/standards09.pdf
  • Bahr, P.R. (2008). Cooling out in the community college: What is the effect of academic advising on students’ chances of success? Research in Higher Education, 49, 701–732. doi:10.1007/s11162-008-9100-0 [CrossRef]
  • Beasley-Fielstein, L.B. (1986). Student perceptions of the developmental advisor-advisee relationship. NACADA Journal, 6, 107–117.
  • Campbell, S.M. & Nutt, C.L. (2008). Academic advising in the new global century: Supporting student engagement and learning outcomes achievement. Peer Review, 10(1), 4–7.
  • Davis, L.L. (1992). Instrument review: Getting the most from a panel of experts. Applied Nursing Research, 5, 194–197. doi:10.1016/S0897-1897(05)80008-4 [CrossRef]
  • Dillman, D.A. (2007). Mail and internet surveys: The tailored design method (2nd ed.). Hoboken, NJ: John Wiley & Sons.
  • Elliott, K.M. & Healy, M.A. (2001). Key factors influencing student satisfaction related to recruitment and retention. Journal of Marketing for Higher Education, 10(4), 1–11. doi:10.1300/J050v10n04_01 [CrossRef]
  • Freeman, L.C. (2008). Establishing effective advising practices to influence student learning and success. Peer Review, 10(1), 12–14.
  • Hair, J.F., Black, W.M., Babin, B.J., Anderson, R.E. & Tatham, R.L. (2006). Multivariate data analysis (6th ed.). Upper Saddle River, NJ: Pearson Prentice Hall.
  • Harrison, E.M. (2009a). Faculty perceptions of academic advising: “I don’t get no respect.” Nursing Education Perspectives, 30, 299–233.
  • Harrison, E.M. (2009b). What constitutes good academic advising? Nursing students’ perceptions of academic advising. Journal of Nursing Education, 48, 361–366. doi:10.3928/01484834-20090615-02 [CrossRef]
  • Higgins, P.A. & Straub, A.J. (2006). Understanding the error of our ways: Mapping the concepts of validity and reliability. Nursing Outlook, 54, 32–29. doi:10.1016/j.outlook.2004.12.004 [CrossRef]
  • Hunter, M.S. & White, E.R. (2004). Could fixing academic advising fix higher education? About Campus, 9(1), 20–25.
  • Kaiser, H.F. (1974). An index of factorial simplicity. Psychometrika, 39, 31–36. doi:10.1007/BF02291575 [CrossRef]
  • Kelley, K.N. & Lynch, M.J. (1991). Factors students use when evaluating advisors. NACADA Journal, 11, 26–33.
  • Light, R.J. (2001). The power of good advice for students. Chronicle of Higher Education. Retrieved from http://chronicle.com/article/The-Power-of-Good-Advice-for/9193/
  • Lynn, M.R. (1986). Determining and quantification of content validity. Nursing Research, 35, 382–385. doi:10.1097/00006199-198611000-00017 [CrossRef]
  • McGillin, V. (2009). Are there disciplinary voices in academic advising? Academic Advising Today, 32(4), 4, 20.
  • Museus, S.D. & Ravello, J.N. (2010). Characteristics of academic advising that contribute to racial and ethnic minority student success at predominantly white institutions. NACADA Journal, 30, 47–58.
  • Nunnally, J.C. & Bernstein, I.H. (1994). Psychometric theory (3rd ed.). New York, NY: McGraw-Hill.
  • O’Banion, T. (1972). Academic advising model. Junior College Journal, 42, 66–68.
  • Peterson, M., Wagner, J.A. & Lamb, C.W. (2001). The role of advising in non-returning students’ perceptions of their university. Journal of Marketing for Higher Education, 10(3), 45–59. doi:10.1300/J050v10n03_03 [CrossRef]
  • Pizzolato, J.E. (2008). Advisor, teacher, partner: Using the learning partnerships model to reshape academic advising. About Campus, 13(1), 25. doi:10.1002/abc.243 [CrossRef]
  • Rattray, J. & Jones, M.C. (2007). Essential elements of questionnaire design and development. Journal of Clinical Nursing, 16, 234–243.
  • Reinarz, A.G. & Ehrlich, N.J. (2002). Assessment of academic advising: A cross-sectional study. NACADA Journal, 22, 50–65.
  • Shultz, E.L., Colton, G.M. & Colton, C. (2001). The adventor program: Advisement and mentoring for students of color in higher education. Journal of Humanistic Counseling, Education and Development, 40, 208–218. doi:10.1002/j.2164-490X.2001.tb00118.x [CrossRef]
  • Smith, C.L. & Allen, J.M. (2006). Essential functions of academic advising: What students want and get. NACADA Journal, 26, 56–66.
  • Stickle, F. (1982). Faculty and student perceptions of faculty advising effectiveness. Journal of College Student Personnel, 23, 262–265.
  • Walsh, E.M. (1979). Revitalizing academic advisement. Personnel and Guidance Journal, 57, 446–449. doi:10.1002/j.2164-4918.1979.tb05433.x [CrossRef]
  • Waltz, C.F., Strickland, O.L. & Lenz, E.R. (1991). Measurement in nursing research (2nd ed.). Philadelphia, PA: F.A. Davis.
  • Yarbrough, D. (2002). The engagement model for effective academic advising with undergraduate college students and student organizations. Journal of Humanistic Counseling, Education, and Development, 41, 61–68. doi:10.1002/j.2164-490X.2002.tb00130.x [CrossRef]

Questions Based on the Characteristics of the Effective Academic Advisor

Characteristic                     Questions Developed    Questions Retained
Knowledgeable                               9                      6
Available                                   6                      5
Organized                                   5                      2
Accountable^a                               6                      1
Approachable                                5                      2
Exemplary communication skills              8                      3
Advocate                                    6                      3
Fosters/nurtures                            8                      2
Possesses moral integrity^a                 5                      4
Authenticity^a                              5                      2

Four-Factor Structure of Measurements for Questionnaire Items

Item    Individual Factor Loading

Factor 1
  1. My advisor makes me feel welcome.    0.896
  2. My advisor has a pleasant personality.    0.911
  3. My advisor is kind to me.    0.903
  4. My advisor is honest with me, even if he/she knows that I may not agree with him/her.    0.819
  5. I can tell that my advisor respects me by his/her tone of voice.    0.926
  6. I can tell that my advisor respects me by his/her manner of speaking.    0.926
  7. My advisor is easy to talk to.    0.887
  8. I can tell that my advisor is listening to me because he/she uses direct eye contact when speaking to me.    0.851
  9. My advisor is a good model for the profession in which he/she teaches.    0.871
Factor 2
  1. My advisor is knowledgeable about the courses I need to successfully complete in order to graduate.    0.791
  2. My advisor is familiar with the policies of the university that are relevant to my plan of study.    0.784
  3. My advisor informs me of changes in university policies that affect me.    0.834
  4. My advisor is knowledgeable about the policies and progression plan for my major.    0.857
  5. My advisor informs me of policy changes in my major that affect me.    0.849
  6. My advisor helps me plan my class schedule so that I am able to incorporate courses I need in order to graduate.    0.862
  7. My advisor helps me to develop my present educational goals.    0.856
  8. My advisor helps me plan my career goals.    0.792
  9. My advisor is prepared for our advising sessions.    0.768
  10. I trust my advisor’s advice.    0.879
  11. My advisor helps me to select courses that enable me to meet my goals for the future.    0.871
  12. My advisor is confident in his/her abilities as an advisor.    0.782
Factor 3
  1. My advisor’s contact information is easy to locate.    0.721
  2. My advisor’s office hours are posted.    0.614
  3. If my advisor’s office hours are not convenient for me, my advisor arranges a mutually convenient time for us to meet.    0.772
  4. My advisor responds to my e-mails without delay.    0.725
  5. My advisor is able to accommodate me when I have an urgent situation.    0.797
Factor 4
  1. My advisor follows up with me after making a referral.    0.766
  2. My advisor supports my academic achievements (i.e., he/she writes letters of support for scholarships).    0.817
  3. My advisor intervenes on my behalf, when needed.    0.962
  4. My advisor advocates for me in situations that involve my welfare.    0.875
10.3928/01484834-20111230-04
