Journal of Nursing Education

Major Article 

A Validity Study of the Interprofessional Collaborative Competency Attainment Survey: An Interprofessional Collaborative Competency Measure

Abstract

Background:

In health care, there is a shift toward competency assessment, including in interprofessional collaboration and education. The Interprofessional Collaborative Competency Attainment Survey (ICCAS) has been designed to assess self-reported change in interprofessional competency.

Method:

The current study collects validity evidence for the ICCAS by replicating and expanding previous research, examining internal structure, item functioning, concurrent validity, response process, and consequential validity, including theoretical interpretation of the instrument's application and outcomes.

Results:

The ICCAS shows good reliability, a single-factor structure, adequate item discrimination, and a moderate concurrent validity. Insight was gained to response process and potential consequences that lend caution to the interpretation of ICCAS results dependent on learner populations.

Conclusion:

The ICCAS has shown stability, making it a potentially useful instrument in assessing self-reported competency but one that should be applied over multiple time points with an awareness of the specific characteristics and knowledge of the sample. [J Nurs Educ. 2019;58(8):454–462.]


Health professional education is not a single course or experience, but rather a continuing process of improvement toward competency requiring diverse and continued assessment (McClelland, 1973). The most apparent implication of this is the need for multiple methods of assessment administered longitudinally. Currently in interprofessional education (IPE) assessment, there is a trend toward assessing short-term impact (Reeves, Palaganas, & Zierler, 2017), with an overreliance placed on interprofessional attitude measures (Batteson & Garber, 2018; Blue, Chesluk, Conforti, & Holmboe, 2015). An abundance of attitude measures is available (Gillan, Lovrics, Halpern, Wiljer, & Harnett, 2011; Shrader, Farland, Danielson, Sicat, & Umland, 2017), most of questionable validity (Cox, 2015; Mahler, Berger, & Reeves, 2015; Oates & Davidson, 2015; Reeves et al., 2011; Simmons, Wagner, & Reeves, 2016). The development of valid and efficacious attitude measures seems to have hit an impasse, and the failures of attitude assessment indicate a necessary shift in assessment (King & Violato, 2018; Violato & King, 2018). The way forward appears to be through measures focused on competency development, which aligns with the shift toward competency-based medical education (CBME) (Holmboe, 2015). One measure that appears promising in this endeavor is the Interprofessional Collaborative Competency Attainment Survey (ICCAS).

The ICCAS was designed to evaluate self-reported competency for interprofessional collaboration (IPC) based on behaviorally related constructs (Archibald, Trumpower, & MacDonald, 2014; MacDonald et al., 2010; Schmitz et al., 2017). The items are intended to reflect the Canadian Interprofessional Health Collaborative (CIHC) Competencies Framework (Archibald et al., 2014; Canadian Interprofessional Health Collaborative, 2010) and have been determined to be sufficiently aligned with the four domains of the Interprofessional Education Collaborative (IPEC, 2016) core competencies (Schmitz et al., 2017). The ICCAS represents an attempt to move away from the use of imprecise attitudinal measures in interprofessional assessment toward a more precise instrument for measuring interprofessional competency development (MacDonald et al., 2010).

In the process of validating the ICCAS, two studies have been conducted. The initial validity study was performed with 584 students and clinicians from Canada and New Zealand registered in IPE programs (Archibald et al., 2014). A second study, conducted at the University of Minnesota (Schmitz et al., 2017), continued the validation process and refinement of the ICCAS with 785 participants enrolled in the course Foundations of Interprofessional Communication and Collaboration. Both studies identified a single-factor structure for the ICCAS, comparable reliability values, and similar effect sizes for change in ICCAS item scores.

The development of validity for a measure consists of constructing a validity argument based on accumulated evidence (Kane, 1992, 2016) rather than relying on a single study. A statement from the Institute of Medicine, outlined by Schmitz et al. (2017), supports the validity evidence approach, holding that the appropriate tack for the advancement of the field of IPE/IPC regarding instrument development is:

[A] coordinated series of well-designed studies of the association between interprofessional education and collaborative behavior, including teamwork and performance in practice. These studies should be focused on developing broad consensus on how to measure interprofessional collaboration effectively across a range of learning environments, patient populations, and practice settings.

The current research seeks to continue the process of accruing validity evidence for the ICCAS as an instrument in interprofessional evaluation by examining internal structure, reliability, item functioning, concurrent validity, response process, and consequences (Cook, 2014; Kane, 1992, 2016).

For the current study, the ICCAS was administered after the Interprofessional Learning Pathway Launch at the University of Alberta. The launch is a 3-hour experiential learning session that introduces first-year students to interprofessional practice concepts and core competencies (King et al., 2017; University of Alberta: Health Sciences Education and Research Commons, 2018). The current data represent the first use of the ICCAS at the University of Alberta. The purpose of the current study was twofold:

  • To continue to develop validity evidence for the ICCAS and expand understanding of internal structure, reliability, item functioning, and concurrent validity of the ICCAS as a part of an introductory IPE experience with an aim toward longitudinal assessment.
  • To determine what insights into the population and their experience in early IPE can be obtained by using the ICCAS, and how this affects the response process and the consequential validity of using the ICCAS with this population.

Method

Research Design

The ICCAS was chosen because it aligns with the shift toward CBME, the CIHC framework, and the IPEC core competencies. The close alignment between the current sample and that of Schmitz et al. (2017), together with the shared rationale for using the ICCAS, presented an optimal continuation in the collection of validity evidence for the instrument (Cook, 2014; Cook, Brydges, Ginsburg, & Hatala, 2015; Downing, 2003). The ICCAS was administered to participants after the launch. As in the previous studies (Archibald et al., 2014; Schmitz et al., 2017), the ICCAS was administered in a retrospective pretest–posttest (RPP) design. The RPP design gives participants a standard frame of reference for comparison at two time points and is often used in program evaluation for determining program impact (Chang & Little, 2018). As in the Schmitz et al. (2017) study, a 5-point scale was used, where 1 = poor, 2 = fair, 3 = good, 4 = very good, and 5 = excellent. To assess concurrent validity, a transitional item for overall ability was also included (Feinstein, 1987; Schmitz et al., 2017). The transitional item was:

Compared to the time before the launch, would you say your ability to collaborate interprofessionally is… (Select one): 1 = much worse now; 2 = somewhat worse now; 3 = about the same; 4 = somewhat better now; and 5 = much better now. This item was adapted from previous work, in which it was reverse scored.

In addition, another transitional item was used:

Overall, the launch was an excellent IP learning experience (Please indicate): 1 = strongly disagree; 2 = disagree; 3 = somewhat disagree; 4 = neither agree nor disagree; 5 = somewhat agree; 6 = agree; 7 = strongly agree.

Sample

All students participating in the launch were in the first few weeks of their health professional programs. Attendance at the launch was required as part of students' curricula; however, completion of the ICCAS was voluntary. Table 1 lists the programs with students participating in the launch. At the end of the launch, students gathered in a lecture theatre for a debrief of the experience. Before the debrief began, students were provided a link to the ICCAS, which was hosted online using the survey tool Qualtrics®. Ethics approval was granted by Human Research Ethics Board 1 at the University of Alberta.

Table 1: Demographic Information and ICCAS Scores

Analysis

Data were downloaded from Qualtrics and analyzed using jamovi, a point-and-click interface for the R programming language. As in the Minnesota study, pre–post ICCAS effect sizes were calculated using Cohen's d, applying the same guidelines for interpreting effect size as Schmitz et al. (2017): large effects were greater than 0.8, moderate effects between 0.50 and 0.79, and small effects less than 0.5 (Cohen, 1988). The change on each item across the sample was correlated with the two transitional items. An exploratory factor analysis (EFA) was performed for both the pre- and posttest measures; a fixed number of three factors was initially tested based on the indication of a potential three-factor solution by Schmitz et al. (2017) and Archibald et al. (2014). For both the pre- and postmeasures, the Kaiser-Meyer-Olkin measure of sampling adequacy was .97, indicating the data were factorable, and Bartlett's test of sphericity was significant (p < .001) for both tests, indicating sufficient correlation between the variables for analysis (Meyers, Gamst, & Guarino, 2013). Varimax rotation was used. Finally, reliability scores were calculated using classical test theory, and an item analysis was performed to determine the discrimination and difficulty of each item on the pre- and posttest. Item discrimination is a point-biserial correlation coefficient ranging from −1.00 to 1.00; discrimination values ≤.20 indicate an item is not functioning well (DeMars, 2018). Missing data were determined to be missing completely at random and were addressed using mean substitution (Tabachnick & Fidell, 2013); alternative imputation methods produced no differences in results.
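
To make these computations concrete, the following is a minimal sketch in Python. The published analyses were run in jamovi/R; the array names, and the use of the pooled SD as the standardizer for d, are assumptions for illustration rather than the authors' documented procedure.

```python
import numpy as np

def cohens_d(pre: np.ndarray, post: np.ndarray) -> float:
    """Pre-post effect size. Standardizer: pooled SD of the two ratings
    (one common choice; the exact denominator used in the published
    analyses is not specified)."""
    pooled_sd = np.sqrt((pre.var(ddof=1) + post.var(ddof=1)) / 2)
    return (post.mean() - pre.mean()) / pooled_sd

def effect_label(d: float) -> str:
    """Cohen's (1988) guidelines as applied in the study."""
    if d >= 0.8:
        return "large"
    return "moderate" if d >= 0.5 else "small"

def cronbach_alpha(items: np.ndarray) -> float:
    """Classical-test-theory alpha for an (n_respondents, n_items) matrix."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def item_discrimination(items: np.ndarray) -> np.ndarray:
    """Corrected item-total correlations; values <= .20 would flag a
    poorly functioning item (DeMars, 2018)."""
    rest = items.sum(axis=1, keepdims=True) - items  # total score minus the item
    return np.array([np.corrcoef(items[:, j], rest[:, j])[0, 1]
                     for j in range(items.shape[1])])
```

Given respondents-by-items matrices for the retrospective pre- and posttest ratings, these functions compute the kinds of statistics reported in Tables 1 and 2.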

Results

Of the 1,045 students who participated in the launch, 1,014 began the ICCAS survey and 991 completed it (Table 1). Large effect sizes were found for 16 items (Table 2). Correlations with transitional item 1 ranged from .15 to .26, whereas correlations with transitional item 2 ranged from .05 to .17 (Table 3). For the EFA, the posttest data were first extracted with a fixed number of three factors (Table 4). The resultant eigenvalues indicated a single-factor solution, which was supported by examination of the scree plot and by numerous cross-loadings among the factor loadings. A single-factor solution was found using eigenvalue determination, fixed-number extraction, and parallel analysis; the three extraction techniques, along with different rotations, did not produce appreciably different results. The same process was applied to the pretest data, and again a single-factor solution was suggested. The analysis of the internal structure of the ICCAS using EFA therefore indicates a single factor. Cronbach's alpha was .95 for the posttest and .97 for the pretest, showing a strong observed score to true score relationship (DeMars, 2018; Traub & Rowley, 1991). All items on the pre- and posttest of the ICCAS showed good discrimination (>.20), indicating the items were able to separate people of high and low self-perceived competency (Ebel & Frisbie, 1991). Difficulty scores represent the average item rating; overall, participants shifted from rating themselves as good (3.41) to very good (4.23) on the 5-point scale.
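
For readers who wish to replicate the factor-structure checks described above, the sketch below uses the third-party Python factor_analyzer package as a stand-in for the jamovi/R routines used in the study; the responses DataFrame is hypothetical.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity,
    calculate_kmo,
)

def efa_checks(responses: pd.DataFrame, n_factors: int = 3) -> dict:
    """Factorability checks and a fixed-number varimax extraction,
    mirroring the steps reported for the ICCAS data."""
    chi2, bartlett_p = calculate_bartlett_sphericity(responses)
    _, kmo_total = calculate_kmo(responses)

    fa = FactorAnalyzer(n_factors=n_factors, rotation="varimax")
    fa.fit(responses)
    eigenvalues, _ = fa.get_eigenvalues()

    return {
        "kmo": kmo_total,            # .97 in the study: factorable data
        "bartlett_p": bartlett_p,    # p < .001 in the study
        "eigenvalues": eigenvalues,  # one dominant eigenvalue -> one factor
        "loadings": fa.loadings_,    # many cross-loadings -> reject 3 factors
    }
```

Parallel analysis, also used in the study, compares the observed eigenvalues against eigenvalues obtained from random data of the same dimensions; it is not shown here.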

Table 2: Effect Sizes for the ICCAS Compared Across Three Samples

Table 3: Change of Items Over the Sample Correlated With Transitional Items

Table 4: Summary Results of an Exploratory Factor Analysis for the Interprofessional Collaborative Competency Attainment Survey

Discussion

The current research had two primary purposes: first, to continue to develop validity evidence for the ICCAS and expand understanding of item functioning, internal structure, concurrent validity, and reliability of the ICCAS as part of an introductory IPE experience; second, to determine what insights into the population and their experience in early IPE can be obtained by using the ICCAS, and the implications for response process and consequential validity.

For the first aim, the research findings contribute additional validity evidence for the ICCAS as a self-assessment instrument for evaluating IPC competencies. The ICCAS appears to be a useful instrument to include in the evaluation of the launch. The RPP design allowed a substantial amount of data to be collected rapidly, resulting in a high rate of survey completion (98%). In addition, a change was observed between participants' pre- and posttest ratings, indicating that self-reported competence change was measured with the ICCAS's RPP design. The psychometric results of the study support the findings of previous research (Archibald et al., 2014; Schmitz et al., 2017). As in the prior studies, the current data show the ICCAS to be sensitive in measuring changes in self-rated competency for an IPE experience. Although larger effect sizes were found than in the Minnesota and Ottawa studies, there was general alignment between the items and effect sizes across the three studies. The first transitional item showed moderate correlations with the ICCAS items, demonstrating concurrent validity, although the correlations tended to be smaller than those found by Schmitz et al. (2017). The second transitional item produced smaller correlations, likely because that question was not directly related to competency improvement and its wording limited participants' perceptions to “excellence.” The EFA found an internal structure consistent with the previous studies: a single-factor solution that can represent a unidimensional construct, from which it can be argued that a global, unitary score of IPC competency can be derived. Finally, the item analysis showed the ICCAS items were able to discriminate between participants of high and low self-perceived competence.

The results of the current work substantiate the Institute of Medicine (2015) statement by showing broad consensus developing around the functioning of the ICCAS. Kane (1992) held up several independent lines of evidence producing redundant outcomes as a virtue in developing validity evidence. In the process of building validity arguments, the alignment of the current work with previous research contributes another piece of valuable evidence for the ICCAS.

The second aim of the study yielded validity evidence regarding participants' response processes, along with implications and consequences relevant to the use of the ICCAS. Wider understanding was also gained regarding IPC assessment, pedagogical process, and epistemic development.

The effect sizes observed were very large, some greater than 1 SD (Table 2), and much greater than those reported in either of the previous validation studies (Archibald et al., 2014; Schmitz et al., 2017). These surprisingly large effect sizes can be explained by two psychological theories: the Weber-Fechner law (Weber, 1996) and the Dunning-Kruger effect (Kruger & Dunning, 1999). First, the Weber-Fechner law is a psychophysical law stating that human perception of change depends on the magnitude of the stimulus: a change in a variable is more likely to be perceived when the baseline level of the variable is low rather than high. For example, when holding a 1-kg weight, the addition of 100 g is more perceptible than when holding 10 kg (Bless & Burger, 2016; Dehaene, 2003; Stout, 1913; Weber, 1996). The Weber-Fechner law also applies to cognitive perceptions such as situational judgment (Leeds, 2012), forgiveness (McCullough, Luna, Berry, Tabak, & Bono, 2010), perceptions of numeric magnitude (Dehaene, 1992), product pricing (Lin & Wang, 2017), and consumer behavior (Huang, Tan, Ke, & Wei, 2018). In the Schmitz et al. (2017) study, participants completed the IPC course for a total of 12 instructional hours divided over six sessions (University of Minnesota Academic Health Center, 2018). Schmitz et al. (2017) did not directly report the extent of students' prior IPE/IPC experience; however, across the seven program categories, participants had a mean of 1.07 years of clinical experience and 1.8 years of nonclinical experience. Likewise, the Archibald et al. (2014) study did not report previous IPE/IPC experience, although most of the participants were health care trainees or practicing health care professionals. Given these conditions, it is reasonable to hypothesize that most participants in both studies had some form of previous IPE/IPC exposure. The current sample had little previous IPE/IPC experience (23% across faculties; Table 1) and was entering the first year of their health professional programs. For 77% of the participants, the launch was their first introduction to IPC. As a first introduction, the launch had a substantial impact on participants, as can be seen in the changes in mean item scores (Table 3) and the overall mean instrument change of .82 on a 5-point scale. This combination of low experience and large change in scores points to the Weber-Fechner law: because students entered the launch with a low baseline of IPC experience, any exposure to IPC had a large perceived effect, producing large effect sizes. Further support for this interpretation comes from comparing programs: those with lower initial IPE experience, such as Pharmacy, Kinesiology Sport and Recreation, Nursing, and Dental Hygiene, had higher Cohen's d values than programs with higher initial IPC experience, such as Medicine, Medical Laboratory Science, and Dietetics (Table 1). As students gain more experience, less change, or improvement, is perceived in IPC competency. It should be noted that the samples for some individual programs were small; nonetheless, this is an interesting trend.
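
Formally, the law can be stated as follows; this is a standard textbook formulation included for illustration, with S the stimulus magnitude, S0 a baseline or threshold level, and k and c constants:

```latex
% Weber's law: the just-noticeable increment \Delta S is a constant
% fraction k of the current stimulus S. Integrating this relation gives
% Fechner's logarithmic scale for perceived intensity p.
\frac{\Delta S}{S} = k
\qquad\Longrightarrow\qquad
p = c \ln\frac{S}{S_0}
```

On this account, the launch represents roughly the same increment of IPC exposure for every student, but the perceived change is far larger for those whose baseline exposure S is near zero than for those with prior IPE/IPC experience.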

Further evidence for the effect of the Weber-Fechner law is found in the scale reliability scores. The ICCAS showed a posttest α = .95 and a pretest α = .97. In classical test theory, alpha levels are influenced by sample characteristics (De Champlain, 2010; Ebel & Frisbie, 1991). On the pretest measure, there was greater variability in the level of IPC experience (23% had IPC experience, whereas 77% had none), leading to increased response variability, higher item variance, and consequently a higher alpha level. On the posttest measure, 100% of participants had some IPC experience, namely the launch. As the sample became more homogeneous, response variability, item variance, and the alpha level dropped. The decrease in reliability scores, although slight, reflects a perceptually large change arising from a small stimulus, in keeping with the Weber-Fechner law.
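
The standard formula for coefficient alpha makes this dependence on variance explicit (k items with scores Y_i and total score X):

```latex
% Coefficient alpha: alpha rises as total-score variance grows relative
% to the summed item variances, so a more heterogeneous sample (greater
% between-person variability) tends to produce a higher alpha.
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right)
```

As the posttest sample became more homogeneous, total-score variance shrank relative to the summed item variances, consistent with the slightly lower posttest alpha.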

The second psychological theory that can be drawn on to explain the large effect sizes is the Dunning-Kruger effect (Kruger & Dunning, 1999). The item analysis showed that participants shifted from rating themselves as good to very good on the 5-point scale. The Dunning-Kruger effect posits that low performers do not realize they are low performers and in fact overestimate, and are overconfident in, their abilities (Dunning, 2011; Kruger & Dunning, 1999; Sanchez & Dunning, 2018; Simons, 2013). Most (77%) of the participants had no IPC or IPE experience. Despite being low in IPC competency, participants rated themselves overall as good on the IPC competencies, indicating a lack of awareness of their own competency levels. After the 3-hour experience, participants felt they had become very good; they are perhaps overconfident. Their experience in IPE/IPC is low, and they are likely unaware of the overall level of knowledge, skills, and attributes required for IPC competency. Participants are ignorant of their own ignorance and, after a small amount of exposure to IPC, have become very confident in their abilities. The minimal clinical experience in the sample further underscores that participants lack knowledge of real collaboration in a clinical environment. The Dunning-Kruger effect and the Weber-Fechner law compound, resulting in the large effect sizes. Understanding these effects provides valuable insight into the cognitive response process of an early learner sample completing the ICCAS: “a little learning is a dangerous thing” (Pope, 1711).

Two implications, both relevant to the consequences of using the ICCAS with an early learner population, follow from the second aim of the study. First, it is necessary to be wary of relying on a single assessment at any stage in the IPE process. An introductory IPE course is unlikely to produce the large true gains in interprofessional competency suggested by the effect sizes observed in this study. Students likely report a noticeable self-rated improvement because they have not been exposed to IPC before, and those who have had IPC exposure received it in an educational context rather than a clinical environment. What is needed is continuous, repeated assessment of competence from the first introduction of IPC through to licensure to determine the efficacy of any IPE intervention in competency development (Domac, Anderson, O'Reilly, & Smith, 2015). Continuous assessment will help determine the true effect size of IPE.

Second, there is a need for psychological and educational theories in course development and assessment. Without a guiding theoretical construct during development, assessment, and interpretation, there is a risk that an incomplete account of the active phenomena will be presented (Hean et al., 2018) and that inaccurate and unfounded claims will be made about students' abilities and educational outcomes. The current data serve as an example of the consequences of using a single measure in a theoretical vacuum: if the effect sizes were interpreted atheoretically, erroneous claims about students' competency development might be made, rather than understanding the large effect sizes as an artifact of students' baseline lack of knowledge. The current research shows the utility of incorporating psychological and educational theory in curriculum evaluation to explain and predict cognitive and behavioral change in learners (Hean et al., 2018). Continuing to apply theory in curricular design, delivery, and the understanding of learner experiences (Hean et al., 2018) will lead to improved pedagogy and enhanced epistemic clarity.

Limitations and Future Directions

This research was limited to a single administration of the ICCAS with a sample of students at the beginning of their health professional programs. To further determine the utility of IPE experiences and self-rating scales such as the ICCAS, longitudinal study is needed. Participants should be measured at regular intervals after the initial IPE experience to determine how self-ratings change as students gain more experience. Measurement with the ICCAS should also be done with learners further along in training and with more IPC experience to determine how the instrument functions in advanced samples. Based on the Weber-Fechner law and the Dunning-Kruger effect, it could be hypothesized that more advanced trainees will rate themselves lower in IPC competency than students early in training. As advanced samples accrue more experience, changes in competency will be less noticeable, and participants will become less overconfident as they develop knowledge of what competence requires. Greater experience may be why smaller, though still moderate to large, effect sizes were observed in the Schmitz et al. (2017) sample, and smaller effect sizes still in the Archibald et al. (2014) sample (Table 2). In addition, direct observation should be used to assess the true development of IPC competency and to produce concurrent criterion validity for measures such as the ICCAS. With minimal modification, the ICCAS could likely be adapted for direct observation, providing comparability between self-rating and directly observed measures.

Finally, the RPP method, although common in program evaluation, may not accurately reflect self-perceived competency prior to the IPE experience. However, similar responses were found between a traditional pre–post design and the RPP design in medical students after a resuscitation course (Bhanji, Gottesman, de Grave, Steinert, & Winer, 2012), indicating that the RPP method can be a practical approach to evaluating learning and program impact. The influence of completing an evaluation of precourse competency after the course is unknown for the ICCAS. Administering the ICCAS to students randomly assigned to either a pre–post or an RPP design would provide further validity evidence for the instrument.

Conclusion

The current data show consilience (Wilson, 1998); the Weber-Fechner law and the Dunning-Kruger effect converge to provide an excellent example of how research in health sciences education can not only be bolstered by the incorporation of theory, particularly psychological theory (Amico, Mugavero, Krousel-Wood, Bosworth, & Merlin, 2018; Croskerry, Cosby, Graber, & Singh, 2017), but in fact requires it (Norman, 2007; Paradis & Whitehead, 2018). The current study provides validity evidence for internal structure, reliability, item functioning, concurrent validity, response process, and consequences (Cook, 2014; Cook et al., 2015; Cook & Lineberry, 2016; Kane, 2016), along with a coherent explanatory framework to advance the development of IPC instruments. Considering the accumulating evidence, the ICCAS has practical value as an acceptable measure of self-rated IPC competency. It provides educators with an instrument that can be easily administered as one piece of the assessment of students' development of IPC competency from program entry to licensure. The ICCAS should not be used as a one-off measure; rather, it should be incorporated as one instrument in developing a complete understanding of health professional development, forming a longitudinal narrative of competency development throughout an individual's education.

References

  • Amico, K.R., Mugavero, M., Krousel-Wood, M.A., Bosworth, H.B. & Merlin, J.S. (2018). Advantages to using social-behavioral models of medication adherence in research and practice. Journal of General Internal Medicine, 33, 207–215. doi:10.1007/s11606-017-4197-5 [CrossRef]
  • Archibald, D., Trumpower, D. & MacDonald, C.J. (2014). Validation of the Interprofessional Collaborative Competency Attainment Survey (ICCAS). Journal of Interprofessional Care, 28, 553–558. doi:10.3109/13561820.2014.917407 [CrossRef]
  • Batteson, T. & Garber, S.S. (2018). Assessing constructs underlying interprofessional competencies through the design of a new measure of interprofessional education. Journal of Interprofessional Education and Practice. Advance online publication. doi:10.1016/j.xjep.2018.08.004 [CrossRef]
  • Bhanji, F., Gottesman, R., de Grave, W., Steinert, Y. & Winer, L.R. (2012). The retrospective pre-post: A practical method to evaluate learning from an educational program. Academic Emergency Medicine, 19, 189–194. doi:10.1111/j.1553-2712.2011.01270.x [CrossRef]
  • Bless, H. & Burger, A.M. (2016). A closer look at social psychologists' silver bullet: Inevitable and evitable side effects of the experimental approach. Perspectives on Psychological Science, 11, 296–308. doi:10.1177/1745691615621278 [CrossRef]
  • Blue, A.V., Chesluk, B.J., Conforti, L.N. & Holmboe, E.S. (2015). Assessment and evaluation in interprofessional education: Exploring the field. Journal of Allied Health, 44, 73–82.
  • Canadian Interprofessional Health Collaborative. (2010). A national Interprofessional competency framework. Retrieved from https://www.cihc.ca/files/CIHC_IPCompetencies_Feb1210.pdf
  • Chang, R. & Little, T.D. (2018). Innovations for evaluation research: Multiform protocols, visual analog scaling, and the retrospective pretest–posttest design. Evaluation & the Health Professions, 41, 246–269. doi:10.1177/0163278718759396 [CrossRef]
  • Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum.
  • Cook, D.A. (2014). When I say…validity. Medical Education, 48, 948–949. doi:10.1111/medu.12401 [CrossRef]
  • Cook, D.A., Brydges, R., Ginsburg, S. & Hatala, R. (2015). A contemporary approach to validity arguments: A practical guide to Kane's framework. Medical Education, 49, 560–575. doi:10.1111/medu.12678 [CrossRef]
  • Cook, D.A. & Lineberry, M. (2016). Consequences validity evidence: Evaluating the impact of educational assessments. Academic Medicine, 91, 785–795. doi:10.1097/ACM.0000000000001114 [CrossRef]
  • Cox, M. (2015). Measuring the impact of interprofessional education on collaborative practice and patient outcomes. Journal of Interprofessional Education and Practice, 1(2), 34–35. doi:10.1016/j.xjep.2015.07.001 [CrossRef]
  • Croskerry, P., Cosby, K., Graber, M.L. & Singh, H. (2017). Diagnosis: Interpreting the shadows. Boca Raton, FL: CRC Press. doi:10.1201/9781315116334 [CrossRef]
  • De Champlain, A.F. (2010). A primer on classical test theory and item response theory for assessments in medical education. Medical Education, 44, 109–117. doi:10.1111/j.1365-2923.2009.03425.x [CrossRef]
  • Dehaene, S. (1992). Varieties of numerical abilities. Cognition, 44, 1–42. doi:10.1016/0010-0277(92)90049-N [CrossRef]
  • Dehaene, S. (2003). The neural basis of the Weber-Fechner law: A logarithmic mental number line. Trends in Cognitive Science, 7, 145–147. doi:10.1016/S1364-6613(03)00055-X [CrossRef]
  • DeMars, C.E. (2018). Classical test theory and item response theory. In Irwing, P., Booth, T. & Hughes, D.J. (Eds.), The Wiley handbook of psychometric testing: A multidisciplinary reference on survey, scale and test development (pp. 49–73). Hoboken, NJ: John Wiley & Sons. doi:10.1002/9781118489772.ch2 [CrossRef]
  • Domac, S., Anderson, L., O'Reilly, M. & Smith, R. (2015). Assessing interprofessional competence using a prospective reflective portfolio. Journal of Interprofessional Care, 29, 179–187. doi:10.3109/13561820.2014.983593 [CrossRef]
  • Downing, S.M. (2003). Validity: On the meaningful interpretation of assessment data. Medical Education, 37, 830–837. doi:10.1046/j.1365-2923.2003.01594.x [CrossRef]
  • Dunning, D. (2011). The Dunning-Kruger effect. On being ignorant of one's own ignorance. In Zanna, M. & Olson, J. (Eds.), Advances in experimental social psychology (Vol. 44, pp. 249–290). Amsterdam, The Netherlands: Elsevier.
  • Ebel, R.L. & Frisbie, D.A. (1991). Essentials of educational measurement (5th ed.). Englewood Cliffs, NJ: Prentice Hall.
  • Feinstein, A.R. (1987). Clinimetrics. New Haven, CT: Yale University Press. doi:10.2307/j.ctt1xp3vbc [CrossRef]
  • Gillan, C., Lovrics, E., Halpern, E., Wiljer, D. & Harnett, N. (2011). The evaluation of learner outcomes in interprofessional continuing education: A literature review and an analysis of survey instruments. Medical Teacher, 33, 461–470. doi:10.3109/0142159X.2011.587915 [CrossRef]
  • Hean, S., Green, C., Anderson, E., Morris, D., John, C., Pitt, R. & O'Halloran, C. (2018). The contribution of theory to the design, delivery, and evaluation of interprofessional curricula: BEME guide no. 49. Medical Teacher, 40, 542–558. doi:10.1080/0142159X.2018.1432851 [CrossRef]
  • Holmboe, E.S. (2015). Realizing the promise of competency-based medical education. Academic Medicine, 90, 411–413. doi:10.1097/ACM.0000000000000515 [CrossRef]
  • Huang, L., Tan, C.-H., Ke, W. & Wei, K. (2018). Helpfulness of online review content: The moderating effects of temporal and social cues. Journal of the Association of Information Systems, 19, 503–522. doi:10.17705/1jais.00499 [CrossRef]
  • Institute of Medicine. (2015). Measuring the impact of interprofessional education on collaborative practice and patient outcomes. Washington, DC: National Academies Press.
  • Interprofessional Education Collaborative. (2016). Core competencies for interprofessional collaborative practice: 2016 update. Washington, DC: Author.
  • Kane, M.T. (1992). An argument-based approach to validity. Psychological Bulletin, 112, 527–535.
  • Kane, M.T. (2016). Explicating validity. Assessment in Education: Principles, Policy & Practice, 23, 198–211. doi:10.1080/0969594X.2015.1060192 [CrossRef]
  • King, S., Hall, M., McFarlane, L.A., Paslawski, T., Sommerfeldt, S., Hatch, T. & Norton, B. (2017). Launching first-year health sciences students into collaborative practice: Highlighting institutional enablers and barriers to success. Journal of Interprofessional Care, 31, 386–393. doi:10.1080/13561820.2016.1256870 [CrossRef]
  • King, S. & Violato, E. (2018). Longitudinal evaluation of attitudes to interprofessional collaboration: Time for a change? Manuscript submitted for publication.
  • Kruger, J. & Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one's own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77, 1121–1134. doi:10.1037/0022-3514.77.6.1121 [CrossRef]
  • Leeds, J.P. (2012). The theory of cognitive acuity: Extending psychophysics to the measurement of situational judgment. Journal of Neuroscience, Psychology, and Economics, 5, 166–181. doi:10.1037/a0027294 [CrossRef]
  • Lin, C.H. & Wang, J.W. (2017). Distortion of price discount perceptions through the left-digit effect. Marketing Letters, 28(1), 99–112. doi:10.1007/s11002-015-9387-5 [CrossRef]
  • MacDonald, C.J., Archibald, D., Trumpower, D., Casimiro, L., Cragg, B. & Jelley, W. (2010). Designing and operationalizing a toolkit of bilingual interprofessional education assessment instruments. Journal of Research in Interprofessional Practice and Education, 1, 304–316. doi:10.22230/jripe.2010v1n3a36 [CrossRef]
  • Mahler, C., Berger, S. & Reeves, S. (2015). The Readiness for Interprofessional Learning Scale (RIPLS): A problematic evaluative scale for the interprofessional field. Journal of Interprofessional Care, 29, 289–291. doi:10.3109/13561820.2015.1059652 [CrossRef]
  • McClelland, D.C. (1973). Testing for competence rather than for “intelligence.” American Psychologist, 28(1), 1–14. doi:10.1037/h0034092 [CrossRef]
  • McCullough, M.E., Luna, L.R., Berry, J.W., Tabak, B.A. & Bono, G. (2010). On the form and function of forgiving: Modeling the time-forgiveness relationship and testing the valuable relationships hypothesis. Emotion, 10, 358–376. doi:10.1037/a0019349 [CrossRef]
  • Meyers, L.S., Gamst, G. & Guarino, A.J. (2013). Principal components analysis and exploratory factor analysis. In Applied multivariate research (2nd ed., pp. 640–687). Thousand Oaks, CA: Sage.
  • Norman, G. (2007). Editorial—How bad is medical education research anyway? Advances in Health Sciences Education, 12(1), 1–5. doi:10.1007/s10459-006-9047-x [CrossRef]
  • Oates, M. & Davidson, M. (2015). A critical appraisal of instruments to measure outcomes of interprofessional education. Medical Education, 49, 386–398. doi:10.1111/medu.12681 [CrossRef]
  • Paradis, E. & Whitehead, C.R. (2018). Beyond the lamppost: A proposal for a fourth wave of education for collaboration. Academic Medicine, 93, 1457–1463. doi:10.1097/ACM.0000000000002233 [CrossRef]
  • Pope, A. (1711). An essay on criticism. London, United Kingdom: W. Lewis.
  • Reeves, S., Goldman, J., Gilbert, J., Tepper, J., Silver, I., Suter, E. & Zwarenstein, M. (2011). A scoping review to improve conceptual clarity of interprofessional interventions. Journal of Interprofessional Care, 25, 167–174. doi:10.3109/13561820.2010.529960 [CrossRef]
  • Reeves, S., Palaganas, J. & Zierler, B. (2017). An updated synthesis of review evidence of interprofessional education. Journal of Allied Health, 46, 56–61.
  • Sanchez, C. & Dunning, D. (2018). Overconfidence among beginners: Is a little learning a dangerous thing? Journal of Personality and Social Psychology, 114(1), 10–28. doi:10.1037/pspa0000102 [CrossRef]
  • Schmitz, C.C., Radosevich, D.M., Jardine, P., MacDonald, C.J., Trumpower, D. & Archibald, D. (2017). The Interprofessional Collaborative Competency Attainment Survey (ICCAS): A replication validation study. Journal of Interprofessional Care, 31(1), 28–34. doi:10.1080/13561820.2016.1233096 [CrossRef]
  • Shrader, S., Farland, M.Z., Danielson, J., Sicat, B. & Umland, E.M. (2017). A systematic review of assessment tools measuring interprofessional education outcomes relevant to pharmacy education. The American Journal of Pharmaceutical Education, 81(6), 1–21.
  • Simmons, B.S., Wagner, S.J. & Reeves, S. (2016). Assessment of interprofessional education: Key issues, ideas, challenges, and opportunities. In Wimmers, P.F. & Mentkowski, M. (Eds.), Assessing competence in professional performance across disciplines and professions (Vol. 13, pp. 237–252). Basel, Switzerland: Springer. doi:10.1007/978-3-319-30064-1_12 [CrossRef]
  • Simons, D.J. (2013). Unskilled and optimistic: Overconfident predictions despite calibrated knowledge of relative skill. Psychonomic Bulletin and Review, 20, 601–607. doi:10.3758/s13423-013-0379-2 [CrossRef]
  • Stout, G.F. (1913). The Weber-Fechner law. In A manual of psychology (3rd ed., pp. 300–309). New York, NY: Hinds, Noble & Eldridge.
  • Tabachnick, B.G. & Fidell, L.S. (2013). Cleaning up your act: Screening data prior to analysis. Using multivariate statistics (6th ed.). New York, NY: Pearson Education.
  • Traub, R.E. & Rowley, G.L. (1991). An NCME instructional module on understanding reliability. Educational Measurement: Issues and Practice, 10(1), 37–45. doi:10.1111/j.1745-3992.1991.tb00183.x [CrossRef]
  • University of Alberta: Health Sciences Education and Research Commons. (2018). Interprofessional learning pathway. Retrieved from https://www.ualberta.ca/health-sciences-education-research/ip-education/interprofessional-pathway/interprofessional-pathway-launch
  • University of Minnesota Academic Health Center: Office of Education. (2018). Phase I - Orientation: Foundations of Interprofessional Communication & Collaboration (FIPCC). Retrieved from https://www.ahceducation.umn.edu/1health-setting-new-standard-interprofessional-education/phase-i-orientation-foundations-interprofessional-communication-collaboration-fipcc
  • Violato, E. & King, S. (2018). A case of validity evidence for the Interprofessional Attitudes Scale. Manuscript submitted for publication.
  • Weber, E.H. (1996). EH Weber on the tactile senses (2nd ed.) ( Ross, H.E. & Murray, D.J., Eds.). Oxford, England: Erlbaum, Taylor & Francis.
  • Wilson, E.O. (1998). Consilience: The unity of knowledge. New York, NY: Vintage Books–Random House.

Table 1: Demographic Information and ICCAS Scores

Faculty (n, % of sample) | Female (%) | Previous IPE Experience (%) | ICCAS Pretest Mean | ICCAS Posttest Mean | Mean Years at University (SD)
ALES (n = 46, 4.6%) | 89.3 | 45.6 | 3.73 | 4.31 | 4.23 (0.99)
KSR (n = 16, 1.6%) | 93.8 | 12.5 | 3.38 | 4.21 | 4.3 (0.72)
Medicine & Dentistry (n = 252, 25.5%) | 59.6 | 23 | 3.89 | 4.31 | 3.5 (1.43)
Nursing (n = 310, 31.3%) | 89.3 | 13.9 | 3.67 | 4.34 | 2.58 (1.31)
Pharmacy (n = 127, 12.8%) | 61.7 | 15.6 | 3.42 | 4.23 | 3.53 (1.7)
Rehab Med (n = 237, 23.9%) | 85.1 | 29.4 | 3.56 | 4.26 | 4.63 (1.32)

Program (Faculty) | Total (n) | Female (n) | Previous IPE Experience, n (%) | ICCAS Pretest Mean | ICCAS Posttest Mean | Mean d
Dietetics (ALES) | 35 | 30 | 18 (51.4%) | 3.70 | 4.29 | 0.80
Human Ecology (ALES) | 1 | 1 | 0 (0%) | 3.35 | 4.56 | -
Nutrition (ALES) | 10 | 10 | 3 (30%) | 3.89 | 4.38 | 0.95
KSR (KSR) | 16 | 15 | 2 (12.5%) | 3.38 | 4.21 | 1.3
Dental Hygiene (Medicine & Dentistry) | 41 | 40 | 3 (7.3%) | 3.65 | 4.36 | 1.0
Dentistry (Medicine & Dentistry) | 29 | 9 | 5 (17%) | 3.94 | 4.35 | 0.65
Medicine (Medicine & Dentistry) | 147 | 79 | 40 (27%) | 3.97 | 4.30 | 0.58
MLS (Medicine & Dentistry) | 23 | 15 | 8 (35%) | 3.81 | 4.29 | 0.69
Radiation Therapy (Medicine & Dentistry) | 10 | 6 | 2 (20%) | 3.68 | 4.13 | 0.79
Nursing (Nursing) | 311 | 278 | 43 (13.9%) | 3.67 | 4.34 | 1.01
Pharmacy (Pharmacy) | 128 | 79 | 20 (15.7%) | 3.42 | 4.23 | 1.14
OT (Rehab Med) | 91 | 84 | 31 (34.1%) | 3.59 | 4.28 | 0.98
PT (Rehab Med) | 88 | 61 | 17 (19%) | 3.42 | 4.18 | 1.12
SLP (Rehab Med) | 56 | 55 | 21 (37.5%) | 3.74 | 4.35 | 0.88

Note. KSR = Kinesiology, Sport and Recreation; MLS = Medical Laboratory Science; OT = occupational therapy; PT = physical therapy; SLP = speech-language pathology. Mean d = Cohen's d for retrospective pre–post change. Faculty rows give faculty-level summaries; program rows give program-level values.

Table 2: Effect Sizes for the ICCAS Compared Across Three Samples

ICCAS Item | University of Alberta (n = 991): Cohen's d (Effect) | University of Minnesota (n = 785): Cohen's d (Effect) | University of Ottawa (n = 584): Cohen's d (Effect)
1. Promote effective communication among members of an interprofessional (IP) team. | 0.87 (Large) | 0.72 (Moderate) | 0.59 (Moderate)
2. Actively listen to IP team members' ideas and concerns. | 0.94 (Large) | 0.51 (Moderate) | 0.46 (Small)
3. Express my ideas and concerns without being judgmental. | 0.92 (Large) | 0.54 (Moderate) | 0.44 (Small)
4. Provide constructive feedback to IP team members. | 0.40 (Small) | 0.52 (Moderate) | 0.56 (Moderate)
5. Express my ideas and concerns in a clear, concise manner. | 0.59 (Moderate) | 0.39 (Small) | 0.47 (Small)
6. Seek out IP team members to address issues. | 0.69 (Large) | 0.78 (Moderate) | 0.50 (Moderate)
7. Work effectively with IP team members to enhance care. | 1.12 (Large) | 0.72 (Moderate) | 0.52 (Moderate)
8. Learn with, from, and about IP team members to enhance care. | 1.22 (Large) | 0.94 (Large) | 0.52 (Moderate)
9. Identify and describe my abilities and contributions to the IP team. | 0.80 (Large) | 0.72 (Moderate) | 0.51 (Moderate)
10. Be accountable for my contributions to the IP team. | 0.65 (Large) | 0.43 (Small) | 0.43 (Small)
11. Understand the abilities and contributions of IP team members. | 1.07 (Large) | 1.01 (Large) | 0.54 (Moderate)
12. Recognize how others' skills and knowledge complement and overlap with my own. | 1.19 (Large) | 0.98 (Large) | 0.48 (Small)
13. Use an IP team approach with the patient to assess the health situation. | 1.20 (Large) | 0.74 (Moderate) | 0.51 (Moderate)
14. Use an IP team approach with the patient to provide whole person care. | 1.15 (Large) | 0.69 (Moderate) | 0.52 (Moderate)
15. Include the patient/family in decision making. | 0.76 (Moderate) | 0.35 (Small) | 0.50 (Moderate)
16. Actively listen to the perspectives of IP team members. | 1.01 (Large) | 0.55 (Moderate) | 0.44 (Small)
17. Take into account the ideas of IP team members. | 1.02 (Large) | 0.60 (Moderate) | 0.38 (Small)
18. Address team conflict in a respectful manner. | 0.68 (Moderate) | 0.43 (Small) | 0.41 (Small)
19. Develop an effective care plan with IP team members. | 1.03 (Large) | 0.75 (Moderate) | 0.48 (Moderate)
20. Negotiate responsibilities within overlapping scopes of practice. | 0.81 (Large) | 0.79 (Moderate) | 0.61 (Moderate)

Table 3: Change of Items Over the Sample Correlated With Transitional Items

ICCAS Item | Mean Item Change | r, Transitional Item 1 | r, Transitional Item 2
1. Promote effective communication among members of an interprofessional (IP) team. | 0.77 | .26 | .17
2. Actively listen to IP team members' ideas and concerns. | 0.71 | .15 | .11
3. Express my ideas and concerns without being judgmental. | 0.78 | .19 | .10
4. Provide constructive feedback to IP team members. | 0.40 | .23 | .14
5. Express my ideas and concerns in a clear, concise manner. | 0.53 | .19 | .12
6. Seek out IP team members to address issues. | 0.72 | .17 | .13
7. Work effectively with IP team members to enhance care. | 1.01 | .19 | .17
8. Learn with, from, and about IP team members to enhance care. | 1.09 | .19 | .13
9. Identify and describe my abilities and contributions to the IP team. | 0.78 | .20 | .12
10. Be accountable for my contributions to the IP team. | 0.62 | .19 | .13
11. Understand the abilities and contributions of IP team members. | 1.03 | .22 | .13
12. Recognize how others' skills and knowledge complement and overlap with my own. | 1.10 | .22 | .12
13. Use an IP team approach with the patient to assess the health situation. | 1.09 | .20 | .13
14. Use an IP team approach with the patient to provide whole person care. | 1.08 | .20 | .13
15. Include the patient/family in decision making. | 0.73 | .17 | .10
16. Actively listen to the perspectives of IP team members. | 0.86 | .15 | .05
17. Take into account the ideas of IP team members. | 0.86 | .15 | .08
18. Address team conflict in a respectful manner. | 0.62 | .18 | .07
19. Develop an effective care plan with IP team members. | 0.99 | .21 | .14
20. Negotiate responsibilities within overlapping scopes of practice. | 0.79 | .21 | .10

Table 4: Summary Results of an Exploratory Factor Analysis for the Interprofessional Collaborative Competency Attainment Survey

Variable | Posttest Three-Factor | Posttest Single-Factor | Pretest Three-Factor | Pretest Single-Factor
Eigenvalues | 9.93, .62, .32 | 9.93 | 12.20, .71, .47 | 12.20
% of variance | 22.9, 21.1, 11.5 | 49.6 | 31.4, 18.9, 18.4 | 61
No. of factor cross-loadings | 17 | - | 18 | -
Range of factor loadings | .321 to .724 | .586 to .786 | .3 to .805 | .699 to .850
Factor correlations (r12, r13, r23) | .13, .070, .086 | - | .14, .070, .055 | -
Authors

Mr. Violato is Graduate Student and PhD candidate, and Dr. King is Associate Professor, Department of Educational Psychology, Faculty of Education, University of Alberta, Edmonton, Alberta, Canada.

The authors have disclosed no potential conflicts of interest, financial or otherwise.

Address correspondence to Efrem Mauro Violato, MSc, Graduate Student and PhD candidate, Department of Educational Psychology, Faculty of Education, 6-132 Education North, University of Alberta, Edmonton, AB, Canada T6G 2G5; e-mail: violato@ualberta.ca.

Received: March 04, 2019
Accepted: April 24, 2019

10.3928/01484834-20190719-04
