Original Articles

The Interactive Media Package for Assessment of Communication and Critical Thinking (IMPACCT©): Testing a Programmatic Online Communication Competence Assessment System

Pages 145-173 | Published online: 09 Mar 2011
 

Abstract

IMPACCT is an online survey comprising over 40 self-report measures of student communication competency, as well as a test of critical thinking based on cognitive problem-solving. The student nominates two peers who rate the student's interpersonal, computer-mediated, group and leadership, and public speaking communication competence. The student takes the self-report survey at Time 1 (T1) and again at Time 2 (T2). The system generates a printable profile for the student displaying the following percentiles: (a) T1-Self (i.e., how the student sees his or her own communication skills in various skill domains at T1), (b) T2-Self (i.e., how the student's scores changed from T1 to T2), (c) Peers (i.e., how self-rated skills compare to averaged ratings by the two peers nominated by the student), and (d) Norms (i.e., how self-rated skills compare to the self-ratings of everyone else who has taken the survey). The resulting collective data are available to the department to provide evidence of student self-perceived skill deficits at T1, as well as perceived change from T1 (e.g., beginning of a course or major) to T2 (e.g., end of a course or major), providing quantifiable data for assessment accounting and reporting. The system was tested on a sample of 1,880 freshman basic course students and 1,999 affiliated peer raters, using a new measure of communication competence developed for this project. All subscales were sufficiently reliable. The self-reported motivation, knowledge, and skills constructs accounted for between 68% and 72% of the variance in student self-perceptions of overall communication competence (i.e., appropriateness, effectiveness, clarity, attractiveness, satisfaction). Critical thinking was unrelated, and peer ratings were only modestly related, to student self-perceptions of communication competence. Student self-perceived skills were systematically lower than peer ratings of students.
Students consistently perceived that their communication competence and skills increased significantly over the span of the semester. The system is highly scalable and adaptable to multiple curricular configurations, and shows promise in providing a practical solution to departmental assessment needs. The measure also provides a valuable architecture for significant research and theory development in the area of communication competence.
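The four percentile comparisons in the printable profile can be sketched in a few lines. This is a hypothetical illustration only: the function names and the simple percentile-rank formula are assumptions for exposition, not the actual IMPACCT scoring code.

```python
# Hypothetical sketch of the profile's percentile entries. The "Norms"
# comparison is implicit: every entry is computed against the norm
# distribution of all self-ratings. Illustrative names and logic only.
from statistics import mean

def percentile_of(score, norms):
    """Percent of norm scores at or below `score` (simple percentile rank)."""
    at_or_below = sum(1 for n in norms if n <= score)
    return 100.0 * at_or_below / len(norms)

def profile(t1_self, t2_self, peer_ratings, norm_scores):
    """Percentiles for self at T1 and T2, plus the averaged peer rating,
    all relative to the same norm distribution of self-ratings."""
    return {
        "T1-Self": percentile_of(t1_self, norm_scores),
        "T2-Self": percentile_of(t2_self, norm_scores),
        "Peers": percentile_of(mean(peer_ratings), norm_scores),
    }
```

Placing the T1, T2, and averaged-peer scores on the same normed percentile scale is what lets a single profile show both change over time and self-other discrepancies.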

Notes

1. Most of the underwriting of the online engineering and software development for this project was from a gift to the School of Communication and the College of Professional Studies and Fine Arts at San Diego State University. The donor, Sanford I. Berman, Ph.D., wanted the gift to advance concepts of general semantics in ways that would facilitate student education and communication excellence. Therefore, the critical thinking items were written with feedback oriented to general semantics principles.

2. A copy of the text and figures of the critical thinking items is available upon request from the author.

3. There were three items written for this subscale, but one (“at small talk”) was inadvertently left out of the IMPACCT. It has been reinserted in the current operating version of IMPACCT.

4. A welcome debate with the reviewers regarding the necessity and appropriateness of factor analysis developed, which warrants extended comment for the sake of future research on measurement issues related to communication assessment. It is time to consider seriously the appropriateness of factor analytic techniques for many assessment purposes in communication (see Conway & Huffcutt, 2003; Fabrigar, Wegener, MacCallum, & Strahan, 1999; Park, Dailey, & Lemus, 2002).

First, communication skills are often best considered causal indicators rather than effect indicators. Exploratory factor analysis (EFA), in contrast to principal components analysis (PCA), presupposes that there are multiple observed effects of a latent construct. It matters, therefore, whether a factor of corresponding items represents an underlying explanatory cause, or whether the items themselves are the cause of the phenomenon. For example, eye contact and smiling may determine, or be compositional of, expressiveness. A latent model, in contrast, presumes either an underlying skill that produces eye contact and smiling, or a set of behaviors that produce a distinctive impression of these as a collective skill labeled expressiveness (Bollen & Lennox, 1991). One of the implications of this distinction is that construct reliability is essential for effects indicator constructs, but less so for causal indicators "because these correlations are explained by factors outside of the model" (Bollen & Lennox, p. 307). Relatedly, such causal indicators may need to comprise heterogeneous observed indicators, which may militate against the likelihood of high interitem correlations (Bollen & Lennox), and thus, the likelihood such items would emerge as coherent factors in EFA.
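The interitem-correlation logic behind construct reliability can be made concrete with Cronbach's alpha, the standard internal-consistency coefficient. The sketch below is a generic computation of the textbook formula; it does not reproduce the article's actual analyses.

```python
# Generic Cronbach's alpha: the usual index of interitem consistency for
# effect-indicator constructs. Illustrative only; not the article's code.
def cronbach_alpha(items):
    """items: a list of k item columns, each a list of n respondent scores.
    alpha = (k / (k - 1)) * (1 - sum of item variances / variance of totals)."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # sample variance with (n - 1) denominator
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(col[i] for col in items) for i in range(n)]
    return (k / (k - 1)) * (1 - sum(var(col) for col in items) / var(totals))
```

By Bollen and Lennox's argument, a high alpha is expected for effect indicators, but need not hold for causal-indicator constructs, whose deliberately heterogeneous items may intercorrelate only weakly without impugning the construct.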

Second, the validity of EFA and PCA is dependent on the rating scale. Just because behavioral items correspond in terms of ratings does not assure that they represent a sound conceptual construct. A factor tends to reflect a commonality of perceptions on the part of raters. The emergence of a communication skill in EFA or PCA, therefore, presupposes both that the rating scale is sensitive to the phenomenal existence of a coherent skill domain and that raters are able to perceptually and coherently discriminate that domain. In the IMPACCT, the most common rating scale is a continuum from "extremely below average skill" to "extremely above average skill" compared "to typical conversationalists." The rating scale is not asking whether or not the skill is there, but how the behaviors and abilities indicated in the items fare relative to an abstract continuum of average performance quality. Thus, a communicator may be similarly above average across a variety of specific skill domains, which might result in those skills factoring together, even though they may involve substantively distinct domains of cognitive and affective substrates, as well as distinct observable behaviors. Similar reasoning applies to commonly used frequency rating scales—just because eye contact and self-disclosure are used with corresponding frequency does not mean they comprise a coherent or pedagogically meaningful "skill" or ability (Spitzberg, 2003; Spitzberg & Cupach, 2002).

Third, many communication skills may exist, and yet be relatively indistinguishable or imperceptible to raters. In the case of the IMPACCT, for example, a presumed skill of secrecy (i.e., ability to keep secrets appropriately) is included because it has been identified as an important set of activities in interpersonal interactions and relationships (e.g., T. Afifi, Caughlin, & Afifi, 2007). The question of whether secret-keeping (a) actually exists as a separate set of cognitive and behavioral routines, (b) is accessible as a phenomenal domain separable from other types of cognitive and behavioral routines, and (c) is psychometrically consistent (i.e., reliable) and distinct (higher intraconstruct item correlations than interconstruct item correlations) is not an obviously legitimate assumption in regard to rater perceptions.

Fourth, the factor structure is determined as much by the domain sampling as it is by the existence of the effects or causal indicators. The potential domain of communication competence is vast, but presumably not infinite. Spitzberg and Cupach (1984, 1989, 2002) identified well over 100 empirically derived factors regarded by research as compositional of interpersonal competence alone. Any practical measure must make choices of what to include and what to exclude, but the basis for such decisions is not obvious. Thus, for example, secret-keeping is probably no more (or less?) important than argumentation or interviewing skills, but no measure can reasonably hope to include all potentially relevant factors or constructs in its content. Factor analytic techniques are therefore highly sensitive to what indicators, items, and constructs are represented in the initial scale construction. Identifying a factor structure as defined may imply that the resulting structure comprises the domain of the construct being operationalized, when in fact it is little more than an artifact of the initial item-generation and selection process.

Fifth, a priori categorization is therefore a reasonable alternative to EFA and PCA in formulating measurement. A measure intended for both basic research and applied curricular purposes has dual objectives: reasonable psychometric evidence of construct validity, and relevance to administrative, curricular, and instructional needs. These are not necessarily compatible objectives. Students, for example, may spend relatively small amounts of time in an interviewing context, but to the extent that "interviewing" is a curricular option as a course or a set of learning objectives, then a measure may need to include indicators of interviewing regardless of whether or not the indicators form a psychometrically distinct factor or component.

Sixth, in regard to reporting the results of factor analyses in this manuscript, there are too many decision points to permit a coherent discussion of the results and implications, including the following:

(a) Which measures to factor with which other measures? The IMPACCT comprises motivation constructs, knowledge constructs, and skills constructs. Should these be factored collectively or separately? Should the CSRS items be included with the "interpersonal" items, and should the interpersonal items be included with the small group items, and so on? For example, when the 93 "interpersonal" and conversational (CSRS) items intended to assess 26 to 30 a priori constructs are submitted to PCA, 17 potentially definable factors emerge, and when all 164 "skills" items (sans motivation, knowledge, and outcomes items) are submitted, 25 potentially definable components emerge. Yet, when all items of the IMPACCT are submitted to PCA, the first traditionally definable structure that emerges reveals only 10 components. The results of EFA and PCA are highly dependent upon such initial inclusion decisions, which in turn depend on the various objectives for conducting the analyses.

(b) How to match factors across times? If the Time 1 measures are factored, must the Time 2 factors be constructed according to these Time 1 factors, even if Time 2 produces different factor structures? In Time 1, for example, the item "speaking about self" loaded modestly on a component with "articulation" and "speaking fluency," but in Time 2, it loaded on a different component with "smiling and/or laughing" and "use of humor and/or stories." For curricular assessment reasons, Time 1 and Time 2 indicators must be equivalent, regardless of their factor structures.

(c) Whether both a first- and a second-order factor analysis should be employed? Should the measures be factored at the item level, at the construct level, or both? For example, when PCA was applied to the 40 a priori motivation, knowledge, and skills constructs, in what amounts to a second-order analysis, a six-component solution emerged, but when the 187 items comprising these constructs were submitted to PCA, a 10-component solution emerged.

(d) What to do with isolated items? What if a construct considered important by the discipline shows up as an isolated loading? For example, "meaning" loaded with humor and persuasion in a second-order factor analysis that extracted six factors, but did not load anywhere when 10 factors were extracted. Should the assessment of a student's perceived ability to "get meaning across accurately" go unassessed because it does not load on a factor?

(e) Which criteria should be applied for determining the number of factors to extract and define? All of the standard criteria have demonstrated substantial limitations (Fabrigar et al., 1999; Floyd & Widaman, 1995).

For all these reasons, and especially given the constraints of manuscript length in reporting the measure's debut, the risks of concretizing a factor structure before all these relevant issues are properly aired in scholarly forums justify pursuing these issues in separate scholarly and academic contexts and manuscripts. The results of first- and second-order EFAs and PCAs are available from the author upon request.
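Point (e), the choice of extraction criterion, can be illustrated with one of the standard rules: Kaiser's eigenvalue-greater-than-one criterion applied to the item correlation matrix. The sketch below is a generic illustration (the function name and data layout are assumptions, not the IMPACCT analyses), and its blind spots are exactly the kind of limitation Fabrigar et al. and Floyd and Widaman document.

```python
# Kaiser "eigenvalue > 1" rule, one standard (and much-criticized) criterion
# for choosing how many principal components to retain. Illustrative only.
import numpy as np

def kaiser_component_count(data):
    """data: a respondents x items table of scores. Counts the principal
    components of the item correlation matrix whose eigenvalues exceed 1."""
    corr = np.corrcoef(np.asarray(data, dtype=float), rowvar=False)
    eigvals = np.linalg.eigvalsh(corr)
    return int((eigvals > 1.0).sum())
```

Because the retained count depends directly on which items enter the correlation matrix, the same criterion applied to different item subsets can return very different structures, which is this note's central caution about reporting any one factor solution as definitive.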

Additional information

Notes on contributors

Brian H. Spitzberg

Brian H. Spitzberg (Ph.D., University of Southern California, 1981) is Senate Distinguished Professor in the School of Communication at San Diego State University. This study and the assessment package reported herein were significantly underwritten by a gift from Sanford I. Berman, Ph.D., whose lifelong commitment to promoting general semantics and its role in facilitating communication effectiveness has made this research possible. Appreciation is also due the Dean of the College of Professional Studies and Fine Arts, Joyce Gattas, the School of Communication Director, William Snavely, and the Basic Course Director, Kurt Lindemann, all of whom facilitated this project at every turn.
