Assessment of Communication Skills and Related Constructs

I have a friend who, as a teenager, baked a layer cake for entry in the county 4-H Fair. After she stacked the layers and frosted the cake, it looked great, and she was all set. It was only when the judges attempted to slice the cake that they discovered that she had not removed the wax paper between each layer of her cake! Needless to say, she did not take home a ribbon for her efforts. Not all cakes are created equal. And the same is true of efforts at communication.

The focus of this chapter is on issues pertaining to the assessment of communication skills and other variables associated with communication proficiency. Suppose that the county-fair judges in the story above had awarded my friend’s cake a blue ribbon, and that every other entry also received a blue ribbon. In that case we would have to conclude that there was a problem with their evaluation process. I noted in the Introduction that “communication can be done well or poorly,” but suppose that all efforts at communication (whether they be political speeches, job interviews, handling marital conflict, disciplining children, etc.) were judged to be of the same quality. Again, we’d have to conclude that something was amiss in making those proficiency judgments.

The purpose of this chapter is to lay out a basic approach to thinking about issues of assessment in general, and communication-skill assessment in particular, and then to use that conceptual framework to help us understand the characteristics of desirable skill-assessment techniques (along with common problems you want to avoid).

Theoretical Constructs and Operationalizations

If you were asked to list the qualities of your best friend, you’d probably use terms like “generous,” “dependable,” “talented,” and “funny.” But take note of an important point here: There is a difference between abstract characterizations like “generous” and “dependable” (which you can’t actually see) and observable behaviors (i.e., things you can see). You can see your friend devoting her Thanksgiving Day to working in a food kitchen or promptly arriving every week to give an elderly person a ride to the doctor, but you have to infer that these behaviors are indicative of attributes like generosity and dependability.

The distinction between conceptual abstractions and observable phenomena is crucial because it goes right to the heart of this chapter’s focus on assessment. Theories and models of communication skill and competence are developed at the level of conceptual abstractions - what are termed theoretical constructs - concepts that are not directly observable, but that might reasonably be assumed to be indexed by measures of what is observable (Kaplan, 1964). As examples, from Chapter 3, consider that “effectiveness,” “appropriateness,” “other orientation,” “rewardingness,” and so on, just like “generosity” and “dependability,” can only be inferred from other, more directly observable, indices. I can’t directly see (measure, assess) “effectiveness” - but I can inspect the number of new contracts written by each member of my company’s sales staff and the satisfaction ratings supplied by their clients. In the lab, I can’t put a number on “other orientation,” but I can count the number of seconds a person spends in “other-directed gaze.”

The process of going from theoretical constructs to measurable indices involves operationalizations. If I was interested in studying “speech fluency,” for example, I might operationalize that theoretical construct by assessing speech rate (i.e., words per minute). And, if I wanted to test the hypothesis that speech fluency is related to “credibility” (another theoretical construct), I could use a measure like McCroskey’s (1966) source credibility instrument and have listeners rate speakers according to how knowledgeable and trustworthy they appeared to be. Testing the hypothesis that speech fluency is positively related to credibility, then, would be as simple as examining the correlation between speakers’ words per minute and listeners’ ratings of their credibility (see Figure 4.1).
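To make this concrete, here is a minimal sketch in Python (with invented numbers for a handful of hypothetical speakers; nothing here comes from an actual study) of what testing the fluency-credibility correlation might look like once the two operationalizations are in hand:

```python
from scipy.stats import pearsonr

# Hypothetical operationalizations for six speakers:
# fluency as words per minute, credibility as mean listener rating (1-7).
words_per_minute = [110, 125, 135, 150, 165, 180]
credibility_rating = [3.2, 3.9, 4.1, 4.0, 5.3, 5.8]

r, p = pearsonr(words_per_minute, credibility_rating)
print(f"r = {r:.2f}, p = {p:.3f}")  # a sizable positive r would support the hypothesis
```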

Person Factors and Scales

Much of the research on communication skill involves examination of potential relationships between personal attributes and communication proficiency. These person factors (also called “individual-difference variables”) include theoretical constructs like self-esteem, extraversion, charisma, empathy, and conscientiousness, to name just a few. There are various ways to operationalize constructs such as these, but far and away the most common technique is to administer a scale - basically a questionnaire comprising items that are thought to tap the construct in question. So, the Rosenberg Self-Esteem Scale (1989), for example, includes items like “I feel that I have a number of good qualities,” and “I am able to do things as well as most people.” The idea, then, is that people’s responses to a particular scale will allow the researcher to put a number on each person’s level of self-esteem, extraversion, etc.

FIGURE 4.1

Case in Point: The BLIRT

Some years ago there was a graduate student in my department who had absolutely no “filter” on what came out of her mouth; as soon as a thought coalesced in her mind, it was on its way to her lips. Behavior of this sort is the essence of the theoretical construct “blirtatiousness,” which according to Swann and Rentfrow (2001), involves “how quickly, frequently, and effusively people respond to their partners” (p. 1160). “High blirters,” then, like our former grad student, “let their thoughts fly;” “low blirters,” in contrast, are more likely to “hold their tongue.” In order to operationalize this construct, Swann and Rentfrow developed the Brief Loquaciousness and Interpersonal Responsiveness Test, the BLIRT, which you can access here: https://abs.la.utexas.edu/swann/files/2016/03/Blirtatiousness.pdf

People scoring high on the BLIRT are likely to endorse items like “If I have something to say, I don’t hesitate to say it,” while their low-blirt counterparts are more likely to agree with scale items such as “If I disagree with someone, I tend to wait until later to say something.”

The Quality of an Operationalization: Reliability and Validity

If you’ve ever taken on a carpentry project you’re probably familiar with the adage to “measure twice and cut once” - the idea being that there is a possibility of error, of being slightly “off,” in measuring a 2 x 4 before cutting it to length. It turns out that there is some element of error in virtually every measurement: A reading of “10.0” on the gas pump means that you got approximately 10 gallons of gas, a thermometer reading of “98.6” indicates that your body temperature is close to that figure, and do we even have to talk about the accuracy of your bathroom scale? In fact, there is a degree of error in even the most accurate atomic clocks (okay, it is less than 1 second in 30 million years, but it is there nonetheless). The upshot is that any measurement (an “observed” value) reflects a “true” component and an “error” component:

X_observed = X_true + X_error

The goal of developing an operationalization of a theoretical construct, then, is to minimize the error component so that the observed value of the measure is as close to the true value as possible. If you imagine making multiple measurements, the idea is to achieve a situation like that depicted in panel “A” of Figure 4.2. The error component is minimized, and time after time, you’re hitting the “bullseye” - i.e., the construct you’re trying to assess. Other factors, though, may inflate the error term, making the accuracy of assessments suspect, and it is here that we encounter issues of reliability and validity.
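One quick way to see the observed = true + error idea at work is to simulate it. The sketch below (my own illustration, with arbitrary values) draws repeated measurements of a single true score under a small and a large error component, mirroring the tight cluster of panel “A” and the scatter of panel “B” in Figure 4.2:

```python
import numpy as np

rng = np.random.default_rng(42)
true_score = 50.0

# Ten repeated measurements under small vs. large error components.
precise = true_score + rng.normal(0, 1, size=10)   # panel "A": observations cluster near 50
noisy   = true_score + rng.normal(0, 10, size=10)  # panel "B": observations scatter widely

print("small error:", precise.round(1))
print("large error:", noisy.round(1))
```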

FIGURE 4.2

Reliability

Reliability refers to the stability or consistency of an assessment tool. What could be more useless to a carpenter than an elastic tape measure - the more you stretch it, the shorter the board gets! Or what about a bathroom scale that never gives the same weight when you repeatedly step on and off? Or, imagine a person completing a scale like the BLIRT multiple times and never getting the same score twice. Those situations would be like that depicted in panel “B” of Figure 4.2 - every assessment gives a different result.

Now, there are several techniques available for quantifying various aspects of reliability, but here we really only need to touch on the three approaches to establishing reliability that you’re most likely to encounter. The most obvious way of examining the stability and consistency of a measurement tool is to administer the instrument to a group of people, and then, after some period of time, administer that instrument to the same group of people once again. Establishing test-retest reliability, then, simply involves examining the correlation between scores at Time 1 and Time 2. Remember from Chapter 3 that correlation coefficients can range in magnitude from 0.0 to 1.0, where a correlation of 0.0 would mean that there was no relationship whatsoever (i.e., zero test-retest reliability), and a correlation of 1.0 means that knowing a person’s score at Time 1 would tell you with 100% certainty what his score was at Time 2. In the case of the BLIRT, for example, Swann and Rentfrow (2001) report a test-retest reliability coefficient of .77 with a lag of three months between the two administrations of that scale.

The second method of establishing reliability that you’re likely to come across involves thinking about “consistency” in a different way. Scales (like the BLIRT) typically involve a number of items that are intended to index the same theoretical construct.1 (If you look at the BLIRT you’ll see that it consists of eight items, all tapping, in one way or another, people’s propensity to speak their mind.2) The idea behind the internal consistency, or reliability, of a measure, then, is that all the items in the scale actually are assessing the same construct. If one or more of the items in the scale taps into something different from the other items, that reduces the internal reliability of the scale. One simple way of putting a number on this sort of reliability is to administer the instrument to a group of people, and then examine the correlation between their total score on half the scale items and their total score on the other half of the items. So, in order to establish the split-halves reliability of the BLIRT you might look at the correlation between the odd-numbered items and the even-numbered items of that scale. A more sophisticated method of establishing the internal reliability of a scale is the α (“alpha”) reliability coefficient (Cronbach, 1951). Instead of examining the correlation between two particular halves of a scale, one way of thinking about Cronbach’s α is that it is based on the average correlation between every single item in the scale and every other single item in that scale (i.e., for the BLIRT, the correlation between Item 1 and Items 2, 3, 4, etc. - 28 correlations in all).3 And, because α is based on correlation coefficients, it will still range in magnitude from 0.0 to 1.0. In the case of the BLIRT, for example, Swann and Rentfrow (2001) report an α value of .84 - meaning that all the items in the scale pretty well “hang together” in assessing the same construct.
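The arithmetic behind these internal-consistency indices is easy to see in a few lines of code. The sketch below uses hypothetical item responses (not actual BLIRT data) to compute a split-half correlation and Cronbach’s α from its standard variance-based formula:

```python
import numpy as np

# Hypothetical responses: 6 people x 8 items, each rated 1-5.
scores = np.array([
    [4, 5, 4, 4, 5, 4, 5, 4],
    [2, 1, 2, 2, 1, 2, 1, 2],
    [3, 3, 4, 3, 3, 4, 3, 3],
    [5, 4, 5, 5, 4, 5, 4, 5],
    [1, 2, 1, 1, 2, 1, 2, 1],
    [3, 4, 3, 3, 4, 3, 4, 3],
])

# Split-halves: correlate totals on odd-numbered items with totals on even-numbered items.
odd_total = scores[:, 0::2].sum(axis=1)
even_total = scores[:, 1::2].sum(axis=1)
split_half_r = np.corrcoef(odd_total, even_total)[0, 1]

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total scores).
k = scores.shape[1]
item_vars = scores.var(axis=0, ddof=1)
total_var = scores.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)

print(f"split-half r = {split_half_r:.2f}, alpha = {alpha:.2f}")
```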

The last of our “big three” methods of assessing reliability again reflects a slightly different “take” on how to operationalize a construct and go about establishing the consistency of that operationalization. Suppose that you were interested in studying blirtatiousness, but instead of relying on the BLIRT, you determined that the best way to operationalize that construct would be to have judges (similar to Olympic diving judges or 4-H cake judges) observe people in the course of their everyday lives, and then simply rate them along some low-to-high, numerical (e.g., 1-to-7) scale. Framed in this way, reliability becomes a question of consistency across judges, what is termed interrater reliability. If you’ve got a judge who is out of step with the others (maybe he has a different understanding of the meaning of blirtatiousness, or a diving judge suffering from macular degeneration, or a 4-H judge who prefers his cake with wax paper), interrater reliability is going to take a hit. In cases where there are just two judges, establishing interrater reliability is as simple as having each judge independently rate each person or item (contestant, cake, etc.) and once again computing the correlation between their respective ratings. And there are simple techniques for extending that basic correlation-based approach in cases where there are three or more judges.
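For the two-judge case, then, the computation is just the correlation between the judges’ ratings. One simple, correlation-based way of handling three or more judges (sketched here with invented ratings; more formal indices such as intraclass correlations also exist) is to average the correlations across all pairs of judges:

```python
from itertools import combinations
import numpy as np

# Hypothetical blirtatiousness ratings (1-7) of eight people by three judges.
ratings = {
    "judge_1": [2, 5, 6, 3, 7, 4, 1, 5],
    "judge_2": [3, 5, 6, 2, 7, 4, 2, 6],
    "judge_3": [2, 4, 7, 3, 6, 5, 1, 5],
}

# Average the Pearson correlation over every pair of judges.
pair_rs = [np.corrcoef(ratings[a], ratings[b])[0, 1]
           for a, b in combinations(ratings, 2)]
print(f"mean interrater r = {np.mean(pair_rs):.2f}")
```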

Validity

Earlier in this chapter we saw that accurate assessment of some construct involves minimizing the “X_error” term so that “observed” values come as close as possible to the “true” values of that construct. And we’ve seen that a lack of reliability (i.e., stability and consistency) inflates the size of the “error” term - a state of affairs like that depicted in panel “B” of Figure 4.2. But notice that there is a second way that assessments might deviate from true scores. In panel “C” of Figure 4.2 we have a situation where repeated assessments are consistent, but they are also “consistently off the mark.” And this is where issues of validity come into play. There are several concepts that fall under the general heading of “validity” (e.g., “face validity,” “predictive validity,” etc.), but here our concern need only be with construct validity - the extent to which an operationalization actually measures the construct of interest (Cronbach & Meehl, 1955).

We can see an example of “slippage” between what an operationalization is thought to assess and what it might actually be tapping in the case of college course evaluations. End-of-semester teaching evaluations typically include items assessing the instructor’s knowledge of the subject matter, preparation, organization and clarity, and so on. Ambady and Rosenthal (1993), though, designed a clever study in which they correlated actual students’ end-of-semester evaluations with general impressions from a second group of judges who saw three 10-second silent clips of each teacher in action. That’s right: They correlated the course evals from students who had been in the class for an entire semester with global impressions from people who had seen the instructor for a total of 30 seconds. The correlation between the actual end-of-semester evaluations and a global rating based on three 10-second silent snippets was .76! So much for tapping knowledge of subject matter, preparation, clarity, and so on (see Figure 4.2, panel “C”).4

In contrast to reliability, where assessment of test-retest reliability, internal consistency, and interrater reliability is relatively straightforward, establishing the construct validity of an operationalization is rather more involved. A useful metaphor for thinking about establishing validity is that of constructing a stone wall: Piece by piece, stone by stone, evidence is assembled to build an overall case for validity. And this evidence can come in various forms (Campbell & Fiske, 1959; Cronbach & Meehl, 1955). The method of group differences, for example, is based on the idea that certain groups of people should be expected to score higher on a particular measure than members of other groups. In the case of blirtatiousness, Swann and Rentfrow (2001) reasoned that people employed in auto sales ought to score higher than librarians - makes sense, doesn’t it? Sure enough, that’s what they found: Salespersons had higher average BLIRT scores than librarians (one stone in the wall). The method of convergent validity involves examining associations with conceptually related variables. We might expect that, although blirtatiousness and extraversion are conceptually distinct constructs, people high on one should have at least some tendency to be high on the other. And Swann and Rentfrow found that correlation to be .34. Similar results were found for self-esteem and impulsivity - constructs that are positively correlated with blirtatiousness, but the correlation is not so large as to suggest that they are just different names (or measures) for the same thing. The logic of the method of discriminant validity involves showing that there is no correlation where, conceptually, there should not be one. The definition of blirtatiousness, for example, gives no reason to expect that it will be related to students’ GPA, and that correlation turned out to be virtually zero.5 The idea, again, is that these various kinds of evidence accumulate, stone by stone, and thus give some level of confidence that the operationalization in question is more akin to the situation depicted in panel “A” of Figure 4.2 than that given in panel “C.”
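The pattern that convergent and discriminant evidence should take - moderate correlations with conceptually related constructs, near-zero correlations with conceptually unrelated ones - can be illustrated with a toy simulation (invented data, not Swann and Rentfrow’s; the effect sizes are arbitrary):

```python
import numpy as np

# Hypothetical standardized scores for 200 people on four measures.
rng = np.random.default_rng(0)
blirt = rng.normal(size=200)
extraversion = 0.35 * blirt + rng.normal(size=200)  # related construct: moderate r expected
self_esteem = 0.30 * blirt + rng.normal(size=200)   # related construct: moderate r expected
gpa = rng.normal(size=200)                          # unrelated construct: r near zero expected

corr = np.corrcoef([blirt, extraversion, self_esteem, gpa])
labels = ["extraversion", "self-esteem", "GPA"]
for label, r in zip(labels, corr[0, 1:]):
    print(f"BLIRT x {label}: r = {r:.2f}")
```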

socially skilled____________________socially unskilled

Assessments of communication proficiency in some contexts may be as simple as ticking off items in a checklist (e.g., “asked the caller’s name,” “asked the caller’s birthdate,” “asked ‘how can I help?’,” etc.). More typically, though, skill assessments involve the use of some sort of rating scale comprising items like that in the section heading immediately above (which comes from an instrument developed by Trower, Bryant, and Argyle, 1978). In fact, there are a great many scales available for assessing general social skills (e.g., Barkham, Hardy, & Startup, 1996; Lowe & Cautela, 1978; McCroskey & McCroskey, 1988; Riggio, 1986; Rubin & Martin, 1994; Wiemann, 1977), as well as more specific communication-relevant abilities such as listening (e.g., Bodie, Winter, Dupuis, & Tompkins, 2019; Pearce, Johnson, & Barker, 2003) and empathy (e.g., Davis, 1983; Joliffe & Farrington, 2006).6

Beyond the distinction between general assessments of communication skills versus measures that are more narrowly focused on specific communication functions and contexts, there are other dimensions along which assessment instruments can be distinguished, three of which are of particular note. First, scales differ in terms of whether their focus is episodic or dispositional. Episodic assessments are concerned with a person’s behavior on a particular occasion (i.e., episode), like a job interview or sales call; a grade on your informative speech in public speaking class is an episodic assessment. Dispositional measures, in contrast, are aimed at assessing how well a person generally performs (i.e., over numerous occasions). Second, skill-assessment instruments differ according to whether their focus is on molar or molecular characterizations of behavior. Molar assessments involve the sort of abstract characterizations of behavior introduced earlier in this chapter. Descriptions like “friendly,” “nervous,” and “domineering” make no reference to observable behaviors. By way of contrast there are lower-level, molecular characterizations like “smiling,” “self-adaptors,” and “interruptions.” Finally, communication-skill-assessment measures can be distinguished according to who provides the evaluation. In the world of business, assessments are typically made by supervisors and others higher up on the organizational “totem pole.” College course evaluations come from students. But very often research on communication skills involves the use of scales where people are asked to provide their own assessments of their abilities.

Case in Point: The CSRS

There is a particularly good illustration of these differences in approaches to communication-skill assessment in the Conversational Skills Rating Scale (CSRS), developed by Brian Spitzberg (2007), that you can access here: https://www.natcom.org/sites/default/files/pages/Assessment_Resources_Conversation_Skills_Rating_Scale_2ndEd.pdf

Notice that the CSRS is available in different versions. There is a version for reporting on a specific episode (“Rate how skillfully YOU used, or didn’t use, the following communicative behaviors in the conversation ...” p. 28) and a dispositional version (“Rate how skillfully YOU GENERALLY use, or do not use, the following communication behaviors in your conversations ...” p. 31).

With respect to the issue of “who provides the evaluation” there is a “partner-rating” version (“Rate how skillfully YOUR PARTNER used, or didn’t use, the following communicative behaviors in the conversation ...” p. 27), an “observer-rating” version (“Rate how skillfully THIS INTERACTANT used or didn’t use, the following communicative behaviors in the conversation ...” p. 29), and a “self-rating” version (“Rate how skillfully YOU used, or didn’t use, the following communicative behaviors in the conversation ...” p. 28). And, finally, you can see that each version of the CSRS assesses both molar (e.g., “good conversationalist,” “socially skilled”) and more molecular (e.g., “speaking rate,” “lean toward partner,” “asking questions”) characterizations of behavior.

Reliability and Validity in Communication-Skill Assessments

When it comes to communication-skill assessments, the earlier discussions of reliability (test-retest, internal consistency, interrater reliability) and construct validity still apply. But in wrapping things up a couple of additional points concerning systematic threats to validity do merit mention. The first of these concerns self-reports of communication skills (i.e., individuals evaluating their own abilities and/or performance). Recall from the discussion of the “communication skills paradox” in Chapter 1 that people tend to misjudge (usually overestimate) their communication skills. As Brian Spitzberg (2015b) has noted, “Everyday communicators can be surprisingly ignorant or forgetful about their actual communication behavior” (p. 253). Indeed, a great many studies show that people tend to evaluate their own communication performance more positively than other people do (see Spitzberg, 2015a). It is also noteworthy that this general pattern extends beyond communication abilities to broader evaluations of job performance, where research shows that the correlation between supervisor ratings and self-ratings is actually quite low (Heidemeier & Moser, 2009).

A second example of “missing the mark” in communication performance assessments involves the potential biasing effects of stereotypes - mental representations pertaining to assumed characteristics of members of particular groups (Fiske & Taylor, 2013). Take the “physical attractiveness stereotype” (Eagly, Ashmore, Makhijani, & Longo, 1991) as a case in point. Despite what your mother may have told you about “beauty being in the eye of the beholder,” the fact of the matter is that your mother was wrong. There is very high interrater reliability in making judgments of physical attractiveness (Langlois, Kalakanis, Rubenstein, Larson, Hallam, & Smoot, 2000); reliability coefficients, even when people make judgments about members of other cultures and ethnicities, are on the order of .90! It is well-established that physically attractive people enjoy a variety of positive life outcomes, both in their work and in their interpersonal relationships (see Langlois, et al., 2000; Maestripieri, Henry, & Nickels, 2017), but what is most relevant here is that attractive people are stereotypically thought to be more socially competent than their less attractive counterparts (Eagly, et al., 1991; Langlois, et al., 2000). Assessments of communication skills, then, may be influenced by physical appearance, thereby inflating the “X_error” term, and contributing to a situation like that depicted in panel “C” of Figure 4.2. More broadly, stereotypes concerning the attributes of any particular social group may have the same kind of biasing effects in assessing communication skills.

As a general rule, when the performance qualities being evaluated are ill-specified (and more subjective) problems with reliability and validity are likely to creep in - after all, attributes like “leadership ability,” “takes initiative,” and “collegiality” leave a lot of room for individual interpretation. In contrast, clear definitions and objective performance criteria help to minimize such problems, as does training supervisory staff and evaluators in the application of performance criteria to ensure that there is some level of consistency across judges.

Conclusion

This book began with a discussion of “Four Things Everybody Already Knows about Communication Skills” - one of those being “communication can be done well or poorly.” What should now be apparent is that assessment of communication skills can be done well or poorly. Assessments that lack reliability and/or validity may not simply be of limited usefulness; they may actually be a liability. In business and organizational settings where decisions are made on the basis of skill assessments, the GIGO principle (“garbage in, garbage out”) obviously applies: bad skill assessments → bad decisions. With questionable skill assessments, areas where (and people for whom) skill training is warranted may be misidentified; promising employees may be overlooked; staff satisfaction and retention may be negatively impacted; and since skill assessments, even bad ones, don’t come free, there is the overarching issue of return on investment.

We’ve now come to something of a transition point in our examination of communication skill and skill enhancement. Chapters 2, 3, and now 4, have focused on introducing key terms and concepts to equip the reader for engaging the research on communication skill, and the “real-world,” practical implications of that research, in a knowledgeable way. If you’ve got a good handle on these last three chapters, you should be pretty well set to take a seat at the “grown-up table” where communication skills and skill assessment are topics of discussion.

Notes

1. It is important to note that many scales are designed to tap multiple dimensions of a particular construct. Davis’ (1983) Interpersonal Reactivity Index, for example, involves four distinct subscales assessing various components of empathy. Depending upon the specific nature of the construct in question, subscales may or may not be highly correlated.

2. Notice, though, that some items of the BLIRT are worded in one direction (e.g., “I speak my mind as soon as a thought enters my head.”) and others are reverse-worded (e.g., “It often takes me a while to figure out how to express myself.”). This is standard practice in developing a scale and helps ensure that respondents are reading each item more carefully.

3. The magnitude of Cronbach’s α actually depends on two factors: the average correlation between every pair of items in the scale and the number of items in the scale. This means that a low α value can be improved by dropping “bad” items (i.e., items that aren’t highly correlated with the rest of the scale) and by adding additional items. This is why scales often seem to be asking the same question, with minor wording variations, again and again. Two items with just a minor word change are likely to be highly correlated, and writing additional, similar items will increase the total number of items in the scale.

4. Instead, Ambady and Rosenthal’s (1993) data suggest that end-of-semester course evaluations are likely tapping global impressions of “confidence,” “dominance,” “enthusiasm,” “optimism,” and related attributes.

5. Among other approaches to establishing construct validity, one I’ll mention here involves examining associations with other measures of the same construct. Where other reliable and valid measures of a construct exist, one would expect high correlations with a new measure.

6. See Spitzberg (2003) for a review of skill-assessment measures.

References

Ambady, N., & Rosenthal, R. (1993). Half a minute: Predicting teacher evaluations from thin slices of nonverbal behavior and physical attractiveness. Journal of Personality and Social Psychology, 64(3), 431-441. http://dx.doi.org/10.1037/0022-3514.64.3.431

Barkham, M., Hardy, G. E., & Startup, M. (1996). The IIP-32: A short version of the Inventory of Interpersonal Problems. British Journal of Clinical Psychology, 35(1), 21-35. http://dx.doi.org/10.1111/j.2044-8260.1996.tb01159.x

Bodie, G. D., Winter, J., Dupuis, D., & Tompkins, T. (2019). The ECHO Listening Profile: Initial validity evidence for a measure of four listening habits. International Journal of Listening. Advance online publication. http://dx.doi.org/10.1080/10904018.20194611433

Campbell, D. T., & Fiske, D. W. (1959). Convergent and discriminant validation by the multitrait-multimethod matrix. Psychological Bulletin, 56(2), 81-105. http://dx.doi.org/10.1037/h0046016

Cronbach, L. J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16(3), 297-335. http://dx.doi.org/10.1007/BF02310555

Cronbach, L. J., & Meehl, P. E. (1955). Construct validity in psychological tests. Psychological Bulletin, 52(4), 281-302. http://dx.doi.org/10.1037/h0040957

Davis, M. H. (1983). Measuring individual differences in empathy: Evidence for a multidimensional approach. Journal of Personality and Social Psychology, 44(1), 113-126. http://dx.doi.org/10.1037/0022-3514.44.1.113

Eagly, A. H., Ashmore, R. D., Makhijani, M. G., & Longo, L. C. (1991). What is beautiful is good, but ... : A meta-analytic review of research on the physical attractiveness stereotype. Psychological Bulletin, 110(1), 109-128. http://dx.doi.org/10.1037/0033-2909.110.1.109

Fiske, S. T., & Taylor, S. E. (2013). Social cognition: From brains to culture (2nd ed.). Los Angeles, CA: Sage.

Heidemeier, H., & Moser, K. (2009). Self-other agreement in job performance ratings: A meta-analytic test of a process model. Journal of Applied Psychology, 94(2), 353-370. http://dx.doi.org/10.1037/0021-9010.94.2.353

Joliffe, D., & Farrington, D. P. (2006). Development and validation of the Basic Empathy Scale. Journal of Adolescence, 29(4), 589-611. http://dx.doi.org/10.1016/j.adolescence.2005.08.010

Kaplan, A. (1964). The conduct of inquiry: Methodology for behavioral science. New York, NY: Harper & Row.

Langlois, J. H., Kalakanis, L., Rubenstein, A. J., Larson, A., Hallam, M., & Smoot, M. (2000). Maxims or myths of beauty? A meta-analytic and theoretical review. Psychological Bulletin, 126(3), 390-423. http://dx.doi.org/10.1037/0033-2909.126.3.390

Lowe, M. R., & Cautela, J. R. (1978). A self-report measure of social skill. Behavior Therapy, 9(4), 535-544. http://dx.doi.org/10.1016/S0005-7894(78)80126-9

Maestripieri, D., Henry, A., & Nickels, N. (2017). Explaining financial and prosocial biases in favor of attractive people: Interdisciplinary perspectives from economics, social psychology, and evolutionary psychology. Behavioral and Brain Sciences, 40, 1-16. http://dx.doi.org/10.1017/S0140525X16000340

McCroskey, J. C. (1966). Scales for the measurement of ethos. Speech Monographs, 33(1), 65-72. http://dx.doi.org/10.1080/03637756609375482

Pearce, C. G., Johnson, I. W., & Barker, R. T. (2003). Assessment of the Listening Styles Inventory: Progress in establishing reliability and validity. Journal of Business and Technical Communication, 17(1), 84-113. http://dx.doi.org/10.1177/1050651902238546

Riggio, R. E. (1986). Assessment of basic social skills. Journal of Personality and Social Psychology, 51(3), 649-660. http://dx.doi.org/10.1037/0022-3514.51.3.649

Rosenberg, M. (1989). Society and the adolescent self-image (rev. ed.). Middletown, CT: Wesleyan University Press.

Rubin, R. B., & Martin, M. M. (1994). Development of a measure of interpersonal communication competence. Communication Research Reports, 11(1), 33-44. http://dx.doi.org/10.1080/08824099409359938

Spitzberg, B. H. (2003). Methods of interpersonal skill assessment. In J. O. Greene & B. R. Burleson (Eds.), Handbook of communication and social interaction skills (pp. 93-134). Mahwah, NJ: Lawrence Erlbaum.

Spitzberg, B. H. (2007). CSRS: The Conversational Skills Rating Scale: An instructional assessment of interpersonal competence. Washington, D.C.: National Communication Association.

Spitzberg, B. H. (2015a). Assessing the state of assessment: Communication competence. In A. F. Hannawa & B. H. Spitzberg (Eds.), Handbooks of communication science, Vol. 22: Communication competence (pp. 559-584). Berlin, Germany: Mouton de Gruyter.

Spitzberg, B. H. (2015b). The composition of competence: Communication skills. In A. F. Hannawa & B. H. Spitzberg (Eds.), Handbooks of communication science, Vol. 22: Communication competence (pp. 237-269). Berlin, Germany: Mouton de Gruyter.

Swann, W. B., Jr., & Rentfrow, P. J. (2001). Blirtatiousness: Cognitive, behavioral, and physiological consequences of rapid responding. Journal of Personality and Social Psychology, 81(6), 1160-1175. http://dx.doi.org/10.1037/0022-3514.81.6.1160

Trower, P., Bryant, B., & Argyle, M. (1978). Social skills and mental health. Pittsburgh, PA: University of Pittsburgh Press.

Wiemann, J. M. (1977). Explication and test of a model of communicative competence. Human Communication Research, 3(3), 195-213. http://dx.doi.org/10.1111/j.1468-2958.1977.tb00518.x

 