Case Study #1: NASA 1996–2001 Alternative Work Analysis and Selection Validation Projects

Case Study #1 summarizes a NASA BHP project conducted from 1996 to 2001 on alternative work analysis with results applied to astronaut selection, training, and support. The project built on prior NASA analysis of the astronaut job conducted by Jeanneret (1988) using the Position Analysis Questionnaire (PAQ). The first main objective was to examine and document work differences and worker competencies required for long-duration space missions. The second objective was to update, develop, and validate selection measures and training countermeasures for International Space Station (ISS) missions.

BHP Decisions on Work Analysis and Validation in Response to Challenges

In response to the challenges inherent in the analysis of EEPs, BHP made various decisions in 1996 regarding work analysis approaches and methods. The first decision was to use a worker-oriented job analysis with characteristics similar to what

TABLE 3.2

Alternative Validation Approaches, EEP Challenges Addressed, and Requirements, Advantages, and Limitations of Alternative Approaches

Alternative Validation Approaches

EEP Challenges Addressed

Requirements (R), Advantages (A), and Limitations (L)

Validity Generalization (VG) is "the conclusion that validity generalizes across common situational variables that might differ from one setting to another" (Gibson & Caplinger, 2007, p. 35). It involves the "application of meta-analysis to the correlations between an employment test and a criterion, typically jobs or workplace training performance" (McDaniel, 2007, p. 159)

Task dynamism, mission variety, lack of feasibility for local validation, relatively small number of SMEs, changing job, jobs that do not exist

R: Availability of validity evidence, job similarity in major work behaviors, evidence of fairness, substantial similarity in the job performed
A: Cost-effectiveness

L: Job similarity with EEPs would be required

Test transportability (TT) refers to the extent to which tests validated in one location can be used in other locations (Gibson & Caplinger, 2007). Test transportability is likely to provide support for the use of previously validated tests in jobs that share specific characteristics with the EEP

Lack of feasibility for local validation, relatively small number of SMEs, new jobs, changing job

R: Availability of validity evidence, job similarity in major work behaviors, evidence of fairness, substantial similarity in the job performed
A: Cost-effectiveness

L: Job similarity with EEPs would be required. According to Hoffman and McPhail (1998), validated tests can be used without conducting a local validation study provided there is substantial similarity in the job performed, applicants, and incumbents

Synthetic Validity (SV) "describes the logical process of inferring test-battery validity from predetermined validities of the tests for basic work components" (Mossholder & Arvey, 1984, p. 323). These techniques involve predicting test scores from job component scores (Johnson, 2007)

Task dynamism, mission variety, lack of feasibility for local validation, relatively small number of SMEs

R: Availability of validity evidence, similarity in job components, evidence of fairness, substantial similarity in the job performed
A: Cost-effectiveness

L: Similarity in job components would be required. Mossholder and Arvey (1984) also compared synthetic validation with validity generalization, noting that SV requires a more detailed analysis of the job(s) than VG


Synthetic Validity:

JRM/The J-coefficient Model is a method for determining an index of agreement between elements within jobs and tests expected to relate to those job elements (Primoff, 1957, 1959). Thus, if it is known that a test is valid for an element, it is then possible to estimate the value of the test to the job. Peterson and Bownas (1982) referred to this method as the Job Requirement Matrix (JRM)

Changing job, jobs that do not yet exist, future work, and a relatively small number of SMEs

R: Commonality of job components, similar inferences across situations and jobs

A: Practical, legally defensible. One advantage of the J-coefficient is that any set of elements can be evaluated as a pattern. The J-coefficient formula represents the degree to which individual elements uniquely contribute to a test score and provides a rating of the "relative importance of job elements for people who work at a job" (Primoff, 1957), in contrast to traditional validity coefficients, which provide ratings of relative job proficiency for people working in a job. Job elements (requirements) are indicative of aptitudes and abilities and are identified by experts knowledgeable about the job, who rate the importance of various job elements. Assuming these relationships are stable, overall performance can be predicted from job elements, and these predictions can then be translated into validity coefficients.

L: Similarity in job elements required

Synthetic Validity:

Job Component Validity (JCV) is a technique that determines the extent to which incumbents' test scores (or test validity coefficients) relate to an attribute/component's importance to a job (Hoffman et al., 2007; McCormick, 1959)

Changing job, jobs that do not yet exist, future work, and relatively small number of SMEs

R: Commonality of job components, similar inferences across situations and jobs. A basic assumption behind JCV is that common behaviors shared by different jobs require the same individual attributes to carry out those behaviors (McCormick, DeNisi, & Shaw, 1979). This information indirectly links test scores and job components across jobs
A: Practical, legally defensible, compatible with the Uniform Guidelines
L: Similarity in job elements required

is today called competency modeling (Campion et al., 2011), an alternative work analysis methodology defined in Table 3.1. NASA decided to integrate into the work analysis process elements of what is now called strategic work analysis (Sackett et al., 2012), which consisted of asking SMEs to describe competencies needed for the ISS astronaut job that NASA had not yet identified. NASA also analyzed the work context by examining psychosocial, physical, and structural job demands unique to the work of long-duration space station astronauts. The decision was made to combine qualitative focus groups with quantitative questionnaires completed by SMEs chosen under strict inclusion criteria, regardless of the resulting sample size. Technological integration was also included, particularly the use of group decision support systems and computer technology for collecting data and administering questionnaires.

NASA relied on content and construct validation of selection measures and training countermeasures. The NASA BHP examined the relationship between selection measures and qualitative judgments of applicants' competency levels, and examined meta-analytic evidence in the published literature for validity generalization of some measures. The alternative method of validity generalization was used because quantitative performance data were not available: during the 1996–1998 period it was not possible to conduct a local validation study to examine the criterion-related validity of selection measures for ISS missions that began years later. The first International Space Station mission, ISS Expedition 1, began on October 31, 2000 (NASA, 2015).
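The validity-generalization reasoning described above can be illustrated with a bare-bones Hunter–Schmidt-style computation: pool validity coefficients across prior studies, then ask how much of their variability is attributable to sampling error alone. The study values below are purely illustrative, not NASA data.

```python
# Minimal validity-generalization (psychometric meta-analysis) sketch.
# Each study: (observed validity coefficient r, sample size N). Illustrative numbers.
studies = [(0.30, 100), (0.25, 150), (0.35, 120), (0.28, 90)]

total_n = sum(n for _, n in studies)

# Sample-size-weighted mean validity across studies.
mean_r = sum(r * n for r, n in studies) / total_n

# Sample-size-weighted observed variance of the coefficients.
obs_var = sum(n * (r - mean_r) ** 2 for r, n in studies) / total_n

# Expected variance due to sampling error alone (Hunter-Schmidt approximation).
avg_n = total_n / len(studies)
err_var = (1 - mean_r ** 2) ** 2 / (avg_n - 1)

# Residual variance: spread that sampling error cannot explain. A residual
# near zero supports the conclusion that validity generalizes across settings.
residual_var = max(0.0, obs_var - err_var)

print(round(mean_r, 3), round(residual_var, 4))  # prints 0.293 0.0
```

With these illustrative inputs, sampling error alone exceeds the observed spread of the coefficients, so the residual variance is zero, which is the pattern that supports a validity-generalization conclusion.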
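The synthetic-validity entries in Table 3.2 share one core move: when local validation is infeasible, infer a battery's validity for the target job by weighting known test-component validities by each component's importance to that job. The sketch below shows only that weighted-combination logic, not Primoff's exact J-coefficient formula; all component names, importance weights, and validity values are hypothetical.

```python
# Toy synthetic-validity estimate: combine known test-component validities,
# weighted by each component's rated importance for the target job.
# Component names, importances, and validities are hypothetical.
components = {
    "communication": {"importance": 0.5, "test_validity": 0.30},
    "teamwork":      {"importance": 0.3, "test_validity": 0.25},
    "technical":     {"importance": 0.2, "test_validity": 0.40},
}

total_importance = sum(c["importance"] for c in components.values())

# The importance-weighted average of component validities stands in for the
# locally unobtainable criterion-related validity of the battery.
synthetic_validity = sum(
    c["importance"] * c["test_validity"] for c in components.values()
) / total_importance

print(round(synthetic_validity, 3))  # prints 0.305
```

The estimate leans entirely on the requirement noted in the table: the target job's components must genuinely match the jobs in which the component validities were established.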

 