SELECTION, TRAINING, AND DEVELOPMENT OF PERSONNEL: SELECTION

SELECTION

In selection research, predictor refers to any technique or technology used to predict the success of individuals in the application context. In general, the goal of selection technologies is to predict how well people will perform if hired for a job or selected for admission to a specific situation, such as a college. In the following pages, we briefly review several of the most popular predictor categories.

Predictors
Aptitude and Ability Tests

Very generally, aptitude refers to a capacity or suitability for learning in a particular domain or topic area (such as mechanical aptitude), whereas ability refers to a natural or inborn talent to perform in a given area (such as athletic ability). Development and application of tests of aptitude and ability have long been a major focus of activity in the area of selection research. In general, ability and aptitude tests consist of several subcategories. Cognitive ability tests address issues that involve remembering information, producing ideas, thinking, and comparing or evaluating data. Sensory motor tests include physical, perceptual, motor, and kinesthetic measures. Personality, temperament, and motivational tests address those issues as predictors of job success. Additionally, so-called lie-detector or polygraph tests are increasingly used in some situations as a selection and/or screening device, despite the existence of a substantial amount of data that question their validity (see Guion 1991; Sackett and Decker 1979).

One series of projects that has recently addressed aptitude testing involves the development and validation of the (U.S.) Armed Services Vocational Aptitude Battery (ASVAB) (Welsh et al. 1990). This effort involved years of research and dozens of studies that applied and researched the contributions of aptitude-testing techniques to a huge variety of military job settings. To transfer these military technologies to civilian settings, researchers compared ASVAB components to their counterparts in the civilian General Aptitude Test Battery (GATB). These comparisons were reviewed by Hunter (1986) and others, who demonstrated that many ASVAB components are highly similar to their civilian counterparts, thus arguing that many of the military's scales and testing procedures are directly relevant for nonmilitary applications.

A second major research program in this area is known as Project A. That effort (see Campbell 1990 for a review) involved a number of complex research projects designed to validate predictors of hands-on job-performance measures and other criteria for U.S. Army jobs. Project A, conducted at a cost of approximately $25 million, is certainly one of the largest selection-research projects in history. It found that ‘‘core job performance’’ was predicted by general cognitive ability (assessed by the ASVAB), while ratings of effort and leadership, personal discipline, and physical fitness were best predicted by personality measures (McHenry et al. 1990). Other predictors produced only small increments in validity over general cognitive ability in predicting core job performance (Schmidt et al. 1992).
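
To make the notion of an ''increment in validity'' concrete, the following sketch shows how such increments are commonly quantified: as the gain in squared multiple correlation (delta R-squared) when a second predictor is added to general cognitive ability. This is an illustration only; the data are synthetic and the variable names invented, not taken from the Project A analyses.

import numpy as np

# Synthetic data for illustration; not Project A results.
rng = np.random.default_rng(0)
n = 500
g = rng.normal(size=n)                              # general cognitive ability
extra = 0.3 * g + rng.normal(size=n)                # a hypothetical additional predictor
perf = 0.5 * g + 0.1 * extra + rng.normal(size=n)   # criterion: core job performance

def r_squared(X, y):
    """R-squared from an ordinary least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_g = r_squared(g.reshape(-1, 1), perf)
r2_both = r_squared(np.column_stack([g, extra]), perf)
print(f"R^2, g alone:   {r2_g:.3f}")
print(f"R^2, g + extra: {r2_both:.3f}")
print(f"Incremental validity (delta R^2): {r2_both - r2_g:.3f}")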

A common issue in selection research is the question of whether ''general'' ability is more important in determining job success than are various ''special'' abilities (which presumably are components of the overall ''general'' factor). Various authors (Hunter 1986; Jensen 1986; Thorndike 1986; and many others) have argued in favor of the general factor, whereas others (cf. Prediger 1989) have challenged these conclusions, suggesting that specific aptitudes are more important in predicting job performance.

Simulations

Simulations, which are representations of job situations, are increasingly used as predictors in selection situations. Although presumably task relevant, they are nonetheless abstractions of real-world job tasks. Many simulations used in selection occur in so-called assessment centers, in which multiple techniques are used to judge candidates. In such situations, people are often appraised in groups so that both interpersonal variables and individual factors can be directly considered. Simulation exercises are often used to elicit behaviors considered relevant to particular jobs (or tasks that comprise the jobs) for which people are being selected. This includes so-called in-basket simulations for managerial positions, where hypothetical memos and corporate agenda matters are provided to job candidates in order to assess their decision-making (and other) skills.

Recently, computer-based simulations—ranging from low-fidelity ''games'' to highly sophisticated complex environments coupled with detailed performance-measurement systems—have been developed for use as predictors in selection and assessment situations. One example of this approach is team performance assessment technology (TPAT) (e.g., Swezey et al. 1999). This and similar methods present simulated task environments that assess how individuals and/or teams develop plans and strategies and adapt to changes in fluctuating task demands. This technology exposes respondents to complex environments that generate task-relevant challenges. At known times, participants have an opportunity to engage in strategic planning; at other times, emergencies may require decisive action. These technologies are designed to allow participants to function across a broad range of situations that include unique demands, incomplete information, and rapid change. They often employ a performance-measurement technology termed ''quasi-experimental'' (Streufert and Swezey 1985). Here, fixed events of importance require that each participant deal with precise issues under identical conditions. Other events, however, are directly influenced by actions of participants. In order to enable reliable measurement, fixed (preprogrammed) features are inserted to allow for comparisons among participants and for performance assessment against predetermined criteria. Other methodologies of this sort include the tactical naval decision making system (TANDEM), a low-fidelity simulation of a command, control, and communication environment (Dwyer et al. 1992; Weaver et al. 1993), and the team interactive decision exercise for teams incorporating distributed expertise (TIDE2) (Hollenbeck et al. 1991).
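
The measurement logic behind such fixed events can be sketched briefly: because preprogrammed probes occur at identical times for every participant, responses to them can be scored against the same predetermined criteria, while freely unfolding events cannot. The sketch below illustrates this under invented assumptions; the event names, latencies, and scoring rule are hypothetical and do not come from TPAT, TANDEM, or TIDE2.

from dataclasses import dataclass, field

@dataclass
class Participant:
    name: str
    responses: dict = field(default_factory=dict)   # event -> response latency (minutes)

# Fixed (preprogrammed) events: identical timing and content for everyone.
FIXED_EVENTS = {10: "radar contact", 25: "equipment failure", 40: "emergency"}

def score(participant: Participant, max_latency: float = 5.0) -> float:
    """Fraction of fixed events handled within the criterion latency."""
    handled = sum(
        1 for event in FIXED_EVENTS.values()
        if participant.responses.get(event, float("inf")) <= max_latency
    )
    return handled / len(FIXED_EVENTS)

a = Participant("A", {"radar contact": 2.0, "equipment failure": 6.5, "emergency": 1.0})
b = Participant("B", {"radar contact": 4.0, "emergency": 3.0})
print(score(a), score(b))   # scores are comparable because the probes were identical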

Interviews and Biodata

Most reviews concerning the reliability and validity of interviews as a selection device ‘‘have ended with the depressing but persistent conclusion that they have neither’’ (Guion 1991, p. 347). Then why are they used at all? Guion provides four basic reasons for using interviews as selection devices in employment situations:

1. They serve a public relations role. Even if rejected, a potential employee who was interviewed in a competent, professional manner may leave with (and convey to others) the impression that he or she was treated fairly.

2. They can be useful for gathering ancillary information about a candidate. Although collecting some forms of ancillary information (such as personal, family, and social relationship data) is illegal, other (perfectly legitimate) information on such topics as work history and education that may have otherwise been unavailable can be collected in a well-structured interview protocol.

3. They can be used to measure applicant characteristics that may otherwise not be adequately measured, such as friendliness, ability to make a good first impression, and conversational skill.

4. Interviewers are themselves decision makers. In some circumstances, interviewers may be used to arrive at overall judgments about candidate suitability that can be used in the decision process.

Many alternative ways of conducting interviews have been studied. For instance, Martin and Nagao (1989) compared paper-and-pencil, face-to-face, and computerized versions of interviews. They found that socially desirable responses often occurred when impersonal modes of interviewing were used, but that applicants for high-status jobs often resented impersonal interviews. Tullar (1989) found that successful applicants are given longer interviews and dominate the conversation more than less-successful applicants.

As to biodata (information about an individual's past), Hammer and Kleiman (1988) found that only 6.8% of 248 firms surveyed stated that they had ever used biodata in employment decisions, and only 0.4% indicated that they currently used it. Yet there is evidence that biodata has substantial validity as a selection device (Schmidt et al. 1992). The most frequent reasons given for not using biodata were (a) lack of knowledge, (b) lack of methodological expertise, and (c) lack of personnel, funding, and time.

An important assumption in the use of biodata as a selection tool is that past behaviors—often behaviors far in the past (e.g., high school achievements)—are good predictors of future behaviors. However, Kleiman and Faley (1990) found that biodata items inquiring about present behaviors were just as valid as those focusing on past behaviors in predicting intention to reenlist in the Air National Guard, and Russell et al. (1990) found that retrospective life-history essays could serve as the source of valid biodata. These and other studies indicate that biodata may be a significantly underutilized source of information in personnel selection situations.

Work Samples

Unlike simulations, which are not the actual activities performed in the workplace but an abstraction or representation of aspects of those activities, work sample testing involves the actual activities performed on the job. These are measured or rated for use as a predictor. The logic applied to work sample testing is somewhat different from other selection technologies in that it employs a ''criterion-based'' perspective: the performance being tested is usually a subset of the actual task and is derived directly from the content domain of that task. This stands in contrast to other types of selection methodologies, where some ''construct'' (such as personality, aptitude, ability, or intelligence) is measured and related statistically (usually correlated) to actual job performance (see Swezey 1981 for a detailed discussion of this issue).

In work sample testing, a portion of an actual clerical task that occurs every day may be used to measure clerical performance (e.g., ''type this letter to Mr. Elliott''). Typically, such tests sample the work domain in some representative fashion; thus, typing a letter would be only one aspect of a clerical work sample test that may also include filing, telephone answering, and calculating tasks.

Work sample testing is less abstract than other predictive techniques in that actual performance measurement indices (e.g., counts, errors) or ratings (including checklists) may be used as indices of performance. Where possible, work sample testing (or its close cousin, simulation) is the preferred method for use in selection research and practice, because one does not become mired in deciding how to apply abstract mental constructs to the prediction of job performance.
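
As a simple illustration of such criterion-based scoring, the sketch below combines checklist results from several clerical subtasks into an overall work sample score. The subtask names, item counts, and the unweighted averaging rule are assumptions made for the example, not a prescribed scoring scheme.

# Hypothetical checklist results for a clerical work sample.
checklist = {
    "typing":      {"passed": 18, "total": 20},
    "filing":      {"passed": 9,  "total": 10},
    "telephone":   {"passed": 7,  "total": 10},
    "calculating": {"passed": 10, "total": 10},
}

def work_sample_score(results: dict) -> float:
    """Unweighted mean proportion of checklist items passed across subtasks."""
    proportions = [r["passed"] / r["total"] for r in results.values()]
    return sum(proportions) / len(proportions)

print(f"Overall work sample score: {work_sample_score(checklist):.2f}")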

References

The use of references (statements from knowledgeable parties about the qualifications of a candidate for a particular position) as a predictor of job performance or success is the most underresearched—yet widely used—selection technique. The problem with using references as a selection technique is that, in most cases, the persons supplying the references have unspecified criteria. That is, unless the requesting parties provide explicit standards for use in providing the reference, referees may address whatever qualities, issues, or topics they wish in providing their comments. This wide-ranging, subjective, unstructured technique is therefore of highly questionable validity as a predictor of job performance. Nevertheless, the technique is widely used in employment and other application-oriented settings (e.g., college applications) because we typically value the unstructured opinions of knowledgeable people concerning the assets and/or liabilities of an applicant or candidate. The key word here is ''knowledgeable.'' Unfortunately, we often have no way of determining a referee's level of knowledge about a candidate's merits.

Others

Among the many other predictors used as selection devices are such areas as graphology, genetic testing, investigative background checks, and preemployment drug testing. Of these, preemployment drug testing has undoubtedly received the most attention in the literature. In one noteworthy effort, Fraser and Kroeck (1989) suggested that the value of drug screening in selection situations may be very low. In partial reaction to this hypothesis, a number of investigations were reviewed by Schmidt et al. (1992). Those authors cite the work of Fraser and Kroeck and of Murphy et al. (1990), who, using college student samples, found that individual drug use and self-reported frequency of drug use were consistently correlated with disapproval of employee drug testing. However, no substantial associations between acceptability of drug testing and employment experience or qualifications were established.

Validity and Fairness

As mentioned above, test validity is a complex and ambiguous topic. Basically, one can break this domain into two primary categories: (1) psychometric validities, which include criterion-based validities (addressing the relationship between predictors and criteria, either prospectively [predictive] or simultaneously [concurrent]) and construct validity (which relates predictors to psychological constructs, such as abilities and achievement); and (2) content (or job/task-related) validities. See Guion (1991) for a discussion of these topics.
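
In practice, a criterion-based validity coefficient is usually just the correlation between predictor scores and a criterion measure; whether it is called predictive or concurrent depends on when the criterion is collected, not on the arithmetic. The sketch below computes such a coefficient on synthetic data; the variable names and effect size are invented for illustration.

import numpy as np

rng = np.random.default_rng(1)
n = 200
test_scores = rng.normal(size=n)                          # predictor (e.g., an aptitude test)
job_performance = 0.4 * test_scores + rng.normal(size=n)  # criterion, measured later or concurrently

validity = np.corrcoef(test_scores, job_performance)[0, 1]
print(f"Criterion-based validity coefficient: r = {validity:.2f}")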

Various U.S. court rulings have mandated that job-related validities be considered in establishing test fairness for most selection and classification situations. These issues are addressed in the latest Standards for Educational and Psychological Testing (1999) document.

Performance Measurement and Criterion Assessment

An oft-quoted principle is that the best predictor of future performance is past performance. However, current performance can be a good predictor as well. In selecting personnel to perform specific tasks, one may look at measurements of their performance on similar tasks in the past (e.g., performance appraisals, certification results, interviews, samples of past work). As noted earlier, one may also measure their current performance on the tasks for which one is selecting them, either by assessing them on a simulation of those tasks or, when feasible, by using work samples. A combination of both types of measures may be most effective. Measures of past performance show not only what personnel can do, but what they have done. In other words, past performance reflects motivation as well as skills. A disadvantage is that, because past performance occurred under different circumstances than those under which the candidates will perform in the future, the ability to generalize from past measures is necessarily limited. Because measures of current performance show what candidates can do under the actual (or simulated) conditions in which they will be performing, using both types of measures provides a more thorough basis on which to predict actual behaviors.

Whether based on past performance or current performance, predictors must not only be accurate, they must also be relevant to the actual tasks to be performed. In other words, the criteria against which candidates' performance is assessed must match those of the tasks to be performed (Swezey 1981). One way to ensure that predictors match tasks is to use criterion-referenced assessment of predictor variables for employee selection. To do this, one must ensure that the predictors match the tasks for which one is selecting candidates as closely as possible on three relevant dimensions: the conditions under which the tasks will be performed, the actions required to perform the tasks, and the standards against which successful task performance will be measured. While it is better to have multiple predictors for each important job task (at least one measure of past performance and one of current performance), it may not always be possible to obtain multiple predictors. We do not recommend using a single predictor for multiple tasks—the predictor is likely to be neither reliable nor valid.
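
One way to picture this criterion-referenced matching is to describe every task and every candidate predictor on the same three dimensions and accept a predictor only when all three match. The sketch below does exactly that; the TaskSpec structure and all field values are hypothetical, invented for illustration.

from dataclasses import dataclass

@dataclass(frozen=True)
class TaskSpec:
    conditions: str   # circumstances under which the task is performed
    actions: str      # behavior required to perform the task
    standards: str    # criterion for successful performance

def matches(predictor: TaskSpec, task: TaskSpec) -> bool:
    """True when the predictor mirrors the task on all three dimensions."""
    return (predictor.conditions == task.conditions
            and predictor.actions == task.actions
            and predictor.standards == task.standards)

task = TaskSpec("office, routine interruptions", "type correspondence", "<= 2 errors per page")
work_sample = TaskSpec("office, routine interruptions", "type correspondence", "<= 2 errors per page")
generic_test = TaskSpec("quiet testing room", "answer multiple-choice items", ">= 70% correct")

print(matches(work_sample, task))    # True: criterion-matched predictor
print(matches(generic_test, task))   # False: construct-style measure, not criterion-matched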
