SELECTION, TRAINING, AND DEVELOPMENT OF PERSONNEL: TRAINING

TRAINING

Differences between Training and Education

“Training” (the systematic, structured development of specific skills required to perform job tasks) differs from “education” (development of broad-based informational background and general skills). Most people are more familiar with education than training because (a) their experiences from elementary school through college usually fall within an educational model, and (b) much of what is called training in business and industry is actually more akin to education than training.

Education is important, even essential, for building broad skill areas, but training is the approach of preference for preparing people to perform specific tasks or jobs. Table 1 shows differences between education and training and between educators and trainers. Not all differences apply to any specific instance of training or education, but as a whole they illustrate the distinction between the two.

Training, Learning, Performance, and Outcomes

The purpose of training is to facilitate learning the skills and knowledge required to perform specific tasks. The outcome of training is acceptable performance on those tasks. People often learn to perform tasks without training, but if training is effective, it enables people to learn faster and better. For example, a person might be able to learn to apply the principle of cantilevering to bridge construction through reading and trial and error, but the process would be lengthier and the outcomes would probably be less precise than if the person received systematic, well-designed training.

Because training is a subsystem of a larger organizational system, it should not be treated outside the context of the larger system. In organizations, training is usually requested in two types of circumstances: either current performance is inadequate or new performance is required. In other words, organizations train either to solve current problems or to take advantage of new opportunities. A second factor affecting training is whether it is intended for new or incumbent employees. Training for new opportunities is essentially the same for new and incumbent employees. However, new employees may require prerequisite or remedial training prior to receiving training on new tasks, whereas incumbents presumably would have already completed the prerequisite training (or would have otherwise mastered the necessary skills).


Just as learning can take place without training, organizational outcomes can be achieved without training. In fact, when acceptable performance on specific tasks is a desired organizational outcome, training is frequently not the appropriate way to achieve that outcome (see Gilbert 1978; Harless 1975; Mager and Pipe 1970). The type of analysis that determines factors contributing to performance of an organization’s personnel subsystem has been called by a variety of names, including performance analysis (Mager and Pipe 1970) and front-end analysis (Harless 1975). Mager and Pipe (1970), noting that training is appropriate only when performance is unacceptable because performers lack necessary skills and knowledge, suggested this classic question for making the determination: “Could they do it if their lives depended on it?” If the answer is “no,” then training is warranted. If the answer is “yes,” the next question is, “What is preventing them from performing as required?” Mager and Pipe also suggested various ways of categorizing obstacles to desired performance outcomes that do not consist of skill/knowledge deficiencies (see also Harless 1975; Rummler and Brache 1990). These can be reduced to two broad categories:

1. Lack of environmental support (e.g., well-specified processes, job aids, tools and equipment, management expectations, adequate facilities)

2. Lack of appropriate motivational systems (e.g., clear and immediate feedback, appropriate consequences for action, well-designed reward systems)

For skill/knowledge deficiencies, training interventions are appropriate. Deficiencies involving environmental support and/or appropriate motivational systems should be remedied through other human performance interventions, such as providing clear instructions and developing reward systems. Regardless of the intervention, implementation should always include pilot testing and evaluation components. If desired outcomes are not achieved, the analysis, design, and/or implementation may have been flawed.
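To make this triage concrete, the fragment below sketches the decision logic in Python. It is a minimal illustration, not Mager and Pipe’s actual procedure: the function and enum names are invented, and a real performance analysis weighs far more evidence than two yes/no questions.

```python
# Minimal sketch of the performance-analysis triage described above.
# Names are illustrative; real analyses consider much more evidence.
from enum import Enum

class Intervention(Enum):
    TRAINING = "training"                  # skill/knowledge deficiency
    ENVIRONMENT = "environmental support"  # processes, job aids, tools, facilities
    MOTIVATION = "motivational systems"    # feedback, consequences, rewards

def diagnose(could_do_it_if_life_depended_on_it: bool,
             environment_supports_performance: bool) -> Intervention:
    """Apply the classic Mager-and-Pipe question, then the follow-up."""
    if not could_do_it_if_life_depended_on_it:
        return Intervention.TRAINING       # they lack the skill/knowledge
    if not environment_supports_performance:
        return Intervention.ENVIRONMENT    # something external blocks them
    return Intervention.MOTIVATION         # they can, and nothing blocks them

# Example: performers could do the task but lack tools -> not a training problem.
print(diagnose(True, False))               # Intervention.ENVIRONMENT
```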

Analysis of Organizations, Jobs, Tasks, and People

Organizations consist of people performing tasks to achieve a particular mission. A job is defined as a collection of related tasks that helps accomplish significant elements of an organization’s products and/or services. For training and other human performance-improvement interventions to be successful, they must address appropriate organizational levels and/or units. Organizational analysis seeks to determine relationships among an organization’s inputs, functions, jobs, and products/services. This type of analysis builds upon front-end and performance analysis.

Rummler and Wilkins (1999) introduced a technique known as performance logic to diagnose an organization’s problems and opportunities. The process highlights disconnects among procedures, skills, jobs, and desired product/service outcomes, pointing the way to appropriate training and other human performance-improvement interventions. Others (Mourier 1998) have also demonstrated successes using this approach.

Job analysis has several purposes. One deals with determining how to apportion an organization’s tasks among jobs. This process identifies the tasks that belong with each job, the goal being to reduce overlap while providing clean demarcations between products and processes. From the perspective of an organization’s human resources subsystem, job analysis can be used to organize tasks into clusters that take advantage of skill sets common to personnel classification categories (e.g., tool and die press operator, supervisory budget analyst) and consequently reduce the need for training. By distributing tasks appropriately among jobs, an organization may be able to select skilled job candidates, thereby reducing attendant training costs.

A second purpose of job analysis involves job redesign. During the 1970s, researchers addressed various ways to redesign jobs so that the nature of the jobs themselves would promote performance leading to desired organizational outcomes. For example, Lawler et al. (1973) found that several factors, such as variety, autonomy, and task identity, influenced employee satisfaction and improved job performance. Herzberg (1974) identified other factors, such as client relationship, new learning, and unique expertise, having similar effects on satisfaction and performance. Peterson and Duffany (1975) provide a good overview of the job redesign literature.

Needs analysis (sometimes called needs assessment) examines what should be done so that employees can better perform jobs. Needs analysis focuses on outcomes to determine optimal performance for jobs. Rossett (1987) has provided a detailed needs-analysis technique. Instead of needs analysis, many organizations unfortunately conduct a kind of wants analysis, a process that asks employees and/or supervisors to state what is needed to better perform jobs. Because employees and managers frequently cannot distinguish between their wants and their needs, wants analysis typically yields a laundry list of information that is not linked well to performance outcomes.

Finally, task analysis is a technique that determines the inputs, tools, and skills/knowledge necessary for successful task performance. In training situations, task analysis is used to determine the skill/knowledge requirements for personnel to accomplish necessary job outcomes. Skills gaps are defined as the difference between required skills/knowledge and those possessed by the individuals who are (or will be) performing jobs. Training typically focuses on eliminating the skills gaps. Helpful references for conducting task analyses include Carlisle (1986) and Zemke and Kramlinger (1982).
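Because a skills gap is defined here as the difference between required and possessed skills/knowledge, it can be pictured as a simple set difference. The sketch below is illustrative only; the skill names are invented, and a real task analysis also records inputs, tools, conditions, and standards.

```python
# Skills gap as a set difference (illustrative; skill names are invented).
required  = {"read blueprints", "set die clearance", "operate press", "log output"}
possessed = {"operate press", "log output"}

skills_gap = required - possessed   # what training must address
print(sorted(skills_gap))           # ['read blueprints', 'set die clearance']
```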

Training Design and Development

A comprehensive reference work with respect to training design and development (Goldstein 1993) refers to what is generally known as a systems approach to training. Among many other things, this approach includes four important components:

1. Feedback is continually used to modify the instructional process; thus, training programs are never completely finished products but are always adaptive to information concerning the extent to which programs meet stated objectives.

2. Recognition exists that complicated interactions occur among components of training such as media, individual characteristics of the learner, and instructional strategies.

3. The systems approach to training provides a frame of reference for planning.

4. This view acknowledges that training programs are merely one component of a larger organizational system involving many variables, such as personnel issues, organization issues, and corporate policies.

Thus, in initiating training program development, it is necessary, according to Goldstein, to consider and analyze not only the tasks that comprise the training but also the characteristics of the trainees and organizations involved.

This approach to instruction is essentially derived from procedures developed in behavioral psychology. Two generic principles underlie technological development in this area. First, the specific behaviors necessary to perform a task must be precisely described, and second, feedback (reinforcement for action) is utilized to encourage mastery.

This (and previous work) served as a partial basis for the military’s instructional systems development (ISD) movement (Branson et al. 1975) and the subsequent refinement of procedures for criterion-referenced measurement technologies (Glaser 1963; Mager 1972; Swezey 1981).

For a discussion of the systems approach to training, we briefly review the Branson et al. (1975) ISD model. This model is possibly the most widely used and comprehensive method of training development. It has been in existence for over 20 years and has revolutionized the design of instruction in many military and civilian contexts. The evolutionary premises behind the model are that performance objectives are developed to address specific behavioral events (identified by task analyses), that criterion tests are developed to address the training objectives, and that training is essentially developed to teach students to pass the tests and thus achieve the requisite criteria.

The ISD method has five basic phases: analysis, design, development, implementation, and control (evaluation).

Phase one—analysis:

(a) Perform a task analysis. Develop a detailed list of tasks, the conditions under which the tasks are performed, the skills and/or knowledge required for their performance, and the standards that indicate when successful performance has been achieved.

(b) Select pertinent tasks / functions. Note, by observation of an actual operation, the extent to which each task is actually performed, the frequency with which it is performed, and the percentage of a worker’s time devoted to it. Then make an assessment concerning the importance of each task to the overall job.

(c) Construct job-performance measures from the viewpoint that job-performance requirements are the basis for making decisions about instructional methods.

(d) Analyze training programs that already exist to determine their usefulness to the program being developed.

(e) Identify the instructional setting/location for each task selected for instruction.

Phase two—design:

(a) Describe student entry behavior. Classify, at the task level, the categories into which each entry behavior falls (e.g., cognitive, procedural, decision making, problem solving). Then check whether each activity is within the existing performance repertoire of the trainees (something the potential students already know) or is a specialized behavior (something the students will have to learn).

(b) Develop statements that translate the performance measures into terminal learning objectives. Consider the behavior to be exhibited, the conditions under which the learning will be demonstrated, and the standards of performance that are considered acceptable.

(c) Develop criterion-referenced tests (CRTs). CRTs are tests that measure what a person needs to know or do in order to perform a job. Ensure that at least one test item measures each instructional objective. Determine whether the testing should be done by pencil and paper, by practical exercise, or by other methods. Criterion-referenced tests perform three essential functions:

• They help to construct training by defining detailed performance goals.

• They aid in diagnosis of whether a student has absorbed the training content because the tests are, in fact, operational definitions of performance objectives.

• They provide a validation test for the training program.

(d) Determine the sequence and structure of the training. Develop instructional strategies and the sequencing of instruction for the course.

Phase three—development:

(a) Specify the learning events and activities.

(b) Specify an instructional management plan. Determine how instruction will be delivered and how the student will be guided through it: by group mode, self-paced mode, or combinations of the two. Determine how the curriculum will be presented (e.g., lectures, training aids, simulators, job-performance aids, television, demonstration, and computer-based instruction).

(c) Review and select existing instructional materials for inclusion in the curriculum.

(d) Develop new instructional materials.

(e) Validate the instruction by having it reviewed by experts or job incumbents or by trying it out on typical students.

Phase four—implementation:

(a) Implement the instructional management plan by ensuring that all materials, equipment, and facilities are available and ready for use. Before beginning instruction, schedule students and train instructors.

(b) Conduct the instruction under the prescribed management plan in the designated setting under actual conditions using final course materials.

Phase five—control (evaluation):

(a) Monitor course effectiveness, assess student progress, assess the quality of the course, and compare results with the original learning objectives.

(b) Determine how graduates perform on the job.

(c) Revise the course as appropriate.
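Because phase five feeds evaluation results back into analysis and revision, the five phases form a cycle rather than a straight line. The sketch below summarizes that loop. It is schematic only: the activity lists paraphrase the text, and the objectives_met check stands in for the control phase’s real comparisons of results against learning objectives.

```python
# Schematic of the five-phase ISD cycle described above (Branson et al. 1975).
# Phase names are from the text; activity lists paraphrase it.
ISD_PHASES = {
    "analysis":       ["task analysis", "select tasks", "job-performance measures",
                       "review existing programs", "identify settings"],
    "design":         ["entry behavior", "terminal learning objectives",
                       "criterion-referenced tests", "sequence and structure"],
    "development":    ["learning events", "instructional management plan",
                       "select/develop materials", "validate instruction"],
    "implementation": ["prepare materials, students, instructors", "conduct instruction"],
    "control":        ["monitor effectiveness", "assess on-job performance", "revise"],
}

def isd_cycle(objectives_met, max_revisions: int = 5) -> int:
    """Run the cycle, feeding evaluation results back into analysis."""
    for revision in range(max_revisions):
        for phase, activities in ISD_PHASES.items():
            print(f"{phase}: {', '.join(activities)}")
        if objectives_met():          # phase five: compare results to objectives
            return revision           # number of revision cycles needed
    return max_revisions

# Toy run: pretend evaluation succeeds on the second pass.
results = iter([False, True])
print("revisions:", isd_cycle(lambda: next(results)))
```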

Training Techniques, Strategies, and Technologies

Two factors known to affect quality of training are the techniques used to deliver the instruction and the methods or strategies used to convey the instructional content. One of the main tasks in the development of training, therefore, is the selection and implementation of appropriate instructional methods and techniques.

One function of technology-assessment research is to provide a basis for estimating the utility of particular technology-based approaches for training. Researchers have analyzed virtually every significant technology trend of the last two decades, including effects of programmed instruction (Kulik et al. 1980), computer-based instruction (Kulik and Kulik 1987), and visual-based instruction (Cohen et al. 1981). The majority of these studies show no significant differences among technology groups and, when pooled, yield only slight advantages for innovative technologies. Kulik and associates report that the size of these statistically significant gains is on the order of 1.5 percentage points on a final exam. Cohen et al. (1981) conducted an analysis of 65 studies in which student achievement using traditional classroom-based instruction was compared with performance across a variety of video-based instructional media, including films, multimedia, and educational TV. They found that only one in four studies (approximately 26%) reported significant differences favoring visual-based instruction. The overall effect size reported by Cohen et al. was relatively small compared to studies of computer-based instruction or of interactive video.

Computer-based instruction and interactive video both represent areas where some data reporting advantages have emerged; specifically, reductions in the time required to reach threshold performance levels (Fletcher 1990; Kulik and Kulik 1987). Results of an analysis of 28 studies, conducted by Fletcher (1990), suggested that interactive video-based instruction increases achievement by an average of 0.50 standard deviations over conventional instruction (lecture, text, on-the-job training, videotape).
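The 0.50-standard-deviation figure cited above is a standardized effect size: a mean difference divided by a pooled standard deviation. As a generic, hedged illustration of how such a statistic is computed (the scores below are invented; this is not Fletcher’s data or exact procedure):

```python
# Computing a pooled-SD effect size (Cohen's d style) from two groups of
# invented exam scores; illustrative only.
from statistics import mean, stdev

conventional = [62, 70, 68, 74, 66, 71, 69, 65]   # control group scores
interactive  = [72, 78, 70, 81, 75, 77, 74, 79]   # treatment group scores

def effect_size(treatment, control):
    """(mean difference) / (pooled standard deviation)."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = stdev(treatment), stdev(control)
    pooled = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(treatment) - mean(control)) / pooled

print(round(effect_size(interactive, conventional), 2))  # ~2.0 for these toy data
```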

Results of decades of work in the area of media research led Clark (1983, 1994) to conclude that media have no significant influence on learning effectiveness but are mere vehicles for presenting instruction. According to Clark (1983, p. 445), “the best current evidence is that media are mere vehicles that deliver instruction but do not influence student achievement any more than the truck that delivers our groceries causes changes in our nutrition.” Similarly, Schramm (1977) commented that “learning seems to be affected more by what is delivered than by the delivery system.” Although students may prefer sophisticated and elegant media, learning appears largely unaffected by these features. Basically, all media can deliver either excellent or ineffective instruction. It appears that it is the instructional methods, strategies, and content that facilitate or hinder learning. Thus, many instructional technologies are considered to be equally effective provided that they are capable of dealing with the instructional methods required to achieve intended training objectives.

Meanwhile, teaching methods appear to be important variables influencing the effectiveness of instructional systems. Instructional methods define how the process of instruction occurs: what information is presented, in what level of detail, how the information is organized, how information is used by the learner, and how guidance and feedback are presented. The choice of a particular instructional method often limits the choice of presentation techniques. Technology-selection decisions, therefore, should be guided by: (1) capacity of the technology to accommodate the instructional method, (2) compatibility of the technology with the user environment, and (3) trade-offs that must be made between technology effectiveness and costs.
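One way to operationalize these three criteria is as a simple weighted decision matrix. The sketch below is purely illustrative: the weights, candidate techniques, and ratings are invented, and actual selections should rest on the kind of task and situational analysis discussed elsewhere in this section.

```python
# Illustrative weighted scoring of delivery technologies against the three
# selection criteria listed above. All numbers and names are invented.
weights = {"accommodates_method": 0.5, "fits_environment": 0.3, "cost_tradeoff": 0.2}

candidates = {
    "interactive video":  {"accommodates_method": 9, "fits_environment": 7, "cost_tradeoff": 4},
    "lecture + job aids": {"accommodates_method": 6, "fits_environment": 9, "cost_tradeoff": 9},
}

def score(ratings: dict) -> float:
    return sum(weights[criterion] * rating for criterion, rating in ratings.items())

for name, ratings in candidates.items():
    print(f"{name}: {score(ratings):.1f}")
# The highest-scoring candidate wins only if the weights reflect real priorities.
```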

Conventional instructional strategies such as providing advance organizers (Allen 1973), identifying common errors (Hoban and Van Ormer 1950), and emphasizing critical elements in a demonstration (Travers 1967) work well with many technology forms, including video-based instructional technologies. Embedding practice questions within instructional sequences also appears to be a potent method supported by interactive video and computer-based instruction. Benefits of embedded questions apparently occur whether responses to the questions are overt or covert or whether the material is preceded or followed by the questions (Teather and Marchant 1974). Compared to research on instructional delivery systems, relatively little research has examined the application of instructional methods.

Morris and Rouse (1985) have suggested that strategies used to teach troubleshooting skills should combine both theoretical and procedural knowledge. This was confirmed by Swezey et al. (1991), who addressed these issues using a variety of instructional strategies. Three strategies, termed “procedural,” “conceptual,” and “integrated,” were used. Results indicated that integrated training strategies that combine procedural and theoretical information facilitate retention of concepts and improve transfer to new settings. Swezey and Llaneras (1992) examined the utility of a technology that provides training program developers with guidance for use in selecting both instructional strategies and media. The technology distinguishes among 18 instructional strategies, as well as among 14 types of instructional media. Outputs are based upon decisions regarding the types of tasks for which training is to be developed, as well as consideration of various situational characteristics operating in the particular setting. Three general classes of situational characteristics are considered: (1) task attributes (motion, color, difficulty, etc.), (2) environmental attributes (hazardous, high workload, etc.), and (3) user attributes (level of experience and familiarity with the task). Results indicated that participants who had access to the selection technology made better presentation strategy, media, and study/practice strategy selections than those relying on their own knowledge.

Simulation as a Training Technology

As technology develops, progress is being made in linking various types of instructional devices. Fletcher (1990) has reviewed much of the literature in this area. In general, he has found that computer-based instructional techniques were often more effective than conventional instruction if they included interactive features such as tutorials. Fletcher also notes that the available studies provide little insight into the relative contributions of various features of interactive technology to learning.

Equipment simulators have been used extensively for training aircraft pilots. Jacobs et al. (1990) and Andrews et al. (1992) have reviewed this research. In general, it appears that simulator training combined with training in actual aircraft is more effective than training in the aircraft by itself. According to Tannenbaum and Yukl (1992), the realism of simulators is being enhanced greatly by continuing developments in video technology, speech simulation, and speech recognition; and advancements in networking technology have opened up new possibilities for large-scale simulator networking. Although simulators have been used extensively for training small teams in the military, networking allows larger groups of trainees to practice their skills in large-scale interactive simulations (Thorpe 1987; Alluisi 1991; Weaver et al. 1995). Often in these circumstances, the opponent is not merely a computer but other teams, and trainees are therefore often highly motivated by the competition. In such situations, as in the actual environment, simulations are designed to operate directly as well as to react to unpredictable developments. A thorough history of the development of one such complex, networked system, known as SIMNET, has been provided by Alluisi (1991). Comprehensive, system-based evaluations of these technologies, however, remain to be performed.

Acquisition, Retention, and Transfer of Learned Information

Acquisition

Much literature has indicated that it is necessary to separate the initial acquisition of a performance skill or concept from longer-term learning (Schmidt and Bjork 1992; Cormier 1984; Salmoni et al. 1984). Schmidt and Bjork (1992), for instance, have described a process by which characteristics of a task create an environment in which performance is depressed but learning is apparently enhanced. The implications of this phenomenon, known as contextual interference (Battig 1966; Shea and Morgan 1979), for training are that variations of task conditions, manipulation of task difficulty, or other means of inducing extra effort by trainees are likely to be beneficial for retention and transfer. Recent research by Schneider et al. (1995) investigated effects of contextual interference in rule-based learning, a skill underlying many applied tasks found in the aviation, military, and computer fields. Results mirrored those found in the verbal learning and motor skill-acquisition literatures, indicating that a random practice schedule leads to the best retention performance after initial acquisition but hinders initial learning. Thus, training conditions designed to achieve a particular training objective (long-term retention, transfer, and/or resistance to altered contexts) are not necessarily those that maximize performance during acquisition. This concept has significant implications for predicting trainee performance. Although a typical method employed to predict trainee performance involves monitoring acquisition data during training, using immediate past performance data as predictors to estimate future retention performance, this strategy may not appropriately index performance on the job or outside of the training environment. Further, initial performance of complex skills tends to be unstable and often is a poor indicator of final performance. Correlations between initial and final performance levels for a grammatical reasoning task, for example, reached only 0.31 (Kennedy et al. 1980), a moderate level at best. Using initial performance during learning as a predictor, therefore, may lead to inconsistent and/or erroneous training prescriptions.

Skill acquisition is believed by many to proceed in accordance with a number of stages or phases of improvement. Although the number of stages and their labels differ among researchers, the existence of such stages is supported by a large amount of research and theoretical development during the past 30 years. Traditional research in learning and memory posits a three-stage model for characterizing associative-type learning involving the process of differentiating between various stimuli or classes of stimuli to which responses are required (stimulus discrimination), learning the responses (response learning), and connecting the stimulus with a response (association). Empirical data suggest that this process is most efficient when materials are actively processed in a meaningful manner, rather than merely rehearsed via repetition (Craik and Lockhart 1972; Cofer et al. 1966).

Anderson (1982) has also proposed a three-stage model of skill acquisition, distinguishing among cognitive, associative, and autonomous stages. The cognitive stage corresponds to early practice, in which a learner exerts effort to comprehend the nature of the task and how it should be performed. In this stage, the learner often works from instructions or an example of how a task is to be performed (i.e., modeling or demonstration). Performance may be characterized by instability and slow growth, or by extremely rapid growth, depending upon task difficulty and degrees of prior experience of the learner. By the end of this stage, learners may have a basic understanding of task requirements, rules, and strategies for successful performance; however, these may not be fully elaborated. During the associative stage, declarative knowledge associated with a domain (e.g., facts, information, background knowledge, and general instruction about a skill acquired during the previous stage) is converted into procedural knowledge, which takes the form of what are called production rules (condition–action pairs). This process is known as knowledge compilation. It has been estimated that hundreds, or even thousands, of such production rules underlie complex skill development (Anderson 1990). Novice and expert performance is believed to be distinguished by the number and quality of production rules. Experts are believed to possess many more elaborate production rules than novices (Larkin 1981).

Similarly, Rumelhart and Norman (1978) recognized three kinds of learning processes: (1) the acquisition of facts in declarative memory (accretion), (2) the initial acquisition of procedures in procedural memory (restructuring), and (3) the modification of existing procedures to enhance reliability and efficiency (tuning). Kyllonen and Alluisi (1987) reviewed these concepts in relation to learning and forgetting facts and skills. Briefly, they suggest that new rules are added to established production systems through the process of accretion, fine-tuned during the process of tuning, and subsequently reorganized into more compact units during the restructuring process.

Rasmussen (1979) has also distinguished among three categories, or modes, of skilled behavior: skill based, rule based, and knowledge based. Skill-based tasks are composed of simple stimulus-response behaviors, which are learned by extensive rehearsal and are highly automated. Rule-based behavior is guided by conscious control and involves the application of appropriate procedures based on unambiguous decision rules. This process involves the ability to recognize specific well-defined situations that call for one rule rather than the other. Knowledge-based skills are used in situations in which familiar cues are absent and clear and definite rules do not always exist. Successful performance involves the discrimination and generalization of rule-based learning. Rasmussen proposes that, in contrast to Anderson’s model, performers can move among these modes of performance as dictated by task demands. This general framework is useful in conceptualizing links between task content and the type of training required for proficiency in complex tasks.

Retention

Instructional designers must consider not only how to achieve more rapid, high-quality training, but also how well the skills taught during training will endure after acquisition. Further, what has been learned must be able to be successfully transferred or applied to a wide variety of tasks and job-specific settings. Swezey and Llaneras (1997) have recently reviewed this area.

Retention of learned material is often characterized as a monotonically decreasing function of the retention interval, falling sharply during the time immediately following acquisition and declining more slowly as additional time passes (Wixted and Ebbesen 1991; Ebbinghaus 1913; McGeoch and Irion 1952). There is general consistency in the loss of material over time. Subjects who have acquired a set of paired items, for example, consistently forget about 20% after a single day and approximately 50% after one week (Underwood 1966). Bahrick (1984), among others, demonstrated that although large parts of acquired knowledge may be lost rapidly, significant portions can also endure for extended intervals, even if not intentionally rehearsed.
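As a worked illustration (ours, not the source’s), those two figures can be fitted by a power-law retention function of the general kind discussed by Wixted and Ebbesen (1991). Writing retention after t days as r(t):

```latex
r(t) = a\,t^{-b}, \qquad r(1) = 0.80 \;\Rightarrow\; a = 0.80, \qquad
r(7) = 0.50 \;\Rightarrow\; b = \frac{\ln(0.80/0.50)}{\ln 7} \approx 0.24
```

So r(t) ≈ 0.80 t^(−0.24) reproduces both cited data points: the curve drops steeply at first and flattens with time, matching the monotonic, negatively accelerated loss described above.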

Evidence suggests that the rate at which skills and knowledge decay in memory varies as a function of the degree of original learning; decay is slower if material has previously been mastered than if lower-level acquisition criteria were imposed (Loftus 1985). The slope and shape of retention functions also depend upon the specific type of material being tested as well as upon the methods used to assess retention. As meaningfulness of material to the student increases, for example, rate of forgetting appears to slow down. Further, recognition performance may be dramatically better than recall, or vice versa, depending simply upon how subjects are instructed (Tulving and Thomson 1973; Watkins and Tulving 1975). Also, various attributes of the learning environment, such as conditions of reinforcement, characteristics of the training apparatus, and habituation of responses, appear to be forgotten at different rates (Parsons et al. 1973).

Retention, like learning and motivation, cannot be observed directly; it must be inferred from performance following instruction or practice. To date, no uniform measurement system for indexing retention has been adopted. General indices of learning and retention used in the research literature over the past 100 years include a huge variety of methods for measuring recall, relearning, and recognition. Luh (1922), for instance, used five different measures to index retention: recognition, reconstruction, written reproduction, recall by anticipation, and relearning.

The list of variables known to reliably influence rate of forgetting of learned knowledge and skills is relatively short. In a recent review, Farr (1987) surveyed the literature and identified a list of variables known to influence long-term retention, including degree of original learning, characteristics of the learning task, and the instructional strategies used during initial acquisition. According to Farr, the single largest determinant of magnitude of retention appears to be the degree of original learning. In general, the greater the degree of learning, the slower will be the rate of forgetting (Underwood and Keppel 1963). This relationship is so strong that it has prompted some researchers to argue that any variable that leads to high initial levels of learning will facilitate retention (Prophet 1976; Hurlock and Montague 1982).

The organization and complexity of material to be learned also appear to have a powerful influence on retention. The efficiency of information acquisition, retrieval, and retention appears to depend to a large degree on how well the material has been organized (Cofer et al. 1966). Conceptually organized material appears to show considerably less memory loss and to be more generalizable than material that is not so organized.

Research has identified numerous variables that fall under the banner of strategies for skill and knowledge acquisition and retention, including spaced reviews, massed/distributed practice, part/whole learning, and feedback. An important variable with respect to forgetting is spaced practice. The “spacing effect” (the dependency of retention on the spacing of successive study sessions) suggests that material be studied at widely spaced intervals if retention is required. Similarly, research comparing distributed practice (involving multiple exposures to material over time) with massed practice (requiring concentrated exposure in a single session) has occurred in a wide variety of contexts, including the acquisition of skills associated with aircraft carrier landings (Wightman and Sistrunk 1987), word-processing skills (Bouzid and Crawshaw 1987), perceptual skills associated with military aircraft recognition (Jarrard and Wogalter 1992), and learning and retention of second-language vocabulary (Bloom and Shuell 1981). Most research in this area, however, has emphasized acquisition of verbal knowledge and motor skills. In general, research on the issue of distribution of practice has emphasized effects on acquisition and found that distributed practice is superior to massed practice in most learning situations, long rest periods are superior to short rest periods, and short practice sessions between rests yield better performance than do long practice sessions (Rea and Modigliani 1988). However, no consistent relationships between these variables and long-term retention have emerged.

Techniques that help learners to build mental models that they can use to generate retrieval cues, recognize externally provided cues, and/or generate or reconstruct information have also generally facilitated retention (Kieras and Bovair 1984). Gagné (1978) identified three general strategies for enhancing long-term retention: reminding learners of currently possessed knowledge that is related to the material to be learned, ensuring that the training makes repeated use of the information presented, and providing for and encouraging elaboration of the material during acquisition as well as during the retention interval.

Other factors that have been shown to induce forgetting include interference from competing material acquired before or after the target material and the events encountered by individuals between learning and the retention test. Information acquired during this interval may impair retention, while simple rehearsal or reexposure may facilitate retention. Additional factors influencing skill and knowledge retention include the length of the retention interval, the methods used to assess retention, and individual difference variables among trainees. The absolute amount of skill/knowledge decay, for example, tends to increase with time, while the rate of forgetting declines over time. Researchers have also postulated a critical period for skills loss during nonutilization, falling between six months and one year after training (O’Hara 1990). Psychomotor flight skills, for example, are retained many months longer than procedural flight skills (Prophet 1976), and decay of flight skills begins to accelerate after a six-month nonutilization period (Ruffner et al. 1984).

Transfer

The topic of transfer-of-training is integrally related to other training issues, such as learning, memory, retention, cognitive processing, and conditioning; these fields make up a large subset of the subject matter of applied psychology (Swezey and Llaneras 1997). In general, the term transfer-of-training concerns the way in which previous learning affects new learning or performance. The central question is how previous learning transfers to a new situation. The effect of previous learning may function either to improve or to retard new learning. The first of these is generally referred to as positive transfer, the second as negative transfer. (If new learning is unaffected by prior learning, zero transfer is said to have occurred.) Many training programs are based upon the assumption that what is learned during training will transfer to new situations and settings, most notably the operational environment. Although U.S. industries are estimated to spend billions annually on training and development, only a fraction of these expenditures (not more than 10%) are thought to result in performance transfer to the actual job situation (Georgenson 1982). Researchers, therefore, have sought to determine fundamental conditions or variables that influence transfer-of-training and to develop comprehensive theories and models that integrate and unify knowledge about these variables.
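Transfer is often quantified with a savings-based index. The formula below is a standard textbook measure rather than one given in this section: T_c is the time (or trials) a control group needs to reach criterion on the transfer task, and T_e is the time needed by a previously trained group.

```latex
\%\ \text{transfer} = \frac{T_c - T_e}{T_c} \times 100
```

Positive values indicate positive transfer, negative values indicate negative transfer, and zero corresponds to the zero-transfer case defined above.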

Two major historical viewpoints on transfer exist. The identical elements theory (first proposed by Thorndike and Woodworth 1901) suggests that transfer occurs in situations where identical elements exist in both original and transfer situations. Thus, in a new situation, a learner presumably takes advantage of what the new situation offers that is in common with the learner’s earlier experiences. Alternatively, the transfer-through-principles perspective suggests that a learner need not necessarily be aware of similar elements in a situation in order for transfer to occur. This position suggests that previously used “principles” may be applied to occasion transfer. A simple example involves the principles of aerodynamics, learned from kite flying by the Wright brothers, and the application of these principles to airplane construction. Such was the position espoused by Judd (1908), who suggested that what makes transfer possible is not the objective identities between two learning tasks but the appropriate generalization in the new situation of principles learned in the old. Hendrickson and Schroeder (1941) demonstrated this transfer-of-principles philosophy in a series of studies related to the refraction of light. Two groups were given practice shooting at an underwater target until each was able to hit the target consistently. The depth of the target was then changed, and one group was taught the principles of refraction of light through water while the second was not. In a subsequent session of target shooting, the trained group performed significantly better than did the untrained group. Thus, it was suggested that it may be possible to design effective training environments without a great deal of concern about similarity to the transfer situation, as long as relevant underlying principles are utilized.

A model developed by Miller (1954) attempted to describe relationships between simulation fidelity and training value in terms of cost. Miller hypothesized that as the degree of fidelity in a simulation increased, the costs of the associated training would increase as well. At low levels of fidelity, very little transfer value can be gained with incremental increases in fidelity. However, at greater levels of fidelity, larger transfer gains can be made from small increments in fidelity. Thus, Miller hypothesized a point of diminishing returns, where gains in transfer value are outweighed by higher costs. According to this view, changes in the requirements of training should be accompanied by corresponding changes in the degree of fidelity in simulation if adequate transfer is to be provided. Although Miller did not specify the appropriate degree of simulation for various tasks, subsequent work (Alessi 1988) suggests that the type of task and the trainee’s level of learning are two parameters that interact with Miller’s hypothesized relationships. To optimize the relationship among fidelity, transfer, and cost, therefore, one must first identify the amount of fidelity of simulation required to obtain a large amount of transfer and the point where additional increments of transfer are not worth the added costs.

Another model, developed by Kinkade and Wheaton (1972), distinguishes among three components of simulation fidelity: equipment fidelity, environmental fidelity, and psychological fidelity. Equipment fidelity refers to the degree to which a training device duplicates the “appearance and feel” of the operational equipment. This characteristic of simulators has also been termed physical fidelity. Environmental, or functional, fidelity refers to the degree to which a training device duplicates the sensory stimulation received from the task situation. Psychological fidelity (a phenomenon that Parsons [1980] has termed “verisimilitude”) addresses the degree to which a trainee perceives the training device as duplicating the operational equipment (equipment fidelity) and the task situation (environmental fidelity). Kinkade and Wheaton postulated that the optimal relationship among levels of equipment, environmental, and psychological fidelity varies as a function of the stage of learning. Thus, different degrees of fidelity may be appropriate at different stages of training.

The relationship between degree of fidelity and amount of transfer is complex. Fidelity and transfer relationships have been shown to vary as a function of many factors, including instructor ability, instructional techniques, types of simulators, student time on trainers, and measurement techniques (Hays and Singer 1989). Nevertheless, training designers often favor physical fidelity in a training device, rather than the system’s functional characteristics, assuming that increasing levels of physical fidelity are associated with higher levels of transfer. The presumption that similarity facilitates transfer can be traced to the identical elements theory of transfer first proposed by Thorndike and Woodworth (1901). In fact, numerous studies show that similarity does not need to be especially high in order to generate positive transfer-of-training (see Hays 1980; Povenmire and Roscoe 1971; Valverde 1973). Hays's (1980) review of fidelity research in military training found no evidence of learning deficits due to lowered fidelity, and others have shown an advantage for lower-fidelity training devices, suggesting that the conditions of simulation that maximize transfer are not necessarily those of maximum fidelity. One possible reason for this may be that widespread use of such well-known techniques as corrective feedback and practice to mastery in simulators and training devices, which may act to increase transfer, may also decrease similarity. A second possibility is that introducing highly similar (but unimportant) components into a simulation diverts a student’s attention from the main topics being trained, thereby causing reduced transfer, as is the case with manipulating the fidelity of motion systems in certain types of flight simulators (Lintern 1987; Swezey 1989).

A recent review by Alessi (1988) devotes considerable attention to debunking the myth that high fidelity necessarily facilitates transfer. After reviewing the literature in the area, Alessi concluded that transfer-of-training and fidelity are not linearly related but instead may follow an inverted-U-shaped function similar to that suggested by the Yerkes–Dodson law (see Swezey 1978; Welford 1968), which relates performance to stress. According to this view, increases in similarity first cause an increase and then a corresponding decrease in performance, presumably as a function of the increasing cognitive load requirements associated with increased stimulus similarity.

Current issues and problems in transfer span four general areas: (1) measurement of transfer, (2) variables influencing transfer, (3) models of transfer, and (4) application of our knowledge of transfer to applied settings. Research on transfer needs to be conducted with more relevant criterion measures for both generalization and skill maintenance. Hays and Singer (1989) provide a comprehensive review of this domain. A large proportion of the empirical research on transfer has concentrated on improving the design of training programs through the incorporation of learning principles, including identical elements, teaching of general principles, stimulus variability, and various conditions of practice. A critique of the transfer literature conducted by Baldwin and Ford (1988) also indicates that research in the area needs to take a more interactive perspective that attempts to develop and test frameworks incorporating more complex interactions among training inputs. Many studies examining training design factors, for example, have relied primarily upon data collected from college students working on simple memory or motor tasks with immediate learning or retention as the criterion of interest. Generalizability, in many cases, is limited to short-term simple motor and memory tasks.

Training Teams and Groups

Because teams are so prevalent, it is important to understand how they function (Swezey and Salas 1992). Teamwork can take on various orientations, including behavioral and cognitive perspectives. Behaviorally oriented views of the teamwork process deal with aspects involved in coordinated events among members that lead to the specific functions (e.g., tasks, missions, goals, or actions) that a team actually performs in order to accomplish its required output. Cognitive views of the teamwork domain deal with a team’s shared processing characteristics, such as joint knowledge about a task, joint ability to perform aspects of the work, motivation levels, personalities, and thought processes (Salas et al. 1995). The extent to which mental models are shared among team members may help in understanding and training teamwork skills (Cannon-Bowers and Salas 1990). In order for members of a team to coordinate effectively with others, they must share knowledge of team requirements; this shared knowledge, in turn, helps the team coordinate effectively. Since the range of each individual team member’s abilities varies widely, the overall level of competency within a team is often dependent upon shared skills. An individual’s ability to complete a team function competently and convey information to other members accurately strongly influences the overall team structure and performance.

Swezey et al. (1994) suggest that in order to complete a team task accurately, four things must happen: (1) there must be an exchange of competent information among team members; (2) there must be coordination within the team structure of the task requirements; (3) periodic adjustments must be made to respond to task demands; and (4) a known and agreed-upon organization must exist within the group.

It is important to consider such issues as team size, task characteristics, and feedback use when designing team training programs. Team size has been shown to have a direct influence on the performance and motivation of teams. As the size of the team increases, team resources also increase (Shaw 1976). Due to the increase in information available to larger teams, individual team members have a more readily accessible pool of information for use in addressing team tasks. This in turn can provide increases in creativity, the processing of information, and overall team effectiveness (Morgan and Lassiter 1992). However, increased size may also pose problems. Various studies (Indik 1965; Gerard et al. 1968) have found that problems in communication and level of participation may increase with team size. These two issues, when combined, may diminish team performance by placing increased stress on a team in the areas of coordination and communication workload (Shaw 1976).

Task goal characteristics are another important contributor to team performance. Given a task that varies in difficulty, it has been found that goals that appear to be challenging and that target a specific task tend to have a higher motivational impact than those that are more easily obtainable (Ilgen et al. 1987; Gladstein 1984; Goodman 1986; Steiner 1972). According to Ilgen et al. (1987), motivation directs how much time an individual is willing to put forth in accomplishing a team task.

The use of feedback is another factor that influences team motivation and performance. Feedback helps to motivate an individual concerning recently performed tasks (Salas et al. 1992). Researchers concur that feedback should be given within a short period of time after the relevant performance (Dyer 1984; Nieva et al. 1978). The sequence and presentation of the feedback may also play significant roles in motivation. Salas et al. (1992) have noted that during the early stages of training, feedback should concentrate on one aspect of performance; later, however, it should concentrate on several training components. It has been found that such sequencing helps team training in that teams adjust to incorporate their feedback into the task(s) at hand (Briggs and Johnston 1967).

According to Salas and Cannon-Bowers (1995), a variety of methods and approaches are currently under development for use in building effective team training programs. One such technique is the developing technology of team task analysis (Levine and Baker 1991), which, as it matures, may greatly facilitate the development of effective team training. The technology provides a means to distinguish team learning objectives for effective performance from individual objectives and is seen as a potential aid in identifying the teamwork-oriented behaviors (i.e., cues, events, actions) necessary for developing effective team training programs.

A second area is team-performance measurement. To be effective, team-performance measurement must assess the effectiveness of various teamwork components (such as team-member interactions) as well as intrateam cognitive and knowledge activities (such as decision making and communication) in the context of assessing overall performance of team tasks. Hall et al. (1994) have suggested the need for integrating team performance outcomes into any teamwork-measurement system. As team feedback is provided, it is also necessary to estimate the extent to which an individual team member is capable of performing his or her specific tasks within the team. Thus, any competently developed team performance-measurement system must be capable of addressing both team capabilities and individual capabilities separately and in combination.

Teamwork simulation exercises are a third developing technology cited by Salas and Cannon-Bowers (1995). The intent of such simulations is to provide trainees with direct behavioral cues designed to trigger competent teamwork behaviors. Essential components of such simulations include detailed scenarios or exercises in which specific teamwork learning objectives are operationalized and incorporated into training.

Salas and Cannon-Bowers (1995) have also identified three generic training methods for use with teams: information based, demonstration based, and practice based. The information-based method involves the presentation of facts and knowledge via the use of such standard delivery techniques as lectures, discussions, and overheads. Using such group-based delivery methods, one can deliver relevant information simultaneously to large numbers of individuals. The methods are easy to use, and costs are usually low. Information-based methods may be employed in many areas, such as helping teammates understand what is expected of them, what to look for in specific situations, and how and when to exchange knowledge.

Demonstration-based methods are performed, in contrast to information-based methods, which are presented. They offer students an opportunity to observe the behaviors of experienced team members and thus the behaviors expected of them. Such methods help to provide shared mental models among team members, as well as examples of how one is expected to handle oneself within complex, dynamic, and multifaceted situations (Salas and Cannon-Bowers 1995).

Practice-based methods are implemented via participatory activities such as role playing, modeling, and simulation techniques. These methods provide opportunities to practice specific learning objectives and receive feedback information. With such methods, trainees can build upon previous practice attempts until achieving the desired level(s) of success.

A final category of developmental teamwork technologies, according to Salas and Cannon-Bowers (1995), involves teamwork training implementation strategies. These authors discuss two such strategies: cross-training and team coordination training. In their view, cross-training, which trains all team members to understand and perform each other’s tasks, is an important strategy for integrating inexperienced members into experienced groups. Team coordination training addresses the fact that each team member has specific duties and that those duties, when performed together, are designed to provide a coordinated output. Team coordination training involves the use of specific task-oriented strategies to implement coordination activities and has been successfully applied in the aviation and medical fields.

Training Evaluation

Training evaluation may serve a variety of purposes, including improving training and assessing the benefits of training. Formative evaluation consists of processes conducted while training is being developed. Summative evaluation consists of processes conducted after training has been implemented.

Because training typically does not remain static (a job’s tasks are often modified, replaced, or added), the distinction between formative evaluation and summative evaluation is sometimes blurred (see Goldstein 1993). Formative evaluation focuses on improving training prior to implementation. It may also serve the purpose of assessing benefits of training in a preliminary way. Summative evaluation focuses on assessing the benefits of training to determine how training outcomes improve on-job performance. It may also serve the purpose of improving future training iterations. In terms of instructional systems design, summative evaluation completes the cycle that began with analysis. Figure 1 shows a simplified version of this cycle.

Both formative and summative aspects of evaluation feed back into the training analysis phase, which showed why training was necessary, the performances that training presumably would change, and the job outcomes that would be affected. Formative evaluation typically includes internal quality checks (such as review by subject matter experts, review against instructional standards, and editing for organization and clarity), as well as pilot testing. Formative evaluation may occur throughout the process shown in Figure 1. Summative evaluation includes assessments of after-training outcomes, both immediately following the conclusion of a training program and at later intervals.

Kirkpatrick (1994) has developed a widely used taxonomy of evaluation levels. Table 2 shows these levels and how they apply to both formative and summative evaluation.

Note that pilot testing conducted during formative evaluation should address, at a minimum, levels 1 and 2 of Kirkpatrick’s taxonomy. Summative evaluation should address all four levels. Kirkpatrick (1994) has suggested that all four levels must be addressed for evaluation to be useful to organizations. Although using the level 1 evaluation standard as a measure of satisfaction can be useful, using it as an indicator of training effectiveness is misleading. Clark (1982), for instance, showed that about one-third of the time, a negative correlation exists between a learner’s end-of-course self-rating and measures of learning.

Level 2 evaluation is obviously beneficial for determining whether training produces the learning it is intended to produce. As discussed above, we recommend criterion-referenced testing as the appropriate source of level 2 data for training (as opposed to education) evaluation. End-of-course criterion-referenced tests are designed to show whether or not trainees can perform adequately on measures of each course objective. In criterion-referenced measurement, the operative question is, “Can trainees perform the required task or not?” Contrast this with norm-referenced testing, which addresses the question, “How do learners’ levels of knowledge compare?” In practice, a norm-referenced test could show a percentage of students excelling, while a criterion-referenced test on the same information could show all students as failing to meet certain training standards.
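The contrast can be made concrete with a small scoring sketch. The cutoff, names, and scores below are invented; the point is only that the two approaches answer different questions about the same data.

```python
# Criterion- vs. norm-referenced views of the same scores (all data invented).
scores = {"ana": 92, "ben": 77, "carl": 64, "dana": 58}
MASTERY_CUTOFF = 80   # criterion: required performance on the course objective

# Criterion-referenced question: can each trainee perform to standard or not?
criterion = {name: s >= MASTERY_CUTOFF for name, s in scores.items()}

# Norm-referenced question: how do learners compare with one another?
ranked = sorted(scores, key=scores.get, reverse=True)
percentile = {name: 100 * (len(ranked) - 1 - ranked.index(name)) / (len(ranked) - 1)
              for name in ranked}

print(criterion)   # {'ana': True, 'ben': False, 'carl': False, 'dana': False}
print(percentile)  # ben ranks near the 67th percentile yet fails the criterion
```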

Conducted with a criterion-referenced approach, level 2 evaluation shows what trainees can do at the end of a course, but not what they will do on the job. There are several reasons for this, as noted above. First, there is the issue of retention over time. If learners do not have the opportunity to practice what they have learned in training, retention will decrease. Consequently, for training to be effective, it should be conducted close to the time when trainees need to use what they have learned. Transfer issues, as previously described, may also affect actual job performance and should be carefully considered in training design.

Another reason that level 2 evaluation may not predict how trainees will perform later in time is that outside factors may inhibit application of what is learned. As noted previously, organizations may lack appropriate motivational systems and environmental support for specific behaviors. Thus, if trainees are taught to perform behavior y in response to stimulus x in training but are required to perform behavior z in response to that stimulus on the job, the behavior learned in training will soon be extinguished. Similarly, if employees are taught to use certain tools during training but those tools are absent from the work environment, the tool-related behaviors learned during training will not be used.


Kirkpatrick’s level 3 evaluation seeks to measure changes in behavior after training. Because factors other than training may influence later behaviors, measuring these changes successfully may require an experiment. Table 3 shows an example of an experimental design permitting a level 3 evaluation.

Using the design shown in Table 3, training has been randomly assigned to group B. Baseline performance of both groups is then measured at the same time, before group B receives training. If the assignment is truly random, there should be no significant differences between the performance of trainees in groups A1 and B1. At the same point after training, job performance of both groups is again measured. If all other factors are equal (as they should be under random assignment), differences between groups A2 and B2 will show the effect of training on job behavior. Depending on the type of measurement, one may use various statistical indices, such as a chi-squared test or phi coefficient, as a test of statistical significance (see Swezey 1981 for a discussion of this issue).
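As a hedged sketch of the statistical comparison mentioned above (the counts are invented, and scipy is assumed to be available), a chi-squared test and phi coefficient for a pass/fail measure of groups A2 and B2 might look like this:

```python
# Level 3 comparison of post-training job performance (all counts invented).
from scipy.stats import chi2_contingency

#             pass  fail
table = [[34, 16],   # group A2: no training
         [45,  5]]   # group B2: trained

chi2, p, dof, expected = chi2_contingency(table)
n = sum(sum(row) for row in table)
phi = (chi2 / n) ** 0.5   # phi coefficient for a 2x2 table

print(f"chi2 = {chi2:.2f}, p = {p:.4f}, phi = {phi:.2f}")
```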

This design does not use level 2 evaluation data. However, by modifying the design so that measures of A1 and B1 behaviors occur at the end of training (rather than prior to training) and measures of A2 and B2 occur later (e.g., at some interval after training), one can use level 2 evaluation data to assess the relative effect of nontraining factors on job performance. In this case (level 2 evaluation), B1 performance should be significantly higher than A1 performance. But B2 performance may not be significantly higher than A2 performance, depending upon whether nontraining factors inhibit job performance. Again, statistical comparisons are used to determine significance of the effect(s).


Kirkpatrick’s level 4 evaluation addresses the basic question, “So what?” Even if evaluation at levels 1 through 3 shows positive results, if training does not lead to desired results on job performance, it may not be the proper solution or worth the expense. For example, if training is designed to lead to performance that reduces error rates, and these error rates do not decline adequately, training may not have been the correct solution to the problem situation. Figure 2 provides a brief overview of how to address failure at each of Kirkpatrick’s four levels of evaluation.
