JOB EVALUATION IN ORGANIZATIONS: EVALUATING THE JOB-EVALUATION SYSTEM

EVALUATING THE JOB-EVALUATION SYSTEM

Job evaluation can take on the appearance of a bona fide measurement instrument (objective, numerical, generalizable, documented, and reliable). If it is viewed as such, then job evaluation can be judged according to precise technical standards. Just as with employment tests, the reliability and validity of job-evaluation plans should be ascertained. In addition, the system should be evaluated for its utility, legality, and acceptability.

Reliability: Consistent Results

Job evaluation involves substantial judgment. Reliability refers to the consistency of results obtained from job evaluation conducted under different conditions. For example, to what extent do different job evaluators produce similar results for the same job? Few employers or consulting firms report the results of their studies. However, several research studies by academics have been reported (Arvey 1986; Schwab 1980; Snelgar 1983; Madigan 1985; Davis and Sauser 1993; Cunningham and Graham 1993; Supel 1990). These studies present a mixed picture; some report relatively high consistency (different evaluators assign the same jobs the same total point scores), while others report lower agreement on the values assigned to each specific compensable factor. Some evidence also reports that evaluators’ background and training may affect the consistency of the evaluations. Interestingly, an evaluator’s affiliation with union or management appears to have little effect on the consistency of the results (Lawshe and Farbo 1949; Harding et al. 1960; Moore 1946; Dertien 1981).
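The consistency question lends itself to a simple illustration: correlate the total point scores two evaluators assign to the same set of jobs. The sketch below uses hypothetical scores and a plain Pearson correlation; formal reliability studies use larger samples and indices such as the intraclass correlation.

```python
# Sketch: checking inter-rater consistency of job-evaluation point totals.
# The jobs and scores below are hypothetical illustrations.

def pearson(xs, ys):
    """Plain Pearson correlation between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Total point scores two evaluators assigned to the same five jobs.
evaluator_a = [310, 455, 280, 520, 390]
evaluator_b = [300, 470, 295, 505, 380]

r = pearson(evaluator_a, evaluator_b)
print(f"inter-rater correlation: {r:.2f}")
```

A correlation near 1.0 means the evaluators rank the jobs similarly overall, though they may still disagree on individual compensable factors, which is exactly the mixed pattern the studies above report.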

Validity: Legitimate Results

Validity is the degree to which a job-evaluation method yields the desired results. The desired results can be measured several ways: (1) the hit rate (the percentage of correct decisions it makes), (2) convergence (agreement with results obtained from other job-evaluation plans), and (3) employee acceptance (employee and manager attitudes about the job-evaluation process and its results) (Fox 1962; Collins and Muchinsky 1993).

Hit Rates: Agreement with Predetermined Benchmark Structures

The hit rate approach focuses on the ability of the job-evaluation plan to replicate a predetermined, agreed-upon job structure. The agreed-upon structure, as discussed earlier, can be based on one of several criteria. The jobs’ market rates, a structure negotiated with a union or a management committee, and rates for jobs held predominantly by men are all examples.

Figure 5 shows the hit rates for a hypothetical job-evaluation plan. The agreed-upon structure has 49 benchmark jobs in it. This structure was derived through negotiation among managers serving on the job-evaluation committee, along with market rates for these jobs. The new point-factor job-evaluation system placed only 14, or 29%, of the jobs into their current (agreed-upon) pay classes. It came within ±1 pay class for 82% of the jobs in the agreed-upon structure. In a study conducted at Control Data Corporation, the reported hit rates for six different types of systems ranged from 49 to 73% of the jobs classified within ±1 class of their current agreed-upon classes (Gomez-Mejia et al. 1982). In another validation study, Madigan and Hoover applied two job-evaluation plans (a modification of the federal government’s factor evaluation system and the position analysis questionnaire) to 206 job classes for the State of Michigan (Madigan and Hoover 1986). They reported hit rates ranging from 27 to 73%, depending on the scoring method used for the job-evaluation plans.
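The hit-rate arithmetic above is straightforward to sketch. The pay-class assignments below are hypothetical; the function counts exact matches and matches within ±1 class, the two statistics reported in the studies cited.

```python
# Sketch: computing hit rates for a job-evaluation plan against an
# agreed-upon benchmark structure. All class numbers are hypothetical.

def hit_rates(agreed, assigned):
    """Return (exact-match rate, within-one-class rate) as fractions."""
    pairs = list(zip(agreed, assigned))
    exact = sum(1 for a, b in pairs if a == b) / len(pairs)
    within_one = sum(1 for a, b in pairs if abs(a - b) <= 1) / len(pairs)
    return exact, within_one

# Agreed-upon pay class vs. class produced by the new point plan, per job.
agreed   = [3, 5, 2, 7, 4, 6, 3, 5]
assigned = [3, 6, 2, 6, 5, 6, 4, 7]

exact, within = hit_rates(agreed, assigned)
print(f"exact hits: {exact:.0%}, within one class: {within:.0%}")
```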

Is a job-evaluation plan valid (i.e., useful) if it can correctly slot only one-third of the jobs in the ‘‘right’’ level? As with so many questions in compensation, the answer is ‘‘it depends.’’ It depends on the alternative approaches available, on the costs involved in designing and implementing these plans, and on the magnitude of errors involved in missing a ‘‘direct hit.’’ If, for example, being within ±1 pay class translates into several hundred dollars in pay, then employees probably are not going to express much confidence in the ‘‘validity’’ of this plan. If, on the other hand, the pay difference between ±1 class is not great, or the plan’s results are treated only as an estimate to be adjusted by the job-evaluation committee, then the plan is more likely to be judged valid (useful).

Convergence of Results

Job-evaluation plans can also be judged by the degree to which different plans yield similar results. The premise is that convergence of the results from independent methods increases the chances that the results, and hence the methods, are valid. Different results, on the other hand, point to lack of validity. For the best study to date on this issue, we again turn to Madigan’s report on the results of three job-evaluation plans (guide chart, PAQ, and point plan) (Madigan 1985). He concludes that the three methods generate different and inconsistent job structures. Further, he states that the measurement adequacy of these three methods is open to serious question. An employee could have received up to $427 per month more (or less), depending on the job-evaluation method used.
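A minimal way to examine convergence is to compare the monthly pay each plan implies for the same job and look at the disagreement per job. The plan outputs below are hypothetical illustrations, not Madigan’s data.

```python
# Sketch: checking convergence across job-evaluation plans by comparing the
# monthly pay each plan implies for the same jobs. All figures hypothetical.

monthly_pay = {
    # job: pay implied by (guide chart, PAQ, point plan), dollars/month
    "buyer":      (2900, 3150, 3020),
    "programmer": (3400, 3350, 3390),
    "secretary":  (2100, 2450, 2300),
}

# Largest pay disagreement among the three plans, per job.
spreads = {job: max(pays) - min(pays) for job, pays in monthly_pay.items()}

for job, spread in spreads.items():
    print(f"{job:12s} max disagreement: ${spread}/month")
```

A large spread, like the $427 per month Madigan reports, signals that the plans are not measuring the same thing, i.e., poor convergent validity.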

These results are provocative. They are consistent with the proposition that job evaluation, as traditionally practiced and described in the literature, is not a measurement procedure, because it fails to consistently exhibit the properties of reliability and validity. However, it is important to maintain a proper perspective in interpreting these results. To date, the research has been limited to only a few employers. Further, few compensation professionals seem to consider job evaluation a measurement tool in the strict sense of that term. More often, it is viewed as a procedure to help rationalize an agreed-upon pay structure in terms of job- and business-related factors. As such, it becomes a process of give and take, not some immutable yardstick.

Utility: Cost-Efficient Results

The usefulness of any management system is a function of how well it accomplishes its objectives (Lawler 1986). Job evaluation is no different; it needs to be judged in terms of its objectives. Pay structures are intended to influence a wide variety of employee behaviors, ranging from staying with an employer to investing in additional training and willingness to take on new assignments. Consequently, the structures obtained through job evaluation should be evaluated in terms of their ability to affect such decisions. Unfortunately, little of this type of evaluation seems to be done.

The other side of utility concerns costs. How costly is job evaluation? Two types of costs associated with job evaluation can be identified: (1) design and administration costs and (2) labor costs that result from pay-structure changes recommended by the job-evaluation process. The labor-cost effects will be unique for each application. Winstanley offers a rule of thumb of 1 to 3% of covered payroll (Winstanley). Experience suggests that costs can range from a few thousand dollars for a small organization to over $300,000 in consultant fees alone for major projects in firms like Digital Equipment, 3M, TRW, or Bank of America.
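Winstanley’s rule of thumb lends itself to a back-of-the-envelope estimate; the covered-payroll figure below is hypothetical.

```python
# Back-of-the-envelope labor-cost estimate using the 1-3% of covered
# payroll rule of thumb. The payroll figure below is hypothetical.

covered_payroll = 40_000_000  # annual covered payroll, dollars

low = 0.01 * covered_payroll   # low end of the 1-3% range
high = 0.03 * covered_payroll  # high end of the 1-3% range
print(f"expected labor-cost impact: ${low:,.0f} to ${high:,.0f} per year")
```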

Nondiscriminatory: Legally Defensible Results

Much attention has been directed at job evaluation as both a potential source of bias against women and as a mechanism to reduce bias (Treiman and Hartmann 1981). We will discuss some of the studies of the effects of gender in job evaluation and then consider some recommendations offered to ensure bias-free job evaluation.

It has been widely speculated that job evaluation is susceptible to gender bias. To date, three ways that job evaluation can be biased against women have been studied (Schwab and Grams 1985).

First, direct bias occurs if jobs held predominantly by women are undervalued relative to jobs held predominantly by men, simply because of the jobholder’s gender. The evidence to date is mixed regarding the proposition that the gender of the jobholder influences the evaluation of the job. For example, Arvey et al. found no effects on job evaluation results when they varied the gender of jobholders using photographs and recorded voices (Arvey et al. 1977). In this case, the evaluators rightfully focused on the work, not the worker. On the other hand, when two different job titles (special assistant—accounting and senior secretary—accounting) were studied, people assigned lower job-evaluation ratings to the female-stereotyped title ‘‘secretary’’ than to the more gender-neutral title, ‘‘assistant’’ (McShane 1990).

The second possible source of gender bias in job evaluation flows from the gender of the individual evaluators. Some argue that male evaluators may be less favorably disposed toward jobs held predominantly by women. To date, the research finds no evidence that the job evaluator’s gender or the predominant gender of the job-evaluation committee biases job-evaluation results (Lewis and Stevens 1990).

The third potential source of bias affects job evaluation indirectly, through the current wages paid for jobs. In this case, job-evaluation results may be biased if the jobs held predominantly by women are incorrectly underpaid. Treiman and Hartmann argue that women’s jobs are unfairly underpaid simply because women hold them (Treiman and Hartmann 1995). If this is the case, and if job evaluation is based on the current wages paid in the market, then the job-evaluation results simply mirror any bias that exists in current pay rates. Considering that many job-evaluation plans are purposely structured to mirror the existing pay structure, it should not be surprising that the current wages for jobs influence the results of job evaluation, perpetually reinforcing them. In one study, 400 experienced compensation administrators were sent information on current pay, market rates, and job-evaluation results (Rynes et al. 1989). They were asked to use this information to make pay decisions for a set of nine jobs. Half of the administrators received jobs held predominantly by men (i.e., over 70% of jobholders were men, such as security guards), and the jobs given to the other half were held predominantly by women (i.e., over 70% of jobholders were women, such as secretaries). The results revealed several things: (1) market data had a substantially larger effect on pay decisions than did job evaluations or current pay data; (2) the jobs’ gender had no effect; (3) there was a hint of possible bias against physical, nonoffice jobs relative to white-collar office jobs. This study is a unique look at several factors that may affect pay structures. Other factors that also affect job-evaluation decisions, such as union pressures and turnover of high performers, were not included.

The implications of this evidence are important. If, as some argue, market rates and current pay already reflect gender bias, then these biased pay rates could work indirectly through the job-evaluation process to deflate the evaluation of jobs held primarily by women (Grams and Schwab 1985). Clearly the criteria used in the design of job-evaluation plans are crucial and need to be business and work related.

Several recommendations help to ensure that job evaluation plans are bias free (Remick 1984). Among them are:

1. Ensuring that the compensable factors and scales are defined to recognize the content of jobs held predominantly by women. For example, working conditions should include the noise and stress of office machines and the working conditions surrounding computers.

2. Ensuring that compensable factor weights are not consistently biased against jobs held predominantly by women. Are factors usually associated with these jobs always given less weight?

3. Ensuring that the plan is applied in as bias-free a manner as feasible. This includes ensuring that the job descriptions are bias free, that incumbent names are excluded from the job-evaluation process, and that women are part of the job-evaluation team and serve as evaluators.

Some writers see job evaluation as the best friend of those who wish to combat pay discrimination. Bates and Vail argue that without a properly designed and applied system, ‘‘employers will face an almost insurmountable task in persuading the government that ill-defined or whimsical methods of determining differences in job content and pay are a business necessity’’ (Bates and Vail 1984).

Acceptability: Sensible Results

Acceptance by employees and managers is one key to a successful job-evaluation system. Any evaluation of the worthiness of pay practices must include assessing employee and manager acceptance. Several devices are used to assess and improve the acceptability of job evaluation. An obvious one is the inclusion of a formal appeals process, discussed earlier. Employees who feel their jobs are incorrectly evaluated should be able to request reanalysis and reevaluation. Most firms respond to such requests from managers, but few extend the process to all employees unless those employees are represented by unions with a negotiated grievance process. They often justify this differential treatment by a fear of being inundated with appeals. Employers who open the appeals process to all employees reason that jobholders are the ones most familiar with the work performed and know the most about changes, misrepresentations, or oversights that pertain to their jobs. No matter what the outcome of the appeal, the results need to be explained in detail to any employee who requests that his or her job be reevaluated.

A second method of assessing acceptability is to include questions about the organization’s job structure in employee attitude surveys. Questions can assess perceptions of how useful job evaluation is as a management tool. Another method is to determine to what extent the system is actually being used. Evidence of acceptance and understanding can also be obtained by surveying employees to determine the percentage of employees who understand the reasons for job evaluation, the percentage of jobs with current descriptions, and the rate of requests for reevaluation.
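The indicators mentioned above reduce to simple ratios that can be tracked over time; the counts below are hypothetical illustrations.

```python
# Sketch: acceptance indicators computed from survey responses and HR
# records. All counts below are hypothetical.

employees = 1200
understand_purpose = 860             # say they understand why jobs are evaluated
jobs_total, jobs_current = 310, 248  # job classes vs. those with current descriptions
reevaluation_requests = 45           # appeals filed this year

print(f"understand job evaluation: {understand_purpose / employees:.0%}")
print(f"current job descriptions:  {jobs_current / jobs_total:.0%}")
print(f"reevaluation request rate: {reevaluation_requests / jobs_total:.0%}")
```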

As noted earlier, stakeholders of job evaluation extend beyond employees and managers to include unions and, some argue, comparable worth activists. The point is that acceptability is a somewhat vague test of job evaluation: acceptable to whom remains an open issue. Clearly managers and employees are important constituents, because their acceptance makes it a useful device; but others, inside and outside the organization, also have a stake in job evaluation and the pay structure.

SUMMARY

In exchange for services rendered, individuals receive compensation from organizations. This compensation is influenced by a wide variety of ever-changing dynamics, many of which are identified in this chapter. The central focus of this chapter, though, was on just one of these influences: the internal worth of jobs. We introduced the different systems for evaluating jobs, the procedures necessary to operationalize these systems, and the criteria for evaluating the effectiveness of these systems in an organization.
