
Psychological testing for selection purposes: a guide to evidence-based practice for human resource professionals

Pages 2517-2532 | Published online: 07 Dec 2009

Abstract

In order to be effective, HR practitioners need to be informed of best practice in a wide array of activities (Pfeffer 1994; Griffeth and Hom 2001). A key activity that is crucial to firm performance is recruitment and selection, yet many practitioners are poorly informed of the latest findings. In this review, evidence on the use of psychological testing for selection purposes was examined. Specifically, the focus of this paper is evidence that can be applied to ensure that psychological tests (ability and personality tests) are used in a way that maximizes their effectiveness. A review of the extant literature identified five broad issues that relate to the use of psychological tests for selection purposes. These are: (1) how constructs for a particular job should be identified for selection purposes; (2) how test scores should be reported to a manager; (3) whether test information should be previewed prior to the interview; (4) how psychological test scores and interview data should be combined; and (5) whether a hiring recommendation should be given by the provider of candidate psychological test scores. The evidence on each of these issues is summarized and recommendations are made for effective HRM practices.

There is a growing awareness that attracting and retaining talented employees can provide organizations with a sustained competitive advantage. The importance of attracting superior employees, together with low unemployment rates, has led to intense competition for the best applicants in a wide variety of occupations (O'Leary, Lindholm, Whitford and Freeman 2002). The managing director of McKinsey and Company, Rajat Gupta, describes the search for outstanding people as the ‘war for talent’ (Singh 2001). Greater competition for talented employees has made organizations more aware that recruitment and selection are key functions of human resource management (HRM).

Studies in the domain of HRM have shown that organizations that have adopted selection best practices have increased organizational profitability (Huselid 1995; Terpstra and Rozell 1993). Psychological testing is generally regarded as an integral part of best practices for selection (Terpstra and Rozell 1993; Harel and Tzafrir 2001; Guest, Michie, Conway and Sheehan 2003). The use of psychological tests for selection purposes has been increasing in recent years (Bartram 2001; Taylor, Keelty and McDonnell 2002; Anderson 2005; Wolf and Jenkins 2006). There are several possible reasons for the growing use of psychological tests. First, the widespread availability of psychological testing over the internet has meant that the cost of testing has been reduced, the relative ease of testing candidates has increased and the turnaround times for reporting results have improved substantially. Second, there is a growing awareness that testing is an integral part of best practice for selection (Terpstra and Rozell 1993; Harel and Tzafrir 2001; Guest et al. 2003); it is demonstrably relevant, objective and fair, and thus it reduces the possibility of legal challenges (Wolf and Jenkins 2006). Third, there is some evidence of improved awareness of the predictive validity and reliability of specific tests (Harris, Dworkin and Park 1990; Cascio and Aguinis 2005; Wolf and Jenkins 2006). Finally, it is argued that the increased professionalism of human resource managers (Legge 1995; Caldwell 2001) and their greater involvement in strategic decision making (Budhwar 2000; Farndale 2005) have given them more influence in organizations, and this in turn may have increased test use (Wolf and Jenkins 2006). Use of psychological tests is greater in organizations that have a human resource (HR) specialist than in those that do not (Hoque and Noon 2001; Wolf and Jenkins 2006).

Professional guidelines about the use of psychological tests for selection purposes offer very clear recommendations about general assessment issues, for example reliability, validity, norm groups, cut-scores and feedback to the applicant (American Educational Research Association, American Psychological Association and National Council on Measurement in Education 1999; US Department of Labor Employment and Training Administration 2000; International Test Commission 2001; Society for Industrial and Organizational Psychology (SIOP) 2003). There has been an abundance of articles about these issues (e.g., Cascio and Aguinis 2005; Cascio, Outtz, Zedeck and Goldstein 1991). On the other hand, the guidelines offer very limited practical advice about how psychological test data should be used to make selection decisions.

Discussion about the issue of psychological testing for selection purposes has been sparse. Thus, the aim of this paper is to review empirical evidence on the use of psychological testing and to describe how that evidence can be applied to maximize the predictive value of psychological testing. I have chosen to focus on aspects of psychological testing for selection purposes that can contribute to the further development of what can be considered ‘best practice’ in occupational testing. In this context, I use the term ‘best practice’ to mean methods that maximize the predictive value of the occupational test. Hence, the guiding principle in this review was to identify evidence about practical issues that the HR professional can implement to maximize the value of psychological tests for selection, rather than to focus on the technical aspects of selection, for example test validity (Muchinsky 2004). The majority of articles on psychological testing and selection focus on technical aspects and tend to be written for other psychologists (Terpstra and Rozell 1997).

Ability and personality tests are the most popular forms of psychological testing for selection purposes (Keenan 1995; Ryan and Sackett 1987, 1992; Taylor et al. 2002; Salgado and De Fruyt 2005; Carless 2007), yet HR practitioners are generally ill-informed about the importance of ability and personality (Rynes, Colbert and Brown 2002; Rynes, Giluk and Brown 2007). According to HR research experts, knowledge about the predictive value of ability and personality constitutes two of the five fundamental findings that all practicing managers should know (Rynes et al. 2007). Two other topics were also related to selection, and the fifth was knowledge about goal setting and feedback. Thus, this discussion is limited to ability and personality tests.

Although extensive research has demonstrated the efficacy of valid selection techniques (e.g., Schmidt and Hunter 1998), usage is still relatively low. Concerns have been expressed about the growing schism between science and practice, especially in personnel selection (Johns 1993; Kehoe 2000; Nowicki and Rosse 2002; Klehe 2004; Muchinsky 2004; Ryan and Tippins 2004; Anderson 2005; Rynes et al. 2007). Some argue that the research–practice gap can be explained by a lack of knowledge about selection practices (Terpstra and Rozell 1997; Ryan and Tippins 2004; Anderson 2005). Evidence supports the knowledge hypothesis; a large discrepancy exists between research findings and practitioners' beliefs about selection practices (Rynes et al. 2002; Carless, Rasiah and Irmer 2009).

Others have argued that the scientific literature is inaccessible (Terpstra and Rozell 1998); articles are too long and of little practical relevance (Gelade 2006). Knowledge transfer from academics to practitioners is more likely if scientific research is presented in an appropriate format for practitioners (e.g., worldwide web) and published in a more accessible form. For example, academics could distil the relevant scientific literature into short articles, with practical recommendations (Gelade 2006; Hodgkinson 2006). Language is also perceived as contributing to the gap: ‘academics are perceived as having mastered the language of obfuscation’ (Latham and Latham 2003, p. 249). Hence the second aim of this study was to summarize succinctly the evidence on psychological testing for selection purposes for HR professionals. HR professionals are an important source of information about psychological testing (Hoque and Noon 2001; Wolf and Jenkins 2006) and thus influence whether or not managers adopt best practices with regard to psychological testing.

The approach taken in this review was to identify recent, empirical literature that linked practical issues associated with the use of psychological testing for selection purposes with a measure of job performance (e.g., supervisor ratings) (see Note 1). By linking specific selection practices with job performance it is possible to determine which practices are more likely to lead to the selection of better job performers. Articles focusing on technical aspects of psychological testing, for example cutoff scores, adverse impact and validity, were not included in the review. The majority of the research was undertaken in the United States, with some from the United Kingdom, Australia, New Zealand and parts of Europe.

The following themes were identified in the review; the title of each section is included in parentheses: (1) how constructs for a particular job should be identified for selection purposes (Construct selection), specifically the usefulness of job analysis and validity generalization research; (2) how test results should be reported to a manager (Reporting psychological test results to the manager), with ability and personality test results considered separately; (3) whether test information should be previewed prior to the interview (Previewing test information prior to interview); (4) how psychological test scores and interview data should be combined (Combining psychological test data with other selection methods), first discussing the added value of psychological test data and then the value of scoring psychological test and interview data; and finally, (5) whether a hiring recommendation should be given by the provider of candidate psychological test scores (Hiring recommendation).

Construct selection

With regard to the specific constructs that are assessed in the selection process (e.g., interpersonal skills, cognitive ability), there are two broad approaches to construct selection: (a) job analysis; and (b) validity generalization research. The job component approach, or synthetic validity, can also be used to identify constructs and selection procedures (for a description and examples see Jeanneret 1992; Scherbaum 2005; Steel, Huffcutt and Kammeyer-Mueller 2006). Briefly, this process involves identifying the important job components or job elements via a job analysis and identifying appropriate selection methods based on previous validation studies. The job component approach will not be discussed here due to limited evidence. However, practitioners working in small organizations with small sample sizes may find this technique useful.

Job analysis

The foundation of valid selection procedures is a job analysis (American Educational Research Association et al. 1999; US Department of Labor Employment and Training Administration 2000; International Test Commission 2001; SIOP 2003). A job analysis is used to obtain information about job complexity, job environment, job context, job tasks, behaviours and activities performed, or worker characteristics (SIOP 2003). Of central importance for the selection process is information about the worker requirements, that is, the knowledge, skills, abilities (KSA) and other personal characteristics. A useful feature of a job analysis is that it specifies whether an employee is expected to have all the important KSA before selection into the job or whether training will be provided after selection (SIOP 2003). Job analysis is also used to identify the general level of KSA needed.

There are many methods that can be used to obtain information about the task and worker requirements for a job (see Brannick and Levine 2002 for an excellent guide; or O*NET, http://online.onetcenter.org). Job analysis is fundamental to properly designed selection procedures because it ensures that only those constructs that are job-related are assessed. Tett, Jackson and Rothstein's (1991) meta-analysis reported that the predictive validity of personality was significantly higher (ρ = .325) when a job analysis was used compared with no job analysis (ρ = .252). Similar findings have been reported by Wiesner and Cronshaw (1988) with regard to the predictive validity of interviews. They reported that the mean corrected validity for structured interviews based on a formal job analysis was .87, whereas the validity for structured interviews based on an informal or ‘armchair’ job analysis was .59. These findings were replicated by McDaniel, Morgeson, Finnegan, Campion and Braverman (2001) with situational judgement tests. They found that situational judgement tests based on a job analysis (ρ = .38) were substantially more valid than tests not based on a job analysis (ρ = .29). In summary, there is clear evidence that when job analysis is used to determine the choice of selection technique the predictive value is higher compared to no job analysis. Job analysis is the cornerstone of best practice in selection.

Validity generalization research

Meta-analyses statistically combine the research results from a large number of studies and derive an overall estimate of the validity coefficient. Validity generalization research goes a step further and attempts to ‘draw inferences about the population values of the statistics obtained via meta-analyses’ (Murphy 2000). For practitioners, meta-analyses address the problem of inconsistent research findings by ‘providing unequivocal answers to the questions that traditionally have been marred by conflicting research outcomes’ (Le, Oh, Shaffer and Schmidt 2007, p. 7). Thus, it is argued that by making selection research more comprehensive and convincing to practitioners, meta-analyses help bridge the divide between researchers and practitioners (Le et al. 2007).

Validity generalization research uses statistical calculations to estimate the population values of validity coefficients, for example by correcting for criterion unreliability (attenuation), sampling error associated with small sample sizes, and range restriction (for a full discussion see Murphy 2000). The results of validity generalization research can be used as the basis for construct selection (Cascio and Aguinis 2005). For example, Schmidt and Hunter's (1998) finding that the criterion-related validity of cognitive ability for job performance is .51 can be used to justify using tests of cognitive ability for selection purposes. Cascio and Aguinis (2005) cautioned against using validity generalization research indiscriminately; it is important to ensure that the jobs and contexts are similar to those described in the validity generalization research.
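To make these corrections concrete, the sketch below applies two standard psychometric adjustments of the kind referred to above: disattenuation for criterion unreliability and the Thorndike Case II correction for direct range restriction. The formulas are standard psychometric results rather than taken from this article, and the numeric inputs are purely illustrative.

```python
import math

def correct_for_criterion_unreliability(r_obs: float, r_yy: float) -> float:
    """Disattenuate an observed validity coefficient for criterion unreliability.

    r_obs : observed predictor-criterion correlation
    r_yy  : reliability of the criterion measure (e.g., supervisor ratings)
    """
    return r_obs / math.sqrt(r_yy)

def correct_for_range_restriction(r_restricted: float, u: float) -> float:
    """Thorndike Case II correction for direct range restriction.

    r_restricted : correlation observed in the restricted (selected) sample
    u            : ratio of unrestricted to restricted predictor standard deviations
    """
    return (u * r_restricted) / math.sqrt(1 + (u**2 - 1) * r_restricted**2)

# Hypothetical values: observed r = .25, criterion reliability = .52, SD ratio = 1.5
r = correct_for_criterion_unreliability(0.25, 0.52)
r = correct_for_range_restriction(r, 1.5)
print(round(r, 2))  # corrected estimate of the operational validity
```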

Until the 1990s, personality testing was generally regarded as having limited value for selection purposes. The advent of the five-factor model of personality provided researchers with a common framework for organizing personality traits (Hurtz and Donovan 2000). There have been several validity generalization studies based on the five-factor model (e.g., Barrick and Mount 1991; Tett et al. 1991; Hurtz and Donovan 2000). In general, these studies have shown that conscientiousness is a good predictor of work performance and, to a lesser degree, emotional stability. In jobs that require interpersonal facilitation (i.e., contextual performance), agreeableness is also a predictor of work performance (Hurtz and Donovan 2000).

Recently, the debate about the value of personality assessment for selection purposes has been rejuvenated (Berry, Sackett and Landers 2007; Morgeson et al. 2007a; Morgeson et al. 2007b; Ones, Dilchert, Viswesvaran and Judge 2007; Tett and Christiansen 2007). On the one hand, the value of personality testing for selection purposes has been questioned and alternatives to self-report personality measures have been called for. The counterview is that the criterion-related validities strongly support the continued use of personality testing. The debate is complex and each camp cites evidence in support of its respective view; it is expected that there will be ongoing discussion.

Several useful suggestions can be gleaned from these papers. One is that the criterion-related validities of peer ratings of personality are higher than those of self-ratings (Mount, Barrick and Strauss 1994; Judge, Higgins, Thoresen and Barrick 1999; Bratko, Chamorro-Premuzic and Saks 2006; Morgeson et al. 2007a; Ones et al. 2007). Second, the use of multiple personality tests to assess the same trait yields higher criterion-related validities compared to single assessments (Connelly and Ones 2007, cited in Ones et al. 2007). Finally, a study of reference checks showed that ratings of personality by referees had sound criterion-related validity (corrected r = .36; Taylor, Pajo, Cheung and Stringfield 2004). Together, these studies suggest that ratings of personality by others provide useful additional data for selection purposes. It needs to be acknowledged that, as yet, there is not a large body of evidence on the predictive validity of ratings by others; hence caution needs to be exercised when generalizing these findings. In conclusion, best practice is to ensure that personality assessment is clearly job related and is accorded a weighting based on the job requirements (Morgeson et al. 2007a).

Reporting psychological test results to the manager

Managers make employment decisions. Most managers, however, have no training in psychometrics; they may not understand the constructs being assessed, and they may hold unwarranted views about tests (Guion 1998). Thus it is necessary to consider what psychological test information should be reported to the manager.

Ability tests

Ability tests are typically used to assess verbal, numeric and spatial abilities. Ability test scores are usually reported to client organizations in the form of percentile scores, which are based on published norms and/or norms computed by the provider (Ryan and Sackett 1987). It is important to ensure that the reference or comparison group is appropriate and representative of the individuals being assessed. While some would assume that most line managers could easily comprehend the meaning of a single percentile score, Muchinsky (2004) stated that in his personal experience managers do not know how to interpret test scores. For example, ‘I have witnessed managers that have a pronounced preference for a raw score of 90 or the 90th percentile. This preference is manifested without knowing the total number of questions on the test (they presume 100) or what norm group was used to derive the percentile score’ (p. 196). Thus, Muchinsky and others (Roose and Doherty 1976) proposed that ability test scores should be dichotomized into pass or fail. Taking this approach reduces the possibility of managers misinterpreting ability test scores. If test scores are reported to the organization, it is important that managers are trained in test interpretation and that expert guidance (i.e., a psychologist) is available to assist managers in interpreting test results. There is a need for research that examines whether managers can correctly interpret test scores and the extent to which training improves the ability to interpret psychological test data.
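As a minimal illustration of the two reporting formats discussed above, the sketch below converts a raw ability score into a percentile rank against a norm group and then dichotomizes it against a cutoff, in line with the pass/fail approach Muchinsky and others propose. The norm-group data, cutoff value and function names are hypothetical and chosen only for illustration.

```python
from bisect import bisect_left

def percentile_rank(raw_score: float, norm_scores: list) -> float:
    """Percentile rank of raw_score against a norm-group sample (share of norms scoring below)."""
    norms = sorted(norm_scores)
    below = bisect_left(norms, raw_score)
    return 100.0 * below / len(norms)

def pass_fail(raw_score: float, norm_scores: list, cutoff_percentile: float = 50.0) -> str:
    """Dichotomize a score into 'pass'/'fail', the simpler format suggested for reporting to managers."""
    return "pass" if percentile_rank(raw_score, norm_scores) >= cutoff_percentile else "fail"

# Hypothetical norm group of 200 applicants for a comparable job
norm_group = [42, 55, 61, 48, 70, 66, 59, 73, 51, 64] * 20
print(percentile_rank(63, norm_group))                    # 60.0 (the 60th percentile)
print(pass_fail(63, norm_group, cutoff_percentile=70))    # 'fail' against a 70th-percentile cutoff
```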

Personality tests

A key issue in the use of personality testing for selection purposes is whether the full personality profile, which may include job-irrelevant characteristics, should be reported to the manager, or only those characteristics that are job relevant, in other words a partial profile. The latter approach avoids reporting irrelevant personal characteristics that selectors may have difficulty ignoring (Cooper and Robertson 1995).

Several reasons are put forward by personality experts to support the whole personality approach. First, the practice of examining isolated, individual personality variables ignores configural interpretation (Smith 1994; Hogan, Hogan and Roberts 1996; Murphy and Davidshofer 1998). Configural interpretation recognizes that the way in which each trait operates depends, in part, on the pattern of other traits. Hence, Hogan and his colleagues stated that the practice of interpreting single personality traits is ‘risky’.

Personality traits interact with each other (Arthur, Woehr and Graziano 2001). For example, Witt, Burke, Barrick and Mount (2002) showed that in jobs characterized by cooperative behaviour with others, the interaction between conscientiousness and agreeableness explains additional variance in job performance after the separate effects of conscientiousness and agreeableness are taken into account. Nevertheless, there is little evidence that configural interpretation of personality tests improves predictive value (Highhouse 2002). Waters and Sackett (2006) showed that the traditional linear approach to scoring the Big Five personality traits outperformed configural scoring when predicting organizational citizenship behaviour and counterproductive behaviour.

The argument against reporting a person's whole profile is that irrelevant, non-job-related personality characteristics may be used as a basis for making a hiring decision. There is some evidence for this. Using a policy-capturing design, Hitt and Barr (1989) found that managers used job-irrelevant variables more than job-relevant variables to make selection decisions. Of concern was the finding that job-irrelevant data were used even though participants were directed toward specific job-relevant information with job descriptors. In a study of interviewers' perceptions of personality, Van Dam (2003) found that interviewers used three personality dimensions as a basis for hiring recommendations; these were emotional stability, conscientiousness and openness to experience. In general, openness to experience is not related to job performance (Hurtz and Donovan 2000). Thus, Van Dam's findings imply that managers use job-irrelevant test results when making a selection decision.

A limitation of the whole person approach to personality assessment is that it does not translate easily into applied settings (Arthur et al. 2001). Due to the complexity of personality, the whole person approach does not give a meaningful score that can be used to rank order applicants. There is no ‘p’ in personality assessment to parallel ‘g’ in ability testing (Hesketh and Robertson 1993). In summary, best practice is to ensure that only job-relevant personality data are reported to managers.

Previewing test information prior to interview

A key issue in using psychological tests is whether test information is previewed prior to the interview. From a practical viewpoint, previewing test information has a number of benefits (Dipboye, Fontenelle and Garner 1984). It may stimulate more productive questioning by providing the interviewer with leads to pursue in the interview. It should enable interviewers to gather unique information not readily available from other sources. In summary, practitioners are likely to argue that previewing enables them to make valid and useful judgements about an applicant's ‘hireability’. In practice, evidence indicates that interviewers do preview test scores (Ryan and Sackett 1987, 1992). However, if personality test results are used as the basis for interview questions and probes, the level of structure is diminished (Campion, Palmer and Campion 1997) and thus the predictive value of the interview is reduced.

Evidence on the issue of whether to preview test information suggests a range of unfavourable outcomes of previewing. McDaniel, Whetzel, Schmidt and Maurer's (1994) meta-analysis showed that selection interview validities for job performance were lower when the interviewer previewed cognitive test results (ρ = .18) compared with no preview (ρ = .32). This was shown to be true for both structured and unstructured interviews. Dalessio and Silverhart (1994) examined the impact of previewing biodata test results with a sample of life insurance sales applicants. The effects of previewing varied according to the score of the applicant. Two outcomes were examined: the decision to continue with the selection process and turnover 12 months later. The interview information predicted interviewer decisions and later turnover best for applicants with low passing scores on the biodata test and poorest for applicants with high passing scores. This finding suggests that interviewers may not give much weight to applicants' interview performance when they have a high biodata score.

Finally, Dipboye et al. (1984) conducted an experimental study of the impact of previewing an application, compared with no previewing, on interview process and outcomes. Interviewers who did not preview made more reliable evaluations of applicants' fit to the job and interview performance than those who did preview. Previewing the application had no influence on the accuracy of personality perceptions, nor did it give the interviewer much advantage in processing and retrieving information after the interview.

In addition to the impact of previewing test scores on interview structure, much has been written about how interviewers' information-processing strategies and capabilities affect interview outcomes (i.e., interviewer cognitive factors). Posthuma, Morgeson and Campion (2002) identified two streams of interview research within a cognitive framework. These are: (a) pre-interview impressions, which refer to applicant evaluations that are formed from information prior to the interview; and (b) confirmatory biases, in which interviewers seek out information that supports or confirms their hypotheses. Posthuma et al. (2002) concluded that pre-interview impressions seem to have varying degrees of influence; without studies of actual applicants it is difficult to draw conclusions. With regard to confirmatory biases a similar conclusion was drawn (see Posthuma et al. (2002) for a more detailed account).

In summary, the evidence on previewing test information suggests it reduces the predictive validity of the interview and leads to less accurate interview decisions. The evidence, however, is very limited and therefore conclusions are tentative. There is a need for studies that examine the impact of previewing test information, in particular personality data, on interview processes and outcomes (Ryan and Sackett 1998). Campion and his colleagues (1997) suggested that if test scores (and other ancillary information) are previewed prior to the interview, the data should be standardized.

Combining psychological test data with other selection methods

Psychological testing is typically one of a number of selection techniques used to make a selection decision. This raises the question: what is the added value of psychological testing?

Added value of psychological test data

In almost all selection procedures an interview is used (Robertson and Makin 1986; Keenan 1995; Di Milia and Smith 1997; Rioux and Bernthal 1999; Chartered Institute of Personnel and Development 2006; Carless 2007). Psychological tests are typically combined with interviews for selection purposes. Roth and Campion (1992) showed that combining interviews with cognitive ability tests explained an additional 10% of variance in job performance.

However, when examining the utility of psychological testing, it is important to take into account the degree of structure of the interview. Schmidt and Hunter (1998) reported that when cognitive ability tests were combined with structured interviews, an additional 12% of variance in overall job performance was explained, but in combination with unstructured interviews cognitive ability tests explained only an additional 4% of variance.

Following on from the work of Huffcutt and Arthur (1994) on interview structure, Cortina, Goldstein, Payne, Davison and Gilliland (2000) examined the incremental validity of three levels of structured interviews over cognitive ability tests and conscientiousness tests. Unfortunately for the purposes of this review, the authors did not distinguish between the unique effects of cognitive ability tests and conscientiousness tests. The categories of interviews were: no standardization of questions or scoring (Level 1); slight to moderate constraints on questions and scoring (Level 2); and moderate to full constraints on both questions and scoring (Level 3). The meta-analysis results confirmed the findings of Schmidt and Hunter (1998). Interviews explained incremental validity over and above that of cognitive ability and conscientiousness tests if they were highly structured (12.3–22.2% of unique variance). Unstructured interviews explained little unique variance over and above that of cognitive ability and conscientiousness tests (0.9–2.2% of unique variance). Moderately structured interviews were marginally better (1.8–6.2% of unique variance) than unstructured interviews; that is, they explained a small proportion of unique variance after cognitive ability and conscientiousness test scores were accounted for.

In summary, the combination of a structured interview with a cognitive ability test, or a structured interview with a cognitive ability test and a conscientiousness test, results in better prediction of work performance compared to using an interview alone for selection purposes.

Scoring psychological test and interview data

Multiple selection ratings can be combined either by a statistical or a judgemental method (Born and Scholarios 2005). The statistical (or mechanical) method uses a formula, based for example on multiple regression analysis, to combine interview and test scores. Other simple options include using rationally derived weights based on judgements of selection method validity or on the importance of the characteristic derived through job analysis (Guion 1998; Highhouse 2002; Born and Scholarios 2005). The statistical method of combining psychological test and interview scores is completely objective (Grove, Zald, Lebow, Snitz and Nelson 2000). In the judgemental (or clinical) method, the decision maker reviews interview and test scores and makes a global decision about the applicant's suitability. The interviewer puts the data together using informal, subjective methods. The judgemental method is more common than the statistical method (Campbell, Dunnette, Lawler and Weick 1970; Ryan and Sackett 1998; Born and Scholarios 2005).
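The following sketch illustrates one simple form of the statistical (mechanical) method described above: standardizing test and interview scores and combining them with rationally derived weights. The applicant data, weights and function names are hypothetical; in practice the weights would come from a regression equation or from job-analysis-based judgements, as the paragraph notes.

```python
from statistics import mean, stdev

def standardize(scores):
    """Convert raw scores to z-scores so different selection methods are on a common metric."""
    m, s = mean(scores), stdev(scores)
    return [(x - m) / s for x in scores]

def mechanical_composite(test_scores, interview_scores, w_test=0.6, w_interview=0.4):
    """Weighted sum of standardized test and interview scores (rational-weights example)."""
    z_test = standardize(test_scores)
    z_interview = standardize(interview_scores)
    return [w_test * t + w_interview * i for t, i in zip(z_test, z_interview)]

# Hypothetical applicants: cognitive ability test scores and structured-interview ratings
tests = [28, 35, 22, 31, 40]
interviews = [3.5, 4.0, 4.5, 3.0, 4.2]
composites = mechanical_composite(tests, interviews)
ranked = sorted(zip(composites, ["A", "B", "C", "D", "E"]), reverse=True)
print(ranked)  # applicants ordered by composite score, highest first
```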

A meta-analysis by Grove and his colleagues (2000) compared the predictive validity of statistical and judgemental methods across a range of business and non-business studies. Their sample consisted of 136 studies, varying from business start-up success to psychiatric diagnosis. They concluded that regardless of the criterion, the statistical method was superior to the judgemental method. Their results confirmed earlier conclusions (Dawes and Corrigan 1974; Sawyer 1966; Meehl 1986). Of interest was the finding that in half of the studies the judgemental method was approximately as good as the statistical method. The susceptibility of humans to errors is the general reason for the weaker judgemental prediction. Errors include the use of faulty heuristics, failure to understand regression to the mean, illusory correlations (overestimating the co-occurrence of rare behaviours or characteristics), confirmatory biases and ignoring base rates (Born and Scholarios 2005; Posthuma et al. 2002).

Ganzach, Kluger and Klayman (2000), using a very large sample of army applicants (N = 26,197) and interviewers (N = 116), confirmed the superiority of the statistical method over the judgemental method. However, the researchers also showed that using a combination of statistical and judgemental methods was associated with the highest predictive accuracy. That is, combining a global judgement about an applicant's likelihood of success in the military with a statistically combined score was more accurate than a statistically combined score alone. Caution needs to be used when generalizing these findings as the criterion was military disciplinary transgressions (e.g., desertion or imprisonment). There is a need for more workplace evidence.

The validity of multiple selection procedures is influenced by the way in which different selection method scores are combined to predict work performance (Sackett and Roth 1996; Murphy and Shiarella 1997; Schmitt, Rogers, Chan, Sheppard and Jennings 1997). When two or more selection methods are used in combination, and one method has lower predictive validity than the other, the results of those methods must be combined carefully to obtain the best prediction of how each candidate is likely to perform on the job. For example, Murphy and Shiarella found that about 23% of the variance in the correlations between predictor and criterion composites could be explained in terms of the weights assigned to cognitive ability and personality measures. In general, they found validities were highest when more emphasis was placed on cognitive abilities. However, if too much emphasis was placed on either cognitive ability or personality measures, validity can decrease.
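A small numerical sketch can make the point about weights concrete. Using the standard formula for the validity of a weighted composite of standardized predictors (not a formula reported in this article), the example below shows how the composite validity changes as more or less weight is placed on cognitive ability relative to a personality measure; the validities, intercorrelation and weights are illustrative values only.

```python
import numpy as np

def composite_validity(weights, validities, predictor_corr):
    """Validity of a weighted composite of standardized predictors.

    weights        : weights applied to each standardized predictor
    validities     : each predictor's correlation with job performance
    predictor_corr : matrix of predictor intercorrelations
    """
    w = np.asarray(weights, dtype=float)
    r = np.asarray(validities, dtype=float)
    R = np.asarray(predictor_corr, dtype=float)
    return float(w @ r / np.sqrt(w @ R @ w))

# Illustrative inputs: cognitive ability (validity .51) and conscientiousness (validity .31),
# assumed uncorrelated here purely for simplicity.
r = [0.51, 0.31]
R = [[1.0, 0.0], [0.0, 1.0]]
for w_ability in (1.0, 0.7, 0.5, 0.3, 0.0):
    w = [w_ability, 1.0 - w_ability]
    print(w, round(composite_validity(w, r, R), 3))
# Validity peaks when more (but not all) weight is on cognitive ability and falls
# when either predictor is overweighted, consistent with Murphy and Shiarella's point.
```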

There are a number of circumstances in which the statistical method of combining psychological test and interview data may not be appropriate, for example unusual jobs, executive positions, or lower-level jobs in small organizations when only one applicant will be chosen (Guion 1998; Guion and Highhouse 2006). Executive-level positions are complex and it is unlikely that adequate norms or ‘psychologically detailed behavioural job descriptions’ exist (Levinson 1998, p. 241). It is doubtful that an executive will fail because of technical mistakes (RHR International 1991). Thus, executive selection should differ from selection at other organizational levels; a judgemental approach is more appropriate (RHR International 1991; Guion 1998; Guion and Highhouse 2006). When the judgemental method is used, decision makers need to ensure that only relevant information is used. Ruderman and Ohlott (1990, cited in Sessa 2001) noted that the use of salient but irrelevant candidate characteristics, and the exclusion of summary data in favour of concrete, vivid information, are among the factors that have the greatest potential to influence the choice of an executive. In summary, best practice is to use a statistical method of combining psychological test scores with other selection data.

Hiring recommendation

A vexing issue for HR professionals is whether the manager should be given a hiring recommendation based on test scores. Guion (1998) strongly argued that the people responsible for the hiring decision should not abrogate the hiring decision to an ‘outsider’, but should get independent information and form their own view. He acknowledged that in his own professional work he refused to make recommendations. Taking this approach, the assessor's role is simply to report on and describe the candidate. De Wolff (1989) expressed a similar view and stated that although the psychologist can give advice he/she ‘does well to leave the final responsibility with the manager’ (p. 89). On the other hand, hiring managers rely on psychologists to interpret psychological data, in particular personality data. There is some evidence that psychologists involved in the interview process make more appropriate use of psychological test scores compared with non-psychologists (Vom Hofe and Levy-Leboyer 1993).

Whether a hiring recommendation is given may depend on the number of applicants assessed. In general, those who conduct individual assessments argue that a clear indication about the applicant's chances of success on the job should be given (Kwaske 2004; Meyer 1998; Prien, Schippmann and Prien 2003), for example through the use of a numeric rating of the overall suitability of the applicant (Prien et al. 2003). Ryan and Sackett's (1987) survey of industrial and organizational psychologists who conduct individual assessments found that the majority of respondents gave a hiring recommendation (68%) as well as describing the individual's strengths and weaknesses (68%). Ryan and Sackett's (1992) later survey reported very similar findings (61% and 70% respectively).

Hiring managers are more likely to use information presented in a comprehensible manner (Dutton and Ashford 1993; Carson, Becker and Henderson 1998). Thus, it is likely that managers would prefer a hiring recommendation. If hiring recommendations are given, it is recommended that managers do not consult the report until the interview is completed and ratings made.

Conclusion

This review has identified a range of practices associated with the use of psychological testing that, if adopted, can maximize the predictive value of testing. In summary, these are: (1) conduct a job analysis prior to selection, or use validity generalization research, to determine the constructs assessed by psychological tests; (2) do not report numeric ability test scores or full personality profiles to managers; (3) do not preview psychological test information prior to an interview; (4) combine psychological test data numerically with interview data; and (5) encourage hiring managers to make their own decision about the suitability of an applicant. For a variety of reasons managers do not readily adopt evidence-based selection practices (Johns 1993; Dipboye 1994; Terpstra and Rozell 1997; Subramony 2006). On the other hand, there is some evidence to indicate that line managers are keen to improve their selection skills and are receptive to evidence (Wright, McMahan, Snell and Gerhart 2001; Nowicki and Rosse 2002; Carless 2009). Line managers rely on HR professionals for advice about recruitment and selection (Budhwar 2000; Farndale 2005). The HR professional has an important role to play as a knowledge intermediary; they can promote the application of effective testing practices and, more generally, the adoption of evidence-based practice with regard to selection.

Acknowledgement

I would like to acknowledge the generous advice of Paul Taylor, Helen De Cieri and Felicity Allen.

Notes

1. Key search terms used were: selection; psychological testing; psychometric tests; general cognitive ability; cognitive ability testing; ability testing; and personality testing.

References

  • American Educational Research Association, American Psychological Association, and National Council on Measurement in Education . 1999 . Standards for Educational and Psychological Testing , Washington, DC : American Educational Research Association .
  • Anderson , N. 2005 . “ Relationships between Practice and Research in Personnel Selection: Does the Left Hand Know What the Right is Doing? ” . In The Blackwell Handbook of Personnel Selection , Edited by: Evers , A. , Anderson , N. and Voskuijl , O. 1 – 24 . Malden, MA : Blackwell .
  • Arthur , W.J. , Woehr , D. and Graziano , W. 2001 . Personality Testing in Employment Settings: Problems and Issues in the Application of Typical Selection Practices . Personnel Review , 30 : 657 – 676 .
  • Barrick , M.R. and Mount , M.K. 1991 . The Big Five Personality Dimension and Job Performance: A Meta-analysis . Personnel Psychology , 44 : 1 – 26 .
  • Bartram , D. 2001 . The Development of International Guidelines on Test Use: The International Test Commission Project . International Journal of Testing , 1 : 33 – 53 .
  • Berry , C.M. , Sackett , P.R. and Landers , R.N. 2007 . Revisiting Interview-cognitive Ability Relationships: Attending to Specific Range Restriction Mechanisms in Meta-analysis . Personnel Psychology , 60 : 837 – 874 .
  • Born , M.P. and Scholarios , D. 2005 . “ Decision Making in Selection ” . In The Blackwell Handbook of Personnel Selection , Edited by: Evers , A. , Anderson , N. and Voskuijl , O. 286 – 290 . Malden, MA : Blackwell .
  • Brannick , M. and Levine , E. 2002 . Job Analysis: Methods Research, and Applications for Human Resource Management in the New Millennium , London : Sage Publications .
  • Bratko , D. , Chamorro-Premuzic , N.M.A. and Saks , Z. 2006 . Personality and School Performance: Incremental Validity of Self- and Peer-ratings over Intelligence . Personality and Individual Differences , 41 : 131 – 142 .
  • Budhwar , P.S. 2000 . Evaluating Levels of Strategic Integration and Devolvement of Human Resource Management in the UK . Personnel Review , 29 : 141 – 157 .
  • Caldwell , R. 2001 . Champions, Adaptors, Consultants and Synergists: The New Change Agents in HRM . Human Resource Management Journal , 11 : 39 – 52 .
  • Campbell , J.P. , Dunnette , M.D. , Lawler , E.E. III and Weick , K.E. 1970 . Managerial Behavior, Performance, and Effectiveness , New York : McGraw-Hill .
  • Campion , M.A. , Palmer , D.K. and Campion , J.E. 1997 . A Review of Structure in the Selection Interview . Personnel Psychology , 50 : 655 – 702 .
  • Carless , S.A. 2007 . Graduate Recruitment and Selection in Australia . International Journal of Selection and Assessment , 15 : 153 – 166 .
  • Carless, S.A. (2009), ‘Managers' Perceptions of Psychological Assessment for Selection Purposes,’ paper submitted for publication
  • Carless , S.A. , Rasiah , J. and Irmer , B. 2009 . The Discrepancy between HR Research and Practice: A Comparison of I/O Psychologists and HR Practitioner's Beliefs . Australian Psychologist , 44 : 105 – 111 .
  • Carson , K.P. , Becker , J.S. and Henderson , J.A. 1998 . Is Utility Really Futile? A Failure to Replicate and an Extension . Journal of Applied Psychology , 83 : 84 – 96 .
  • Cascio , W.F. and Aguinis , H. 2005 . Test Development and Use: New Twists on Old Questions . Human Resource Management , 44 : 219 – 235 .
  • Cascio , W.F. , Outtz , J. , Zedeck , S. and Goldstein , I.L. 1991 . Statistical Implications of Six Methods of Test Score Use in Personnel Selection . Human Performance , 4 : 233 – 264 .
  • Chartered Institute of Personnel and Development . 2006 . Recruitment, Retention and Turnover , London : CIPD .
  • Cooper , D. and Robertson , I.T. 1995 . The Psychology of Personnel Selection: A Quality Approach , New York : Routledge .
  • Cortina , J.M. , Goldstein , N.B. , Payne , S.C. , Davison , H.K. and Gilliland , S.W. 2000 . The Incremental Validity of Interview Scores Over and Above Cognitive Ability and Conscientiousness Scores . Personnel Psychology , 53 : 325 – 342 .
  • Dalessio , A.T. and Silverhart , T.A. 1994 . Combining Biodata Test and Interview Information: Predicting Decisions and Performance Criteria . Personnel Psychology , 47 : 303 – 315 .
  • Dawes , R.M. and Corrigan , D. 1974 . Linear Models in Decision Making . Psychological Bulletin , 81 : 95 – 106 .
  • De Wolff , C.J. 1989 . “ The Changing Role of Psychologists in Selection ” . In Assessment and Selection in Organizations , Edited by: Herriot , P. 81 – 91 . Chichester : John Wiley and Sons .
  • Di Milia , L. and Smith , P. 1997 . Australian Management Selection Practices: Why Does the Interview Remain Popular? . Asia Pacific Journal of Human Resources , 35 : 90 – 103 .
  • Dipboye , R.L. 1994 . Structured And Unstructured Selection Interviews: Beyond the Job–fit model . Research in Personnel and Human Resources Management , 12 : 79 – 123 .
  • Dipboye , R.L. , Fontenelle , G.A. and Garner , K. 1984 . Effects of Previewing the Application on Interview Process and Outcomes . Journal of Applied Psychology , 69 : 118 – 128 .
  • Dutton , J.E. and Ashford , S.J. 1993 . Selling Issues to Top Management . Academy of Management Review , 18 : 397 – 428 .
  • Farndale , E. 2005 . HR Department Professionalism: A Comparison between the UK and other European Countries . International Journal of Human Resource Management , 16 : 660 – 675 .
  • Ganzach , Y. , Kluger , A.N. and Klayman , N. 2000 . Making Decisions from an Interview: Expert Measurement and Mechanical Combination . Personnel Psychology , 53 : 1 – 17 .
  • Gelade , G.A. 2006 . But What Does it Mean in Practice? The Journal of Occupational and Organizational Psychology from a Practitioner Perspective . Journal of Occupational and Organizational Psychology , 79 : 153 – 160 .
  • Griffeth , R.W. and Hom , P.W. 2001 . Retaining Valued Employees , Thousand Oaks, CA : Sage .
  • Grove , W.M. , Zald , D.H. , Lebow , B.S. , Snitz , B.E. and Nelson , C. 2000 . Clinical versus Mechanical Prediction: A Meta-analysis . Psychological Assessment , 12 : 19 – 30 .
  • Guest , D. , Michie , J. , Conway , N. and Sheehan , M. 2003 . Human Resource Management and Corporate Performance in the UK . British Journal of Industrial Relations , 41 : 291 – 314 .
  • Guion , R.M. 1998 . Assessment, Measurement, and Prediction for Personnel Decisions , Mahwah, NJ : Lawrence Erlbaum .
  • Guion , R.M. and Highhouse , S. 2006 . Essentials of Personnel Assessment and Selection , Mahwah, NJ : Lawrence Erlbaum .
  • Harel , G. and Tzafrir , S. 2001 . HRM Practices in the Public and Private Sectors: Differences and Similarities . Public Administration Quarterly , 25 : 317 – 350 .
  • Harris , M.M. , Dworkin , J.B. and Park , J. 1990 . Pre-employment Screening Procedures: How Human Resource Managers Perceive Them . Journal of Business and Psychology , 4 : 279 – 292 .
  • Hesketh , B. and Robertson , I. 1993 . Validating Personnel Selection: A Process Model for Research and Practice . International Journal of Selection and Assessment , 1 : 3 – 17 .
  • Highhouse , S. 2002 . Assessing the Candidate as a Whole: A Historical and Critical Analysis of Individual Psychological Assessment for Personnel Decision Making . Personnel Psychology , 55 : 363 – 396 .
  • Hitt , M.A. and Barr , S.H. 1989 . Managerial Selection Decision Models: Examination of Configural Cue Processing . Journal of Applied Psychology , 74 : 53 – 61 .
  • Hodgkinson , G.P. 2006 . The Role of JOOP (and Other Scientific Journals in Bridging the Practitioner–researcher Divide in Industrial, Work and Organizational (IWO)) Psychology . Journal of Occupational and Organizational Psychology , 79 : 173 – 178 .
  • Hogan , R. , Hogan , J. and Roberts , B.W. 1996 . Personality Measurement and Employment Decisions: Questions and Answers . American Psychologist , 51 : 469 – 477 .
  • Hoque , K. and Noon , M. 2001 . Counting Angels: A Comparison of Personnel and HR Specialists . Human Resource Management Journal , 11 : 5 – 22 .
  • Huffcutt , A.I. and Arthur , W. Jr . 1994 . Hunter and Hunter (1984) Revisited: Interview Validity for Entry-level Jobs . Journal of Applied Psychology , 79 : 184 – 190 .
  • Hurtz , G.M. and Donovan , J.J. 2000 . Personality and Job Performance: The Big Five Revisited . Journal of Applied Psychology , 85 : 869 – 879 .
  • Huselid , M.A. 1995 . The Impact of Human Resource Management Practices on Turnover, Productivity, and Corporate Financial Performance . Academy of Management Journal , 38 : 635 – 672 .
  • International Test Commission . 2001 . International Guidelines for Test Use . International Journal of Testing , 1 : 93 – 114 .
  • Jeanneret , P.R. 1992 . Applications of Job Component/Synthetic Validity to Construct Validity . Human Performance , 5 : 81 – 96 .
  • Johns , G. 1993 . Constraints on the Adoption of Psychology-based Personnel Practices: Lessons from Organizational Innovation . Personnel Psychology , 46 : 569 – 588 .
  • Judge , T.A. , Higgins , C.A. , Thoresen , C.J. and Barrick , M.R. 1999 . The Big Five Personality Traits, General Mental Ability, and Career Success across the Life Span . Personnel Psychology , 52 : 621 – 652 .
  • Keenan , T. 1995 . Graduate Recruitment in Britain: A Survey of Selection Methods Used by Organizations . Journal of Organizational Behavior , 16 : 303 – 317 .
  • Kehoe , J.F. 2000 . “ Research and Practice in Selection ” . In Managing Selection in Changing Organizations , Edited by: Kehoe , J.F. 123 – 157 . San Francisco, CA : Jossey-Bass .
  • Klehe , U.C. 2004 . Choosing How to Choose: Institutional Pressures Affecting the Adoption of Personnel Selection Procedures . International Journal of Selection and Assessment , 12 : 327 – 342 .
  • Kwaske , I.H. 2004 . Individual Assessments for Personnel Selection: An Update on a Rarely Researched but Avidly Practiced Practice . Consulting Psychology Journal: Practice and Research , 56 : 186 – 193 .
  • Latham , G.P. and Latham , S. 2003 . Facilitators and Inhibitors of the Transfer of Knowledge between Scientists and Practitioners in Human Resource Management: Leveraging Cultural, Individual, and Institutional Variables . European Journal of Work and Organizational Psychology , 12 : 245 – 256 .
  • Le , H. , Oh , I. , Shaffer , J. and Schmidt , F. 2007 . Implications of Methodological Advances for the Practice of Personnel Selection: How Practitioners Benefit from Meta-analysis . Academy of Management Perspectives , 21 : 6 – 18 .
  • Legge , K. 1995 . Human Resource Management – Rhetorics and Realities , Basingstoke : Macmillan .
  • Levinson , H. 1998 . “ A Clinical Approach to Executive Selection ” . In Individual Psychological Assessment: Predicting Behavior in Organizational Settings , Edited by: Jeanneret , R. and Silzer , R. 228 – 242 . San Francisco, CA : Jossey-Bass .
  • McDaniel , M.A. , Morgeson , F.P. , Finnegan , E.B. , Campion , M.A. and Braverman , E.P. 2001 . Use of Situational Judgment Tests To Predict Job Performance: A Clarification of the Literature . Journal of Applied Psychology , 86 : 730 – 740 .
  • McDaniel , M.A. , Whetzel , D.L. , Schmidt , F.L. and Maurer , S.D. 1994 . The Validity of Employment Interviews: A Comprehensive Review and Meta-analysis . Journal of Applied Psychology , 79 : 599 – 616 .
  • Meehl , P.E. 1986 . Causes and Effects of my Disturbing Little Book . Journal of Personality Assessment , 50 : 370 – 375 .
  • Meyer , P. 1998 . “ Communicating Results for Impact ” . In Individual Psychological Assessment: Predicting Behavior in Organizational Settings , Edited by: Jeanneret , R. and Silzer , R. 243 – 282 . San Francisco, CA : Jossey-Bass .
  • Morgeson , F.P. , Campion , M.A. , Dipboye , R.L. , Hollenbeck , J.R. , Murphy , K. and Schmitt , N. 2007a . Reconsidering the Use of Personality Tests in Personnel Selection Contexts . Personnel Psychology , 60 : 683 – 729 .
  • Morgeson , F.P. , Campion , M.A. , Dipboye , R.L. , Hollenbeck , J.R. , Murphy , K. and Schmitt , N. 2007b . Are We Getting Fooled Again? Coming to Terms with Limitations in the use of Personality Tests for Personnel Selection . Personnel Psychology , 60 : 1029 – 1049 .
  • Mount , M.K. , Barrick , M.R. and Strauss , J. 1994 . Validity of Observer Ratings of the Big Five Personality Factors . Journal of Applied Psychology , 79 : 272 – 280 .
  • Muchinsky , P.M. 2004 . When the Psychometrics of Test Development Meets Organizational Realities: A Conceptual Framework for Organizational Change, Examples, and Recommendations . Personnel Psychology , 57 : 175 – 208 .
  • Murphy , K. 2000 . Impact of Assessments of Validity Generalization and Situational Specificity on the Science and Practice of Personnel Selection . International Journal of Selection and Assessment , 8 : 194 – 206 .
  • Murphy , K.R. and Davidshofer , C.O. 1998 . Psychological Testing: Principles and Applications (4th ed.) , Englewood Cliffs, NJ : Prentice Hall .
  • Murphy , K.R. and Shiarella , A.H. 1997 . Implications of the Multidimensional Nature of Job Performance for the Validity of Selection Tests: Multivariate Frameworks for Studying Test Validity . Personnel Psychology , 50 : 823 – 854 .
  • Nowicki , M.D. and Rosse , J.G. 2002 . Managers' View of How to Hire: Building Bridges between Science and Practice . Journal of Business Psychology , 17 : 157 – 169 .
  • O'Leary , B.S. , Lindholm , M.L. , Whitford , R.A. and Freeman , S.E. 2002 . Selecting the Best and Brightest: Leveraging Human Capital . Human Resource Management , 41 : 325 – 340 .
  • Ones , D.S. , Dilchert , S. , Viswesvaran , C. and Judge , T.A. 2007 . In Support Of Personality Assessment in Organizational Settings . Personnel Psychology , 60 : 995 – 1027 .
  • Pfeffer , J. 1994 . Competitive Advantage through People: Unleashing the Power of the Workforce , Boston, MA : Harvard Business School Press .
  • Posthuma , R.A. , Morgeson , F.P. and Campion , M.A. 2002 . Beyond Employment Interview Validity: A Comprehensive Narrative Review of Recent Research and Trends over Time . Personnel Psychology , 55 : 1 – 52 .
  • Prien , E.P. , Schippmann , J.S. and Prien , K.O. 2003 . Individual Assessment: As Practiced in Industry and Consulting , Mahwah, NJ : Lawrence Erlbaum .
  • RHR International . 1991 . “ The Psychological Assessment of Top Executives: An Interview-based Approach ” . In A Handbook of Psychological Assessment in Business , Edited by: Hansen , C.P. (Vol. 13) , 131 – 139 . New York : Quorum Books .
  • Rioux, S.M., and Bernthal, P. (1999), Recruitment and Selection Practices, Development Dimensions International, available at: www.ddiworld.com/pdf/recruitmentandselectionpractices_fullreport_ddi.pdf
  • Robertson , I.T. and Makin , P.J. 1986 . Management Selection in Britain: A Survey and Critique . Journal of Occupational Psychology , 59 : 45 – 57 .
  • Roose , J.E. and Doherty , M.E. 1976 . Judgment Theory Applied to the Selection of Life Insurance Salesmen . Organizational Behavior and Human Performance , 16 : 231 – 249 .
  • Roth , P.L. and Campion , J.E. 1992 . An Analysis of the Predictive Power of the Panel Interview and Pre-employment Tests . Journal of Occupational and Organizational Psychology , 65 : 51 – 60 .
  • Ryan , A.M. and Sackett , P.R. 1987 . A Survey of Individual Assessment Practices by I/O Psychologists . Personnel Psychology , 40 : 455 – 488 .
  • Ryan , A.M. and Sackett , P.R. 1992 . Relationships between Graduate Training Professional Affiliation, and Individual Psychological Assessment Practices for Personnel Decisions . Personnel Psychology , 45 : 363 – 387 .
  • Ryan , A.M. and Sackett , P.R. 1998 . “ Individual Assessment: The Research Base ” . In Individual Psychological Assessment: Predicting Behavior in Organizational Settings , Edited by: Jeanneret , R. and Silzer , R. 54 – 87 . San Francisco, CA : Jossey-Bass .
  • Ryan , A.M. and Tippins , N.T. 2004 . Attracting and Selecting: What Psychological Research Tells Us . Human Resource Management , 43 : 305 – 318 .
  • Rynes , S. , Colbert , A. and Brown , K. 2002 . HR Professionals' Beliefs about Effective Human Resource Practices: Correspondence between Research and Practice . Human Resource Management , 41 : 149 – 174 .
  • Rynes , S.L. , Giluk , T.L. and Brown , K.G. 2007 . The Very Separate Worlds of Academic and Practitioner Periodicals in Human Resource Management: Implications for Evidence-based Management . Academy of Management Journal , 50 : 987 – 1008 .
  • Sackett , P.R. and Roth , L. 1996 . Multi-stage Selection Strategies: A Monte Carlo Investigation of Effects on Performance and Minority Hiring . Personnel Psychology , 49 : 549 – 572 .
  • Salgado , J.F. and De Fruyt , F. 2005 . “ Personality in Personnel Selection ” . In The Blackwell Handbook of Personnel Selection , Edited by: Evers , A. , Anderson , N. and Voskuijl , O. 174 – 198 . Malden, MA : Blackwell .
  • Sawyer , J. 1966 . Measurement and Prediction, Clinical and Statistical . Psychological Bulletin , 66 : 178 – 200 .
  • Scherbaum , C.A. 2005 . Synthetic Validity: Past, Present, and Future . Personnel Psychology , 58 : 481 – 515 .
  • Schmidt , F.L. and Hunter , J. 1998 . The Validity and Utility of Selection Methods in Personnel Psychology: Practical and Theoretical Implications of 85 Years of Research Findings . Psychological Bulletin , 124 : 262 – 274 .
  • Schmitt , N. , Rogers , W. , Chan , D. , Sheppard , L. and Jennings , D. 1997 . Adverse Impact and Predictive Efficiency of Various Predictor Combinations . Journal of Applied Psychology , 82 : 719 – 730 .
  • Sessa , V.I. 2001 . “ Executive Promotion and Selection ” . In How People Evaluate Others in Organizations , Edited by: London , M. 91 – 110 . Mahwah, NJ : Lawrence Erlbaum Associates .
  • Singh , J. 2001 . McKinsey's Managing Director Rajat Gupta on Leading a Knowledge-based Global Consulting Organization . Academy of Management Executive , 15 : 34 – 44 .
  • Smith , M. 1994 . A Theory of Validity of Predictors in Selection . Journal of Occupational and Organizational Psychology , 67 : 13 – 31 .
  • Society for Industrial and Organizational Psychology, Inc. (SIOP) . 2003 . Principles for the Validation and Use of Personnel Selection Procedures (4th ed.) , Bowling Green, OH : SIOP .
  • Steel , P.D.G. , Huffcutt , A. and Kammeyer-Mueller , J. 2006 . From the Work One Knows the Worker: A Systematic Review of the Challenges, Solutions, and Steps to Creating Synthetic Validity . International Journal of Selection and Assessment , 14 : 16 – 36 .
  • Subramony , M. 2006 . Why Organizations Adopt Some Human Resource Management Practices and Reject Others: An Exploration of Rationales . Human Resource Management , 45 : 195 – 210 .
  • Taylor , P. , Keelty , Y. and McDonnell , B. 2002 . Evolving Personnel Selection Practices in New Zealand Organisations and Recruitment Firms . New Zealand Journal of Psychology , 31 : 8 – 18 .
  • Taylor , P.J. , Pajo , K. , Cheung , G.W. and Stringfield , P. 2004 . Dimensionality and Validity of a Structured Telephone Reference Check Procedure . Personnel Psychology , 57 : 745 – 772 .
  • Terpstra , D.E. and Rozell , E.J. 1993 . The Relationship of Staffing Practices to Organizational Level Measures of Performance . Personnel Psychology , 46 : 27 – 48 .
  • Terpstra , D.E. and Rozell , E.J. 1997 . Why Some Potentially Effective Staffing Practices are Seldom Used . Public Personnel Management , 26 : 483 – 493 .
  • Terpstra , D.E. and Rozell , E.J. 1998 . Human Resource Executives' Perceptions of Academic Research . Journal of Business and Psychology , 13 : 19 – 29 .
  • Tett , R.P. and Christiansen , N.D. 2007 . Personality Tests at the Crossroads: A Response to Morgeson, Campion, Dipboye, Hollenbeck, Murphy, and Schmitt (2007) . Personnel Psychology , 60 : 967 – 993 .
  • Tett , R.P. , Jackson , D.N. and Rothstein , M. 1991 . Personality Measures as Predictors of Job Performance: A Meta-analytic Review . Personnel Psychology , 44 : 703 – 742 .
  • US Department of Labor, Employment and Training Administration . 2000 . “ Testing and Assessment: An Employer's Guide to Good Practices ” . Washington, DC : US Department of Labor, Employment and Training Administration .
  • Van Dam , K. 2003 . Trait Perception in the Employment Interview: A Five-factor Model Perspective . International Journal of Selection and Assessment , 11 : 43 – 53 .
  • Vom Hofe , A. and Levy-Leboyer , C. 1993 . Evaluation of the Use of Personality Tests in Personnel Selection in France . European Review of Applied Psychology , 43 : 221 – 227 .
  • Waters, S.D., and Sackett, P.R. (2006), ‘On the Possibility of Using Configural Scoring to Enhance Prediction,’ paper presented at the 21st Annual Conference Society of Industrial and Organizational Psychology, Dallas, 5–7 May
  • Wiesner , W.H. and Cronshaw , S.F. 1988 . A Meta-analytic Investigation of the Impact of Interview Format and Degree of Structure on the Validity of the Employment Interview . Journal of Occupational and Organizational Psychology , 61 : 275 – 290 .
  • Witt , L.A. , Burke , L.A. , Barrick , M.R. and Mount , M. 2002 . The Interactive Effects of Conscientiousness and Agreeableness on Job Performance . Journal of Applied Psychology , 87 : 164 – 169 .
  • Wolf , A. and Jenkins , A. 2006 . Explaining Greater Test Use for Selection: The Role of HR Professionals in a World of Expanding Regulation . Human Resource Management Journal , 16 : 193 – 213 .
  • Wright , P.M. , McMahan , G.C. , Snell , S.A. and Gerhart , B. 2001 . Comparing Line and HR Executives' Perceptions of HR Effectiveness: Services, Roles, and Contributions . Human Resource Management , 40 : 111 – 123 .
