ABSTRACT
Although interchangeability of results across computer and paper modes of administration is commonly assumed, recent meta-analyses and individual studies continue to reveal mean differences in scores on measures of socially desirable responding (SDR). These studies have also failed to address new methods of scoring and crucial aspects of scaling, reliability, validity, and administration emphasized in professional standards for assessment that are essential to establishing equivalence. We addressed these shortcomings in a comprehensive, repeated measures investigation of 6 ways of scoring the Balanced Inventory of Desirable Responding (BIDR), one of the most frequently administered companion measures of SDR in research and practice. Results for many previously unexamined, standards-driven aspects of scaling, reliability, and validity strongly supported the interchangeability of scores across modes of administration. Computer questionnaires also took considerably less time to complete and were overwhelmingly favored by respondents with respect to the physical characteristics of the measures, appraisals of the assessment experience, and the perceived quality of the information obtained. Collectively, these results highlight the importance of following professional standards when constructing and administering computerized assessments, as well as the evolution of computer technology in providing viable, effective, and accepted platforms for administering and scoring the BIDR in numerous ways.