Abstract
Standardized tests have become increasingly controversial in recent years in high-stakes admission decisions. Their role in operationalizing definitions of merit and qualification is contested in many fields, and in law schools the debate has become particularly intense. Since the 1940s, law schools have relied on the Law School Admission Test (LSAT) and on an index that combines the LSAT with undergraduate grade point average (GPA). The LSAT measures analytic reasoning, logical reasoning, and reading. Research has focused on the validity of the LSAT as a predictor of 1st-year GPA in law school, with almost no research on predicting lawyering effectiveness. This article examines the comparative potential of the LSAT and of noncognitive predictors (e.g., personality, situational judgment, and biographical information) for predicting lawyering effectiveness. Theoretical links between 26 lawyering effectiveness factors and potential predictors are discussed and evaluated. Implications for broadening the criterion space, for diversity in admissions, and for the practice of law are discussed.
ACKNOWLEDGMENTS
The research on the LSAT described in this article received funding from the Law School Admission Council (LSAC). The opinions and conclusions are those of the authors and do not necessarily represent the positions or policy of the LSAC.
Notes
In a study of University of California law school admission statistics, Kidder (2000) found that in 1998, holding undergraduate institution and major constant, for applicants who had GPAs of 3.75 or more, a 5-point difference in LSAT score cut the chance of admission from 89% to 44% at Berkeley Law School; for the same year at UCLA, the chance of admission dropped from 66% to 10%.
Most factors in the ranking depend either on reputation as evaluated by various relevant audiences or on matters related to institutional wealth. One factor is based on the LSAT. Until recently, the rankings used the LSAT scores of entering students at the 25th and 75th percentiles of a class; currently the ranking uses the class's median LSAT score.
Barrett (2008) undertook an analysis of the project's ratings within rater groups. He concluded that averaging both the two peer ratings and the two supervisor ratings was reasonable. Additional analyses indicated that sufficient similarity existed between averaged supervisor and averaged peer ratings to average the two averages, yielding an "Other" rating viewpoint. See Shultz and Zedeck (2008) for a complete presentation of the intercorrelations among all performance perspectives.
All reported correlations in this section were significant at the .05 level.
These correlations were significant at the .05 level.
A Self-Monitoring Scale-type test rewritten specifically for law performance might show better results.
An emotion recognition (ER) test allowing longer time intervals, presenting fewer emotions, and using more consistent photographs might have improved results.