ABSTRACT
We use grade sheets for 77,420 courses from all 34 Texas public universities for 2012–2019. Significant differences in grades exist across majors. In 2019, economics (29%) and maths (31%) had the lowest percentage of A’s, while education (nearly 70%) and arts-performance and studio (64%) had the highest. A’s rose from 42.0% in 2012 to 48.5% in 2019; 73.2% of that increase is unexplained, i.e. could be deemed ‘grade inflation’. Introductory classes have a lower percentage of A’s than non-introductory classes, but both experienced a similar increase in A’s.
Acknowledgements
The authors thank Ekemini Usuah, Andrew Cross, and Matt Preston for excellent research assistance.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Data availability
The data that support the findings of this study are available upon reasonable request.
Notes
1 In the whole sample of 113,310 courses, we have 31.6 million student grades. In the sample with a matching university name and no missing grades or university characteristics, we have 77,420 grade sheet observations (17.5 million student grades). There are 37 state universities, but we had to omit three small universities because they have no SAT scores.
2 We omit Education-higher and Library and Archival Studies because there are no undergraduate courses in these two fields in Texas.
3 As a validity check on our matching process, we compared our grade data with the complete grade data at Texas A&M University (TAMU) at College Station. Our matched data from THECB for TAMU in 2018 and 2019 are similar to the grade data based on the full set of classes: for example, the percentages of A’s in our matched data for 2018 and 2019 are 50.7 and 52.0, close to the 51.1 and 52.2 in the complete data for TAMU.
4 Some caution should be used when interpreting the impact of the SAT score since it is nationally normed; still, SAT variation across Texas universities may detect some impact. We also tried the percentage of incoming freshmen who ranked in the top 10% of their high school graduating class, a state-normed variable; Black, Denning, and Rothstein (Citation2023) find a modest impact of the Top Ten Percent rule in Texas. In our results this variable was less significant than the SAT score and was not included.
5 We defined a course as introductory if (1) its title includes ‘intro’, ‘prin’, or ‘fund’ and (2) it is a lower-level course: for example, Principles of Economics at the lower level is an introductory course.
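This two-part rule can be sketched as a small helper. The function name and the `level == 'lower'` encoding are illustrative assumptions, not the paper's actual code:

```python
def is_introductory(title: str, level: str) -> bool:
    """Flag a course as introductory per the paper's rule:
    (1) the title contains 'intro', 'prin', or 'fund', AND
    (2) the course is lower-level.
    The string 'lower' for the level argument is an assumed encoding."""
    keywords = ("intro", "prin", "fund")
    return level == "lower" and any(k in title.lower() for k in keywords)

# Example: Principles of Economics at the lower level is introductory.
is_introductory("Principles of Economics", "lower")   # True
is_introductory("Principles of Economics", "upper")   # False
is_introductory("Advanced Econometrics", "lower")     # False
```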
6 Note that some small sections are included in the data if the course is taught more than once in the academic year, e.g. a section of 24 students in the fall and one of 27 in the spring, for a total of 51 students.
7 The number of courses with identified university information and available SAT data is 77,420. There were about 1.5 percentage points more A’s in our matched sample than in the whole sample because the matched sample contains fewer introductory classes (introductory classes often share the same course number across many universities). It is important to note that the matching rate does not vary much over time. First, matching rose slightly from 67.1% in 2012 to 70% in 2015 and hovered around 70% through 2019. Second, the upward trend in A’s is similar in the matched (6.5 percentage point increase) and non-matched (6.0 percentage point increase) samples from 2012 to 2019. We thank an anonymous reviewer for suggesting this comparison.
8 We choose to not mention their names so as to not imply any judgements about their grading policies.
9 The truncation of the data to year-long course enrolments of at least 51 does not bias our fixed effects estimates, since the truncation is not based on the dependent variable.
10 We also estimated the fractional response model (Papke and Wooldridge Citation2008), since the dependent variable can be measured as a ratio between 0 and 1 inclusive, and found similar partial effects.
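The fractional response approach fits a Bernoulli quasi-likelihood with a logit mean function to a dependent variable in [0, 1]. A minimal pure-Python sketch, on simulated data rather than the paper's (the single regressor, the true slope of 0.8, and the gradient-ascent fit are all illustrative assumptions):

```python
import math
import random

def fractional_logit(xs, ys, lr=0.5, iters=5000):
    """Bernoulli quasi-MLE with a logit mean (fractional logit) for a
    response in [0, 1], fit by gradient ascent on the average
    quasi-log-likelihood. One regressor plus intercept for simplicity."""
    a, b = 0.0, 0.0
    n = len(xs)
    for _ in range(iters):
        ga = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(a + b * x)))
            ga += y - p          # score w.r.t. intercept
            gb += (y - p) * x    # score w.r.t. slope
        a += lr * ga / n
        b += lr * gb / n
    return a, b

# Simulated fractions (think: share of A's in a course) with true slope 0.8.
random.seed(0)
xs = [random.gauss(0.0, 1.0) for _ in range(400)]
ys = [min(1.0, max(0.0,
      1.0 / (1.0 + math.exp(-(0.5 + 0.8 * x))) + random.gauss(0.0, 0.05)))
      for x in xs]

a_hat, b_hat = fractional_logit(xs, ys)

# Average partial effect of x on the expected fraction: b * mean(p * (1 - p)).
ps = [1.0 / (1.0 + math.exp(-(a_hat + b_hat * x))) for x in xs]
ape = b_hat * sum(p * (1.0 - p) for p in ps) / len(ps)
```

The "similar partial effects" in the note correspond to comparing an `ape` of this kind to the linear fixed-effects coefficient; in practice one would also use robust standard errors as in Papke and Wooldridge.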
11 An anonymous reviewer noted that a possible Gelman Type M error could be present because we use data averaged at the university level. To investigate the scope of this issue, we approximated the average GPA (using reasonable assumptions about the distribution of D’s and F’s) so that we could compare our estimated impacts of the SAT and male variables on GPA to Hernández-Julián and Looney’s (Citation2016) estimates. Our estimate of the impact of the SAT variable is somewhat smaller, while the male variable impact is very similar. While our estimates seem reasonable, we must still be mindful of possible Gelman Type M errors. As another anonymous reviewer noted, using averaged regressors is less efficient (although it does not lead to inconsistency) than using regressors at the student or class level.