
Choosing College Wisely: Comparative Earning Data as a Key Factor in Selecting Colleges and Majors

In Short

  • The data offered by the College Scorecard (CSC) have facilitated the development of a number of different ranking methodologies that attempt to measure the “value” gained from attending different colleges and programs.

  • This article analyzes five CSC-based systems that measure educational return on investment in various ways.

  • There are important differences in how these methodologies balance the costs of college and median earnings of graduates to assign relative value.

  • Limitations in data granularity prevent important analyses, particularly comparisons of how well universities serve members of underserved demographic groups.

  • Attempts to augment CSC data with other sources, including Census Bureau data in particular, are flawed because of important differences in how data cohorts are constructed.

The number one reason that students give for going to college is their expectation that a degree will improve their chances of landing a desirable job and earning a good salary. Historically, however, it has been difficult for students and their families to compare how different colleges and programs deliver on this promise. The U.S. Department of Education’s 2015 release of student earning data was a major step toward remedying this problem. The report, called the College Scorecard (CSC; https://collegescorecard.ed.gov/), is now updated annually.

CSC represents a watershed for the college ranking industry. It offers an alternative to traditional ranking systems like U.S. News and World Report, which evaluates schools using inputs like average standardized test scores, teacher–student ratios, reputation, and alumni giving. U.S. News and World Report also relies on information reported by the colleges themselves, which has led to periodic scandals and questions about the integrity of the data. These older ranking systems feel increasingly out of touch at a time of declining enrollment, a major consequence of spiraling costs and soaring levels of student debt, which in turn feed rising skepticism about the value of higher education.

More and more, families and students are questioning whether college is worth the required investment. CSC is the best source for providing answers to this question. While CSC can only measure average financial outcomes, these are the metrics that students and their families desire most.

In addition, CSC shapes what other ranking systems now include in their conceptions and methodologies, which has a reforming effect on the higher education selection marketplace.

We review five CSC-based systems that measure educational return on investment (ROI) in various ways. We highlight the value of the different methodologies, explain the shortcomings of CSC data that limit the breadth of analyses possible, and, finally, illustrate how cost and earnings are weighted to varying degrees in the different models.

Third Way

Third Way’s Price-to-Earnings Premium (PEP; https://www.thirdway.org/report/price-to-earnings-premium-a-new-way-of-measuring-return-on-investment-in-higher-ed) is an indicator for students to estimate how quickly their educational investment may pay off economically. Third Way’s PEP measures the relationship between cost and earnings at each reported college and university using two basic metrics:

  1. Total cost

  2. Marginal earnings

Third Way calculates total cost by multiplying the annual net price paid by students by the average time, in years and months, it takes to graduate.

Marginal earnings are calculated by subtracting the average high-school graduate’s wage—an estimate of what a student would otherwise earn—from the median earnings of graduates from a particular college or university.

Dividing total cost by marginal earnings yields an estimate, in years and months, of how long it takes the average student to pay back their educational investment with postgraduate earnings. This equation represents the way Third Way derives their PEP:

Total Cost ÷ Marginal Earnings = price to earnings premium

We can use Third Way’s top 4-year institution, California State University, Dominguez Hills (CSUDH), as an example:

Average annual cost of $3,221 × 4 years (average time to complete) = total cost of $12,884

$44,700 median earnings – $24,820 average statewide high-school graduate earnings = $19,880 marginal earnings

$12,884 ÷ $19,880 = a 0.648 price-to-earnings premium, representing roughly eight months of postgraduation earnings.

The lower the PEP, the better. It is a performance indicator that shows the relationship between each institution’s reported costs and how much a student earns after graduation.
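As a rough illustration of the arithmetic, the following Python sketch reproduces the PEP calculation for the CSUDH figures above; the function and variable names are ours, not Third Way’s.

```python
def price_to_earnings_premium(annual_net_price, years_to_complete,
                              median_earnings, hs_median_earnings):
    """Third Way-style PEP: years of marginal earnings needed to repay total cost."""
    total_cost = annual_net_price * years_to_complete
    marginal_earnings = median_earnings - hs_median_earnings
    return total_cost / marginal_earnings

# CSUDH example from the text
pep = price_to_earnings_premium(3221, 4, 44700, 24820)
print(f"PEP: {pep:.3f} years (about {pep * 12:.0f} months)")  # 0.648 years, roughly 8 months
```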

PEP is a useful tool for policy makers to determine which schools provide the highest return on investment in higher education, including for costs paid through federal and state grants and loans, and which do not. Colleges producing negligible marginal earnings for students, particularly sums that do not offset their costs, should have to explain why they are worthwhile recipients of Title IV funds.

One downside to this approach is that PEP does not account for absolute differences in costs and earnings. Two institutions might exhibit the same ratio of costs to graduates’ projected earnings, but if one institution’s graduates earn substantially more, those students benefit from a better economic outcome after their initial investment is paid off.

Postsecondary Value Commission

The Bill and Melinda Gates Foundation also measures relative earnings through its Postsecondary Value Commission (PVC; https://postsecondaryvalue.org/). PVC compares a college’s graduates’ median earnings to four different earning averages within its home state. These are the averages, or earning benchmarks, that it uses:

  • minimum economic return: median high-school earnings (the same estimates used to calculate Third Way’s marginal earnings);

  • earnings premium: average earnings of all students at the same degree level, estimated at a similar age;

  • earnings parity: median earnings of white males at the same degree level. This metric indicates the degree to which the school elevates the earnings of students from underserved demographics; and

  • economic mobility: the fourth earnings quintile (60th–80th percentile earnings), indicating the extent to which a school produces graduates who are relatively affluent.

PVC’s earning benchmarks are based on the Census Bureau’s American Community Survey (ACS). The ACS is a Census Bureau survey sent to approximately 300,000 households each month. The survey collects information on educational attainment, demographics, income, and employment status, along with other social data.

For example, according to the ACS data cited by PVC, the average bachelor’s degree holder in California earns $64,802. This is approximately $20,000 more than the median earnings of students from CSUDH, the top performing 4-year college according to Third Way, painting the university in a slightly less flattering light.
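To make the benchmark comparison concrete, here is a minimal Python sketch of the kind of check PVC describes. Only the $24,820, $64,802, and $44,700 figures come from the examples above; the other benchmark values are placeholders, not PVC’s actual figures.

```python
# Hypothetical benchmark values for one state and degree level; PVC derives
# these from ACS data. The last two figures are placeholders for illustration.
benchmarks = {
    "minimum_economic_return": 24820,  # median high-school earnings (from the Third Way example)
    "earnings_premium": 64802,         # average bachelor's-degree earnings in California
    "earnings_parity": 70000,          # median earnings of white males at this degree level (placeholder)
    "economic_mobility": 75000,        # fourth-quintile earnings threshold (placeholder)
}

school_median_earnings = 44700  # CSUDH median earnings cited above

for name, benchmark in benchmarks.items():
    gap = school_median_earnings - benchmark
    status = "meets" if gap >= 0 else "falls short of"
    print(f"{name}: {status} the benchmark by ${abs(gap):,}")
```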

Postsecondary Value’s stated goal is to estimate the overall value that colleges contribute to the U.S. economy, rather than to individual students. Specifically, the method aims to identify how different institutions contribute to the social mobility of underserved demographic groups and to their communities more generally. However, their ability to achieve part of this mission—specifically the earnings parity benchmark—is restricted by limitations in the CSC data.

Higher education institutions and policy makers are the primary stakeholders who can use PVC’s analysis. PVC does not consider the real costs of different institutions, nor does it consider program-level variations in their economic returns. Differences by major are an important consideration for many students.

Degreechoices

The Degreechoices (DC) ranking methodology (https://www.degreechoices.com/methodology/) attempts to synthesize Third Way’s cost-to-earnings methodology with PVC’s earning-benchmark comparisons. Unlike PVC, DC constructs its benchmarks from the same CSC data and attempts to account for different earning rates across majors, geographies, and colleges. DC seeks to provide students with a measure of which colleges and programs serve them better economically by taking into account both relative costs and relative earnings.

In brief, here is an outline of the main differences between the DC methodology and those of PVC and Third Way:

  • Average completion time per school: The DC payback metric, like Third Way’s PEP, uses average completion times for each institution to calculate total cost rather than a single industry-wide average.

  • Program-level earnings: DC calculates weighted average earnings for each major across all colleges at both national and state levels.

  • Program adjustments: DC compares the earnings of each major at a particular school against program-level weighted average earnings.

  • Student body composition: DC compares the earnings of in-state students to state earnings averages and those of out-of-state students to national averages when constructing each program-level weighted average.

  • Institutional-level weighted averages: DC aggregates program-level performance, weighted by each program’s share of the institution’s overall program mix, into one comparative earnings metric called EarningsPlus. EarningsPlus shows the amount, in both dollars and percentages, by which a college or university over- or underperforms its representative benchmark.

Table 1 presents a random selection of CSUDH programs as an example.

Table 1. A Random Selection of CSUDH Programs as an Example

In this example, the weighted average EarningsPlus calculation would be

(620 × –$6,485 + 497 × –$1,281 + 195 × $13,834) ÷ 1,312 = –$1,493
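This weighted average can be expressed as a short Python sketch; the counts and EarningsPlus values follow the calculation above, and the variable names are ours.

```python
# (graduate count, EarningsPlus in dollars) for the three CSUDH programs in Table 1
programs = [(620, -6485), (497, -1281), (195, 13834)]

total_students = sum(count for count, _ in programs)      # 1,312 graduates
weighted_sum = sum(count * ep for count, ep in programs)  # dollar-weighted EarningsPlus

earnings_plus = weighted_sum / total_students
print(f"Institution-level EarningsPlus: {earnings_plus:,.2f}")  # about -1,493, matching the figure above
```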

DC combines Payback and EarningsPlus to create a single economic score that can be used to compare different schools’ performance at the program and institutional level. There are two important problems with this economic model:

  1. First, earnings are reported early in a student’s career. Certain majors perform better in the short term; others perform better over time. CSC-based rankings are therefore biased toward majors that perform well in the short term. Because CSC continues to track earnings for longer periods after graduation (generally adding an additional year of data each year, with 4 years postgraduation as the most recent point at the time of writing), this problem partially resolves itself over time.

  2. Second, it is difficult to calibrate objectively the relative importance of costs and earnings. While some individuals prioritize short-term cost effectiveness, others emphasize long-term earnings. The DC rankings lean toward cost effectiveness, as the cost-to-earnings ratio (payback) is balanced equally against relative earnings (EarningsPlus) without compounding ongoing earning differences over time.

Georgetown University Center on Education and the Workforce (CEW)

Georgetown University’s CEW ROI method (https://cew.georgetown.edu/wp-content/uploads/College_ROI.pdf) compounds earning differences going forward. It calculates the “net present value” (NPV) of student earnings at 5-year intervals, out to 40 years into a graduate’s career. NPV sums the total, at present value, that a school’s graduates’ median earnings represent over different time periods.

Students may decide for themselves which is more important—projected long-term earnings differences or shorter-term cost considerations. Short-term returns are heavily influenced by cost, but the model illustrates how even minor earning differences add up over the course of a career. In time, they offset and often surpass the initial economic value of cost disparities.

Net present value does not incorporate fluctuations in earnings over time; projected future earnings are locked into the 10-year postenrollment CSC earnings data. The figure is, however, discounted by 2 percent annually to account for the diminished present value of dollars the farther into the future they are calculated.

Using CSUDH as an example:

  • Total cost: $19,227

  • Annual earnings: $47,340

  • Calculation of NPV: $47,340/(1 + 0.02) + $47,340/(1 + 0.02)² + $47,340/(1 + 0.02)³ + …

As students are assumed to be studying and not earning during their studies, the NPV earnings only start from year 5. The 10-year NPV for CSUDH is $186,951, and the 40-year NPV is $1,073,973.
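The discounting logic can be sketched in Python as follows. This is our own minimal reconstruction under the stated assumptions (flat annual earnings of $47,340, a 2 percent discount rate, and no earnings during the first 4 years of study); CEW’s published figures also incorporate costs and other details not shown here, so the output will not exactly reproduce the $186,951 and $1,073,973 cited above.

```python
def npv_of_earnings(annual_earnings, horizon_years, discount_rate=0.02, study_years=4):
    """Present value of flat annual earnings, discounted yearly, earned only after study_years."""
    return sum(
        annual_earnings / (1 + discount_rate) ** year
        for year in range(study_years + 1, horizon_years + 1)
    )

# CSUDH-style example using the annual earnings figure from the text
for horizon in (10, 40):
    print(f"{horizon}-year NPV of earnings: ${npv_of_earnings(47340, horizon):,.0f}")
```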

This model assumes that careers generally exhibit growth over time, and that present higher earnings will correlate with higher earnings in the future. Rather than attempt to account for this growth, CEW uses shorter-term earning differences as a proxy for earning potential in general.

While the CEW method does not resolve the issue of short- versus long-term earnings performance, in our view, their net present value method is the best way to shift emphasis from cost to earnings.

The Foundation for Research on Equal Opportunity (FREOPP)

Unlike CEW, FREOPP’s (https://freopp.org/) lifetime ROI analysis attempts to account for long-term earnings performance. It goes to great lengths to estimate how the earnings of graduates from different majors increase over a career, based on estimated earnings, counterfactual earnings, and college costs. To do this, it must circumvent CSC’s timeframe limitations, which it attempts to do by augmenting the data with ACS age-level earnings figures.

There are many important differences between ACS and CSC earnings data. Most significantly, the specific cohorts are different. The CSC reports on students by institution and groups them by enrollment or graduation year when reporting earnings. ACS earnings data are reported by age and degree level. In other words, the data sets report on two different, although often overlapping, groups of people. In addition, CSC utilizes federal tax returns to compile earnings data, whereas ACS relies on surveys of representative populations. We argue that survey-based information is less reliable.

FREOPP links the 2-year postgraduate CSC earnings data to the ACS earnings cohort of 23- to 25-year-olds with the same major. It then determines the percentage by which each school’s median student over- or underperforms the ACS earnings.

This same under- or overperformance percentage is applied across the entire lifetime of ACS data, adjusting every ACS age/earnings bracket accordingly. The final output represents the lifetime earnings estimate for the median student in each program at each school.
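In rough terms, the extrapolation works like the following Python sketch. The ACS bracket figures and the school earnings value are entirely hypothetical placeholders used only to illustrate the mechanism.

```python
# Hypothetical ACS median earnings by age bracket for one major/degree level (placeholders).
acs_by_age_bracket = {
    "23-25": 40000,
    "26-30": 50000,
    "31-40": 62000,
    "41-50": 70000,
    "51-65": 72000,
}

csc_two_year_earnings = 44000  # a school's 2-year postgraduate CSC median (placeholder)

# Over- or underperformance relative to the matching ACS bracket...
performance_ratio = csc_two_year_earnings / acs_by_age_bracket["23-25"]  # 1.10, i.e., +10%

# ...applied uniformly to every later bracket to build a lifetime earnings profile.
lifetime_profile = {bracket: earnings * performance_ratio
                    for bracket, earnings in acs_by_age_bracket.items()}
print(lifetime_profile)
```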

In effect, FREOPP calculates the difference between a school’s graduates’ earnings and the ACS earnings average at 23 years of age, and then carries that difference forward for the entirety of a student’s career. After determining these earnings, FREOPP attempts to measure opportunity cost, which it calls counterfactual earnings: what students would otherwise have earned had they not gone to college (similar to the “marginal earnings” calculated by other methodologies).

It is relatively easy for CSC-based methodologies to measure marginal earnings because their timeframe is limited. As FREOPP calculates a lifetime ROI, it must extrapolate what students would have otherwise earned over the course of their career. This is necessarily a much more complicated and problematic exercise, which also has the unfortunate side effect of making the work extremely difficult, if not impossible, to replicate.

Among other issues, FREOPP’s counterfactual earnings estimates adjust for individual differences between students who choose to attend college and those who choose not to attend (based on the assumption that graduates would likely have earned more than the average high-school graduate even if they had not attended college). These differences include the following:

  1. ethnicity, race, and gender;

  2. geography (state, metropolitan/rural location); and

  3. “[c]ognitive ability, motivation, health, and family background.”

FREOPP then applies these adjustments to each school, based on geographic location and whatever disaggregated gender/racial earnings breakdowns are available via the Integrated Postsecondary Education Data System/CSC or, in the case of point 3, according to the NLSY97 survey (https://www.nlsinfo.org/content/cohorts/nlsy97).

These adjustments are then applied to the ACS high-school-level earnings data at each age bracket and deducted (along with a total cost amount) from total student earnings to arrive at a lifetime ROI. With each step, we move farther away from the original data. There are a number of problems with this approach. While each assumption by itself may seem reasonable, the end result recalls the Ship of Theseus paradox: after all these adjustments, it is questionable whether the same ship remains.

This is largely a result of working with two different datasets that measure two different cohorts. Even setting aside all these adjustments and their tricky, sticky details, a lifetime ROI analysis has at best a tenuous relationship with any individual student’s reality. The farther from the beginning of a career, for graduates and nongraduates alike, the more variables, life experiences, choices, opportunities, and failures intervene to shape earnings potential in ways that cannot be measured. We would also note that adjusting counterfactual earnings on the assumption that those who went to college would have earned more than the average high-school graduate had they not attended is, to us, an unnecessary and highly subjective assumption.

Table 2 presents calculations for the same CSUDH programs presented for Degreechoices (earnings data do not match because FREOPP uses data from 2020).

Table 2. Calculations for the Same CSUDH Programs Presented for Degreechoices

Final Thoughts

As this discussion demonstrates, CSC makes possible a new type of higher education analysis based on comparative earning data. These economic models should be used by students as one factor—a very important factor, but not the only one—in choosing a college and major.

While these methodologies continue to become more sophisticated, they are all limited by the quality, breadth, and inclusiveness of the available data, in terms of both limited time frames and insufficient data disaggregation. The challenge is for students and their families to become aware of these tools, while also understanding their limitations.

Disclosure Statement

No potential conflict of interest was reported by the author(s). 

Additional information

Notes on contributors

David Levy

David Levy was a cofounder of Degreechoices. Recently he has been spending much of his time developing and honing the economic valuation methodology used for ranking colleges and programs. David graduated in 2001 from Kenyon College in Gambier, Ohio, with a major in political science.

Harvey J. Graff

Harvey J. Graff is Professor Emeritus of English and History, inaugural Ohio Eminent Scholar in Literacy Studies, and Academy Professor at The Ohio State University. Author of many books, he writes about a variety of contemporary and historical topics for Times Higher Education, Inside Higher Education, Washington Monthly, Publishers Weekly, Against the Current, Columbus Free Press, and newspapers. Searching for Literacy: The Social and Intellectual Origins of Literacy Studies was published by Palgrave Macmillan in 2022; My Life With Literacy: The Continuing Education of a Historian: The Intersections of the Personal, the Political, the Academic, and Place (WAC and University of Colorado Press) is in press.