Abstract
Online fiscal transparency describes a government’s practice of disseminating financial information on its websites. In this research, we apply the Technology-Organization-Environment (TOE) framework to examine the determinants of online fiscal transparency in U.S. states. Using a panel dataset of all 50 U.S. states from 2010 to 2016, we find that the overall development of digital government is positively related to online fiscal transparency. We also find that short-term financial resources, as measured by the annual budget surplus, enable state governments to develop better online fiscal transparency programs, while long-term financial condition, as measured by the accumulated fund balance, is negatively related to online fiscal transparency. Citizens’ education level, voter turnout rate, Internet infrastructure, and state legislative requirements to disclose financial information via the Internet are all positively related to the development of transparency websites.
Acknowledgments
The authors would like to thank Professor Jonathan Justice of University of Delaware for his valuable comments on this paper.
Notes
1 In the TOE literature, “Internet infrastructure” is described as both a technological factor and an environmental factor (Oliveira & Martins, Citation2011), although in DePietro et al.’s (Citation1990) original work, “technology support infrastructure” was categorized as an environmental factor.
2 One caveat of using this index is that the evaluation criteria and point allocations underwent minor changes from 2010 to 2014; both have remained consistent since 2014. The changes to the evaluation criteria were necessary to reflect “rising standards for government transparency and new frontiers of transparency best practices” (Davis & Baxandall, Citation2014, p. 23). Different evaluation criteria and point allocations, however, can weaken the robustness of our empirical analysis. We adopted the following strategies to address this issue. First, we selected five evaluation criteria that remained consistent from 2010 to 2016. Second, we calculated a standardized score for each year based on the 2016 point allocation for these five criteria. For instance, the 4 points awarded for disclosing off-budget agencies in 2013 were inflated to 6 points under the 2016 grading scheme. Third, we checked whether the regression models yield estimates consistent with the original analysis when the dependent variable – Online_Fiscal_Transparency – is replaced with the standardized online fiscal transparency score. The standardized score and the original dependent variable have a strong positive association (r = .9110, p < .001), and the regression results with the standardized dependent variable are consistent with the original analysis.
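The rescaling and robustness check described in this note can be sketched as follows; this is a hypothetical illustration, not the authors’ actual code, and the function names, point values, and sample scores are invented for the example (only the 4-point-to-6-point off-budget-agency rescaling is taken from the note):

```python
# Sketch of note 2's standardization: scores under an older grading
# scheme are rescaled to the 2016 point allocation, and the rescaled
# series is compared with the original via Pearson's r.

def standardize(points_awarded, old_max, new_max):
    """Rescale a criterion score from an older point allocation to the
    2016 allocation (e.g., 4 of 4 points in 2013 becomes 6 of 6)."""
    return points_awarded * new_max / old_max

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient between two series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# The example from the note: 4 of 4 points for disclosing off-budget
# agencies in 2013, inflated to the 2016 scheme's 6-point maximum.
print(standardize(4, old_max=4, new_max=6))  # 6.0
```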
3 The Annual Surveys of State and Local Government Finances conducted by the U.S. Census Bureau defines revenue as “all amounts of money received by a government from external sources—net of refunds and other correcting transactions—other than from issuance of debt, liquidation of investments, and as agency and private trust transactions.” Expenditure is defined as “all amounts of money paid out by a government—net of recoveries and other correcting transactions—other than for retirement of debt, investment in securities, extension of credit, or as agency transactions.” (U.S. Census Bureau, Citation2018).
4 The New York State Comptroller’s Office explains the difference between fund balance and budget surplus for financial condition analysis: “The fund balance is the total accumulation of all operating surpluses and deficits since the beginning of a local government’s or school district’s existence. Each year’s operating surplus or deficit is added to or subtracted from the prior fund balance.” (Office of the New York State Comptroller, Citation2008, p. 16).
5 We have also run tests to check for heteroscedasticity and autocorrelation. Test results suggest that we need to use clustered standard errors.
6 We ran a Chow test to decide whether state fixed effects should be controlled for, and a Hausman test to determine whether a random-effects estimator is more appropriate than a fixed-effects estimator. Results from both tests suggest that a fixed-effects model should be used.
7 Test results suggest that GMM-style lagged dependent variables are valid instruments for this model.
8 Grades A, A−, B+, B, B−, C+, C, C−, D are converted to the numbers 8 to 0, respectively. The CDG survey is conducted every other year. For years when the survey is not conducted, the Digital_government score is the average of the previous year and the following year.
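The grade coding and off-year averaging in this note can be sketched as below; this is an illustrative reconstruction under the note’s stated rules, and the sample grades and years are invented:

```python
# Sketch of note 8: CDG letter grades map to integers 8..0, and a
# non-survey year takes the mean of the adjacent survey years.

GRADE_POINTS = {"A": 8, "A-": 7, "B+": 6, "B": 5, "B-": 4,
                "C+": 3, "C": 2, "C-": 1, "D": 0}

def digital_government_scores(grades_by_year):
    """grades_by_year maps year -> letter grade, or None for a year
    with no survey; missing years are averaged from their neighbors."""
    scores = {year: GRADE_POINTS[g]
              for year, g in grades_by_year.items() if g is not None}
    for year, g in grades_by_year.items():
        if g is None:
            scores[year] = (scores[year - 1] + scores[year + 1]) / 2
    return scores

# Example: a state graded B+ in 2012 and A- in 2014, no 2013 survey.
print(digital_government_scores({2012: "B+", 2013: None, 2014: "A-"}))
```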
9 The NASBO fall fiscal survey is used for all years except fiscal year 2016, when only the spring report is available. Fiscal 2015 figures are actual, fiscal 2016 figures are preliminary actual, and fiscal 2017 figures are enacted.
10 Data for debt level, subsidies level, and educational attainment are not available for 2016. We impute the 2016 value as the 2015 value plus the change from 2014 to 2015.
11 From 2010 to 2015, data were retrieved from reports that measured Internet access service at the end of each year; the 2016 report measures it in the middle of the year. Data for 2014 are missing. We impute the 2014 value as the average of the 2013 and 2015 subscribership ratios.
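The two imputation rules in notes 10 and 11 amount to linear extrapolation and linear interpolation, respectively. A minimal sketch, with invented illustrative values (the function names are not from the paper):

```python
# Sketch of notes 10 and 11: the missing 2016 value is the 2015 value
# plus the 2014-to-2015 change (extrapolation); the missing 2014 value
# is the mean of the 2013 and 2015 values (interpolation).

def impute_2016(v2014, v2015):
    """2016 value = 2015 value + (2015 - 2014) change (note 10)."""
    return v2015 + (v2015 - v2014)

def impute_2014(v2013, v2015):
    """2014 value = average of the 2013 and 2015 values (note 11)."""
    return (v2013 + v2015) / 2

# Illustrative inputs only, e.g., a debt level series and a
# subscribership ratio expressed in percent.
print(impute_2016(10.0, 12.0))   # 14.0
print(impute_2014(70.0, 76.0))   # 73.0
```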
Additional information
Notes on contributors
Gang Chen
Gang Chen is an Assistant Professor at the Rockefeller College of Public Affairs and Policy, University at Albany, SUNY. His research interests include public budgeting and finance, public pension systems, disaster management, and fiscal transparency.
Hyewon Kang
Hyewon Kang is a doctoral student at the Rockefeller College of Public Affairs and Policy, University at Albany, SUNY. Her research interests include fiscal transparency and public financial management.
Luis F. Luna-Reyes
Luis F. Luna-Reyes is an Associate Professor in the Department of Public Administration and Policy. He is also a Faculty Fellow at the Center for Technology in Government, a Research Affiliate at the Universidad de las Americas, Puebla, and a member of the Mexican National Research System. His research is at the intersection of Public Administration, Information Systems, and Systems Sciences.