
Why Performance Information Use Varies Among Public Managers: Testing Manager-Related Explanations

Pages 174-201 | Published online: 28 May 2014
 

ABSTRACT

This article examines the use of performance information by public managers. It begins with a systematic review of the literature, which concludes that we know relatively little about the individual characteristics of managers who report frequent use of these data. Studies that have focused on people-related explanations have mostly tested socio-demographic variables but have found only inconclusive evidence. This article suggests theorizing more complex individual explanations. Drawing on middle-range theories from other fields, it speculates about the effects of thus far disregarded manager-related factors. An empirical test based on survey data from German cities yields the following preliminary findings: performance information use is explained by a high level of data ownership, creative cognitive learning preferences, the absence of cynicism, and a distinct public service motivation. Identity and emotional intelligence were found to be insignificant, as were the managers' socio-demographic characteristics.

Notes

Note: Significance levels were determined according to the two-tailed test logic.

Note: To provide a more illustrative overview of the descriptive statistics, all factors have been displayed as additive indices in the first three columns of this table.

*p < 0.05.

Notes: OLS Regression. Standardized beta coefficients are reported; p values in parentheses.

*Significant at 0.1; **Significant at 0.05. The tests are two-tailed.

A systematic review or “vote count analysis” is different from a statistical meta-analysis. The latter does not compare the number of positive, negative, and insignificant findings but estimates average effect sizes of the variables, weighted by their sample sizes. A meta-analysis is mainly applicable when a large number of studies exists, when they report standardized correlational information for all variables used, and when they have applied similar research designs. The following paragraphs in this section will show that these requirements are not fulfilled by the studies under investigation.
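To make the contrast concrete, here is a minimal Python sketch of what a vote count does: it tallies the direction and significance of reported findings rather than pooling effect sizes. The variable names and vote patterns below are purely illustrative, not the article's data.

```python
from collections import Counter

# Hypothetical illustration: each list holds one entry per study that
# tested the variable ("+" significant positive, "-" significant negative,
# "0" insignificant). Variable names and votes are made up.
findings = {
    "measurement_system_maturity": ["+", "+", "+", "0"],
    "organizational_size": ["+", "0", "-", "0", "0"],
}

for variable, votes in findings.items():
    tally = Counter(votes)
    # A vote count compares these raw tallies; a meta-analysis would
    # instead pool effect sizes weighted by sample size.
    print(variable, dict(tally))
```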

In a first step, 27 studies were selected. In a second step, three studies were excluded from the sample because they did not disclose all operationalizations used or relied on rather idiosyncratic explanatory factors. To avoid overrepresenting singular data sets, only one regression model per publication (if several had been presented) was included in the review. I always selected the model that took the most control variables into account and whose dependent variable came closest to my presented understanding of purposeful information use.

Since some studies reported only the R² or only the adjusted R², my estimation took both indicators into account. In cases where both measures were reported, their mean was used.
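As a sketch, this rule amounts to the following (the helper name and example values are hypothetical):

```python
def explained_variance(r2=None, adj_r2=None):
    """Use R-squared, adjusted R-squared, or their mean, depending on
    what a study reports (hypothetical helper)."""
    values = [v for v in (r2, adj_r2) if v is not None]
    return sum(values) / len(values)

print(explained_variance(r2=0.31))               # only R-squared reported -> 0.31
print(explained_variance(r2=0.31, adj_r2=0.27))  # both reported -> mean = 0.29
```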

I consider a “vote count” as clear-cut if a variable has been tested at least four times, three out of four studies reported the same significant effect, and the fourth study did not entirely contradict the findings of the others. The results are regarded as even more definite the more often a factor has been tested and shown the same effect.
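The decision rule can be expressed as a short Python sketch. Note that generalizing the 3-out-of-4 threshold to a three-quarters share is my assumption; the note itself only spells out the four-study case.

```python
from collections import Counter

def is_clear_cut(votes):
    """votes: one entry per study ("+", "-", or "0"). Extending the
    3-out-of-4 threshold to a three-quarters share is an assumption."""
    if len(votes) < 4:  # must be tested at least four times
        return False
    tally = Counter(votes)
    for sign, opposite in (("+", "-"), ("-", "+")):
        # at least three quarters of the studies agree on a significant
        # effect, and no study reports the opposite significant effect
        if tally[sign] >= 0.75 * len(votes) and tally[opposite] == 0:
            return True
    return False

print(is_clear_cut(["+", "+", "+", "0"]))  # True: clear-cut positive effect
print(is_clear_cut(["+", "+", "-", "0"]))  # False: contradicted by one study
print(is_clear_cut(["+", "+", "+"]))       # False: tested fewer than four times
```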

More mature measurement systems provide a greater range of information, align their reporting to the demands of the addressees, and link information to goals and strategic plans. Though I coded information availability as a sub-item of “measurement system maturity,” two studies treated both variables as conceptually different and tested both factors. Hence, “information availability” appears as a separate variable in Table .

The sample seems to represent the population quite well. For example, larger cities as well as the different divisions are neither over- nor underrepresented (p > 0.05). Only one division is slightly overrepresented, but further correlational analysis revealed that the respondents from this division did not evaluate their use of performance information differently from all the other respondents (p > 0.05). Another concern could be that managers who are more enthusiastic about using performance data might be more likely to participate in the survey and thus could jeopardize the internal and external validity of this article's results. One way to examine such a bias is to compare differences between early and late respondents, assuming that if the former report more extensive data use than the latter, the same problematic pattern could be expected between respondents and non-respondents more broadly (Groves 2006). However, participants of the first wave did not report a significantly different use of performance information than managers who only responded after the second, third, or fourth reminder (p > 0.05). Furthermore, the results of the fully controlled model in Table remain the same, even when we additionally control for differences between early and late survey participants.
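The early-versus-late comparison can be illustrated with a simple two-sample test. The article does not state which test was used, so the Welch's t-test below, along with the synthetic scores and group sizes, is only an assumed illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic self-reported use scores (e.g., an additive index of the six
# survey items) for first-wave vs. post-reminder respondents; the group
# sizes, means, and the choice of Welch's t-test are assumptions.
early = rng.normal(loc=3.4, scale=0.8, size=120)
late = rng.normal(loc=3.3, scale=0.8, size=90)

t_stat, p_value = stats.ttest_ind(early, late, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # p > 0.05 -> no detectable difference
```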

Please note that this is only a theoretical consideration about the population of interest, not the sample. No cases were dropped or excluded. Also, these 36.6% of non-respondents are not the “non-users” of performance data but the “non-collectors.” Non-use is captured through six survey items that measured responses ranging from using performance information “never ever” to “very often” for different purposes.

Performing factor analysis on a single latent variable's empirical indicators to ensure convergent validity is particularly useful when the number of indicators is small and the exact weight of every individual indicator thus matters a great deal for the composition of the overall factor. Factor analysis was furthermore used to ensure discriminant validity among all manager-related variables. As can be seen in the results section, for the latter purpose principal component factor (PCF) analysis was performed on all the indicators of several different factors.
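As an illustration of the discriminant-validity check, here is a minimal sketch that extracts principal components and inspects the loadings. The synthetic data, the number of components, and the 0.4 loading heuristic are my assumptions, not the article's exact procedure:

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic stand-in for the survey data: rows = respondents, columns =
# indicators of several manager-related constructs (all values made up).
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 12))

pca = PCA(n_components=4)
pca.fit(X)
# Convert components to loadings (indicator-component correlations).
loadings = pca.components_.T * np.sqrt(pca.explained_variance_)
# Discriminant validity: each indicator should load highly on exactly one
# component and weakly (a common heuristic: below 0.4) on all others.
print(np.round(loadings, 2))
```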

The axes for grasping and transforming were generated by subtracting the ranking scores of the opposing ways of learning from each other. The four cognitive styles—doer, creator, decision maker, and planner—were generated by combining the grasping and transforming axes in different ways. For example, to be counted as a planner, a participant had to prefer thinking (T) over experiencing (E) on the grasping axis and observation (O) over action (A) on the transforming axis. Both axes were then combined into a joint scale with the opposing endpoints planner and doer by adding the difference scores of the two axes and dividing them by two: ((T − E) + (O − A))/2.
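The note's formula translates directly into code. A minimal sketch (the function name and example rankings are hypothetical):

```python
def cognitive_style_scale(t, e, o, a):
    """Joint planner-doer scale from the note: ((T - E) + (O - A)) / 2.
    T/E are ranking scores for thinking vs. experiencing (grasping axis),
    O/A for observation vs. action (transforming axis). Positive values
    lean toward "planner", negative toward "doer"."""
    grasping = t - e        # thinking minus experiencing
    transforming = o - a    # observation minus action
    return (grasping + transforming) / 2

# Hypothetical respondent who ranks thinking and observation highest:
print(cognitive_style_scale(t=4, e=1, o=3, a=2))  # 2.0 -> planner side
```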

Additional information

Notes on contributors

Alexander Kroll

Alexander Kroll ([email protected]) is an Assistant Professor of Public Administration at Florida International University. His research interests include organizational effectiveness, employee behavior, and particularly the roles of performance information, strategy, leadership, and motivation. His research has been published (or is forthcoming) in journals including the American Review of Public Administration, Public Administration, and Public Administration Review.
