
Can the Current Ministry of Defence Performance Management Regime Cope With Cognitive Effects?

Pages 323-345 | Published online: 17 Jul 2006

‘If you can’t measure it, you can’t manage it.’ Anon.Footnote 1

A plethora of government and departmental initiativesFootnote 2 over recent years have led to a huge convergence of commercial and public sector practices. In the Ministry of Defence’s case, this has promoted ‘joined up working’ and improved effectiveness.Footnote 3 Central to this approach is performance management, which not only ‘stands behind the creation of targets, contracts and agreements that control service delivery’,Footnote 4 but also helps ‘…departments to develop policy, to manage their resources cost‐effectively, to improve effectiveness and to report their performance to Parliament and the general public’.Footnote 5 This all‐embracing role – at the centre of government strategy – is consistent with the importance placed on performance management in industry. In both sectors, the ‘Balanced Scorecard’ has become the tool of choice for performance management regimes.

The Balanced Scorecard is an impressive concept. Its utility is not limited to the measurement of performance; it also – as the name suggests – examines organisational balance and is central to both the derivation and execution of strategy. It is unsurprising, therefore, that it has found such popularity in industry or that the public sector has been so keen to adapt and integrate it into its management practices. The Ministry of Defence (MOD) adopted the ‘Defence Balanced Scorecard’ in April 2000,Footnote 6 tailored from the commercial version and using four bespoke perspectives – ‘Output/Deliverables’, ‘Enabling Processes’, ‘Resource Management’ and ‘Building for the Future’.Footnote 7 As these perspectives suggest, there is commonality between the public and private sector: the setting of strategy, the execution of strategy, the enabling processes and organisational needs are all similar if not the same.

Despite this commonality, the Defence Balanced Scorecard has been specifically adjusted with the aim of meeting the unique requirements of Defence. It has now been used as the basis of four Departmental Performance Reports, is in mandatory use to two levels below the Defence Management Board (DMB) and is ‘rapidly becoming the preferred performance management tool at all levels’.Footnote 8 The challenge for Defence has always been in quantifying the department’s deliverables. This is, of course, a necessarily subjective assessment, as Defence lacks the easily quantifiable financial output figures that any commercial organisation would have.

There is the additional complexity of measuring the deterrent effect of maintaining capable forces at a degree of readiness for operations. The Defence Balanced Scorecard addresses this by aggregating ‘performance on operations’ with ‘readiness’ and ‘policy’ to determine the outputs – which are termed ‘what the Government expects’.Footnote 9
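The published material does not specify how this aggregation is computed, so the following sketch is purely illustrative: the sub-measure names echo the text, but the weights, the 0–100 scoring scale and the red/amber/green thresholds are assumptions made for the sake of the example.

```python
# Illustrative only: the MOD does not publish its aggregation rules, so the
# weights, the 0-100 scoring scale and the traffic-light thresholds below
# are assumptions made for the sake of the sketch.

def aggregate_output(scores, weights):
    """Combine sub-measure scores (0-100) into one weighted output score."""
    total_weight = sum(weights.values())
    return sum(scores[k] * weights[k] for k in scores) / total_weight

def traffic_light(score):
    """Map a numeric score onto a red/amber/green reporting convention."""
    if score >= 75:
        return "green"
    if score >= 50:
        return "amber"
    return "red"

# 'What the Government expects': operations, readiness and policy rolled up.
scores = {"operations": 80.0, "readiness": 65.0, "policy": 70.0}
weights = {"operations": 0.5, "readiness": 0.3, "policy": 0.2}

overall = aggregate_output(scores, weights)
print(overall, traffic_light(overall))  # 73.5 amber
```

Whatever the weights chosen, the roll-up collapses qualitatively different judgements – combat performance, readiness, policy delivery – into a single monochrome figure, which is precisely the limitation at issue here.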

On the surface, this appears to be a tidy and sensible adaptation of the commercial tool to public sector use: it represents a logical, credible and proven regime that helps facilitate the efficient running of a department – particularly within a framework of ‘top‐down’ scrutiny by the Cabinet Office or Treasury. It is this very facet, however, that means the Defence Balanced Scorecard struggles to properly account for the largely ‘bottom up’Footnote 10 effects that the Armed Forces deliver on operations. While not its only output, these effects are very much the raison d’être of the Department. The difficulty lies in the fact that the success – or otherwise – of military effect is wedded less to efficiency than to effectiveness. The net result is that operational output cannot easily be shoehorned into a monochrome performance measure such as ‘degree of operational success….based on the Armed Forces’ performance in the theatre of operations’Footnote 11 and still be meaningful.

In part, this misalignment results from a divergence between the MOD’s business strategy and its warfighting strategies. This stems from the MOD’s schizophrenic existence as both a military headquarters and a department of state. One part of the organisation is pursuing a business strategy adapted from industry while the other is embarking on a doctrine using Network Enabled Capability (NEC) to deliver a set of ‘Strategic Effects’Footnote 12 in a way that has no genuine precedent in the commercial sector. The Effects Based Approach (EBA)Footnote 13 sees these ‘bottom up’ military effects acting in concert with effects derived from other instruments of national power with an ‘outcome’, rather than an ‘activity’ focus in influencing the ‘will, understanding and capability of adversaries’.Footnote 14

It is the recognition of the complexity of contemporary military operations and the fact that the Armed Forces, seamlessly orchestrated with the other instruments of national power, do not just deliver the capability to ‘Destroy’ or ‘Defeat’, but also have utility in ‘Preventing’, ‘Stabilising’, ‘Containing’, ‘Deterring’, ‘Coercing’ and ‘Disrupting’Footnote 15 that renders existing simplistic performance measures inadequate if not irrelevant. The root of this lies in the realisation that armed force not only delivers physical but also cognitive effects.Footnote 16

There is also a resource complication here, in that cognitive effects can potentially be achieved without the direct application of military resources – although those resources are required to underwrite the effect with a physical option should the cognitive effect fail. While the private sector may also deliver cognitive effects through marketing, there are two fundamental differences; first, ‘brand’ is not the same as credibility and, second, in commerce the objective end state is tangible – a purchase and a receipt.

Military cognitive effects are far from tangible; they may be second, third or fourth order, and attribution could be difficult as they may be used in conjunction with effects derived from other instruments of power. Furthermore, they cannot be measured easily through self‐examination but must be measured by target response;Footnote 17 this in turn requires measures of effectiveness based on intelligence about the target’s behaviour rather than subjective judgement based on internal datasets. This issue brings sharply into focus the incompatibilities that exist between some business models and Defence. Running a business and conducting a military campaign cannot necessarily be templated in the same way. The cognitive aspects of the Effects Based Approach simply do not fit within a commercial model; thus the very existence of this doctrine represents a potential fissure between the ‘Business Space’Footnote 18 and the ‘Battle Space’.

This article will argue that the current MOD performance management regime is incapable of coping with the complexity of the emerging doctrine of cognitive effect. In developing this argument, it is necessary first to examine commercial performance management practices and how MOD policy has been derived to utilise them – characterising the business space. The emerging strategic doctrine of the battlespace will then be scrutinised, concentrating on the strategic effects framework and the application of cognitive effects. The two initiatives will then be juxtaposed, to illustrate that the current performance management regime is incapable of handling developing doctrine and why this is so. This will allow the key challenges to be identified and policy recommendations to be made.

Characterising the Business Space – Commercial Performance Management Practice

Since the term ‘performance management’ was first coined by Beer and Ruh in 1976,Footnote 19 and particularly over the last 10–15 years, there has been an increasing realisation in the private sector that it is a more complex phenomenon than it first appeared to be. The difficulty is that there are two distinct types of performance: organisational – ‘how the organisation is doing today’ – and strategic – ‘how the organisation will perform on its strategic journey’. Britten uses a useful analogy for this: on a fictitious car journey, organisational performance ‘is analogous to speed, fuel consumption and the state of the car’ whereas strategic performance is ‘like knowing you need a new map before you are lost’.Footnote 20 The traditional fixation on Key Performance Indicators (KPIs) not only centred too much on organisational performance, but also, by doing so, caused strategic paralysis by focusing management on the current balance sheet – to the detriment of future strategy.

This myopic over‐reliance on summary financial‐performance measures – mainly addressing organisational performance – was the catalyst behind the search for a system that better facilitated the creation of future economic value through strategic performance management. In 1990, a study entitled ‘Measuring Performance in the Organization of the Future’ remedied this situation – to a degree – by identifying the embryonic ‘Corporate Scorecard’, which included non‐financial measures of performance. It was, however, the multidimensionality of the ‘Scorecard’ construct that allowed it to be developed into a ‘financial, customer, internal and innovation & learning’ format, in turn allowing balance between long and short‐term objectives; thus, the ‘Balanced Scorecard’ was spawned.Footnote 21

Despite the intention that the Balanced Scorecard would give a better understanding of an organisation’s performance, it soon grew beyond this. The holistic analytical framework delivered by the system, together with its innate simplicity and flexibility, soon caused business leaders to recognise its utility as a core management tool: capable of ‘individual and team goal setting, compensation, resource allocation, budgeting and planning, strategic feedback and learning’.Footnote 22 This list omits the overriding – and perhaps most important – contribution the Balanced Scorecard can make to an organisation: setting strategic direction and identifying strategic principles, thus avoiding strategic ‘drift’.Footnote 23 The antithesis of ‘drift’ is ‘strategic deployment’ – the journey from now to the strategic objective.Footnote 24

It is the centrality of dynamic vision and strategy in the Balanced Scorecard that has led to its huge success and the trend for organisations that use it to be labelled ‘strategy focused’.Footnote 25 That is not to say that other organisations do not have a strategy; Kaplan and Norton’s key argument is that the Balanced Scorecard allows an organisation to prosecute strategy, and that this is more important than the strategy itself. This is based on evidence that less than 10 per cent of effectively formulated strategies are implemented.Footnote 26 The holistic nature of the Balanced Scorecard also lends itself well to vertical and horizontal integration within an organisation and therefore can deliver better focus and direction. At the strategic level, the Balanced Scorecard is as good a means as any of setting the course for an organisation and getting there. One could even argue that it simply brings together a raft of strategic management theory in a more easily consumable construct.

Likewise, when its component parts are dissected, there is a huge amount of flexibility in terms of both interpretation and implementation, and many of the principles are not new. Kaplan and Norton’s ‘Internal‐Business‐Process Perspective’Footnote 27 is essentially concerned with the quest for efficiency. Admittedly, it examines the issue by process rather than through structures, but this is hardly new; ever since matrix managementFootnote 28 was first espoused – well before the Balanced Scorecard – a process view had been used. Nor, for that matter, is the Balanced Scorecard unique in providing a framework for internal rationalisation and efficiency: the European Foundation for Quality Management (EFQM),Footnote 29 the International Organization for Standardization’s ISO 9000,Footnote 30 Six SigmaFootnote 31 and the Japanese KaizenFootnote 32 concept all provide equally popular solutions. The advantage the Balanced Scorecard approach has is that its holistic nature allows other tools, such as these, to be included within its framework.

The ‘Learning and Growth Perspective’Footnote 33 is also generic and allows the grouping of other initiatives. Its advantage over other systems is that it puts these initiatives – largely long term – into balance with other investments, thus avoiding a myopic approach to spending. Three generic groupings are cited for this perspective: employee capabilities, information system capabilities and motivation/empowerment/alignment.Footnote 34 This perspective is fundamental in achieving balance within an organisation and it underpins the other three perspectives. This is the real ‘value added’ of the Balanced Scorecard compared with its forerunners; it balances the health of the organisation with its current position – which is shown in the ‘Financial Perspective’.Footnote 35 This final perspective examines the resource – the revenue and assets – and is the most traditional part of the Balanced Scorecard.

These perspectives have been deliberately discussed out of order. In reality, the Financial Perspective is the natural start point – the ‘what’s in the cupboard?’ This should be followed by the ‘Customer Perspective’,Footnote 36 but this has been left to last as it is both the most complex of the perspectives and the least generic when the Balanced Scorecard is applied to the public sector. Kaplan and Norton suggest that the ‘core measurement group of customer outcomes’ is ‘generic across all organisations’, but when the measures are examined it becomes clear that they are inapplicable to the public sector. They are: market share, customer retention, customer acquisition, customer satisfaction and customer profitability.Footnote 37 Whereas the other three perspectives are sufficiently generic, this perspective poses significant problems in its application to the public sector.

The Imposition of the Business Space in the Public Sector

The challenge of translating the Customer Perspective to the public sector does not deter the Balanced Scorecard’s creators from claiming that ‘the opportunity for the scorecard to improve the management of governmental enterprises [versus private companies] is, if anything, even greater’.Footnote 38 The cynic would suggest this assertion might be largely driven by the potential profits to be made by broadening the utility of the tool as much as possible.

In fairness, however, the potential problems are not insurmountable in most public sector applications. For example, in the healthcare sector, a hospital or trust could use its customer satisfaction, complaints, waiting lists and timeliness of discharge as measures of output.Footnote 39 Furthermore, the majority of organisational processes, ranging from training through procurement, are common to both the public and private sector.

Some of these problems in public sector management reside in the Balanced Scorecard’s context. For instance, for a variety of personal and political reasons, ministers may not have the same interest in their department’s performance – or aspects of it – that a shareholder may.Footnote 40 Even if this is not the case, the incentives available to senior government officials may not be sufficient to genuinely encourage good departmental performance.Footnote 41 There is a counter argument to this assertion, based on the performance‐related financial remuneration of senior military officers and civil servants; but, however it is spun, public sector officials are not subject to the same cut‐throat ‘do or die’ incentives as their private sector equivalents. Either way, the context of a public sector Balanced Scorecard differs from that of a private sector one, and this must be considered when analysing other aspects of public sector performance management.

The principal difficulty in public sector application of the Balanced Scorecard is that of attribution. Whereas business generally operates well‐defined Strategic Business Units (SBUs), the UK Government policy of ‘joined‐up government’Footnote 42 specifically aims to break down barriers between government departments. While this policy is laudable in terms of overall efficiency and effectiveness, it makes outcome and resource attribution extremely problematic. This is less the case for distinct governmental processes such as revenue collection, and agency status overcomes the problem to a large degree. The real difficulty occurs when government applies its full spectrum of capability – as it does when warfighting. In this particular case, ‘joined‐up government’, if exercised properly, is diametrically opposed to effective outcome and resource attribution at departmental level. This is an irreconcilable quandary at departmental level, but ironically could be overcome by joined‐up cross‐departmental, outcome‐based performance assessment.

This issue warrants the further dissection of the exact nature of outputs and outcomes. The former is conventionally used in commerce, whereas the spirit of ‘joined‐up government’ demands the latter. There is very little literature that covers this degree of complexity in public sector performance management; however, a useful definition of ‘output’ and ‘outcome’ in this context is ‘Outputs are goods and services produced by departments; outcomes are consequences for the community [the target audience] of the interventions of government.’Footnote 43 This concept is clearly illustrated by the National Audit Office model of Inputs, Outputs and Outcomes shown at Figure 1.

FIGURE 1 THE RELATIONSHIP BETWEEN INPUTS, OUTPUTS AND OUTCOMES


This figure crystallises the foremost problem of public sector performance management by showing that outputs, through effectiveness, create outcomes in conjunction with other influences. The Balanced Scorecard extends from the left of this model to the output stage, but does not address outcome. This is because the commercial sector generally deals with sales‐based output measures and is less concerned with outcomes in the sense meant here. Additionally, the potential complexity of attribution of outcome would unnecessarily overcomplicate a simple and robust Balanced Scorecard that works well at the level for which it is designed.
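The chain in Figure 1 can be made concrete with a toy model. Only the input→output→outcome structure comes from the NAO model; the functional forms – a multiplicative effectiveness term and an additive ‘other influences’ term – and all the numbers are illustrative assumptions.

```python
# Toy model of the NAO Inputs -> Outputs -> Outcomes chain (Figure 1).
# The functional forms and numbers are illustrative assumptions only.

def efficiency(outputs, inputs):
    """Efficiency relates the outputs produced to the inputs consumed."""
    return outputs / inputs

def outcome(outputs, effectiveness, other_influences):
    """Outcomes flow from outputs via effectiveness, combined with factors
    the department does not control - the attribution problem in the text."""
    return outputs * effectiveness + other_influences

# Identical departmental outputs, divergent outcomes, purely because of
# external influences: this is why outcome-based targets resist attribution
# at departmental level.
print(outcome(100.0, 0.6, 20.0))   # 80.0
print(outcome(100.0, 0.6, -20.0))  # 40.0
```

The sketch makes the attribution point starkly: two departments producing identical outputs with identical effectiveness can still report very different outcomes, for reasons entirely outside their control.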

This limitation of the Balanced Scorecard varies from department to department. For instance, in health, the number of coronary by‐pass operations – as an output – might directly link with the outcome of fewer heart attacks. Similarly, an employment agency’s output of information about job vacancies may link directly to the outcome of less unemployment.Footnote 45 For the MOD, Foreign and Commonwealth Office (FCO) and Department for International Development (DfID) in particular, the Balanced Scorecard falls short as these departments are all attempting to create outcomes or effects within a multifarious international system centred on a target or adversary that may not operate rationally. In this context, measurement and attribution are clearly challenging, and the emphasis in delivery is founded in effectiveness rather than efficiency. This raises cultural difficulties between the traditional civil servant whose focus tends towards efficiency, and servicemen and diplomats whose primary concern is effectiveness in the outcome they are seeking to achieve.

Frock Coats and Brass Hats – Performance Management in the MOD

Nowhere is this problem as acute as in defence, where the department of state and the military headquarters coexist as deliverers of efficiency and of effect respectively. The problem is not new. The desire to mount a huge frontal offensive during the Great War was viewed by Field‐Marshal Sir Douglas Haig as a necessary military effect, but by his civil masters as an unnecessary committal of resources – hence the term ‘frock coats and brass hats’.Footnote 46 Yet despite these potential cultural difficulties, the Balanced Scorecard sits ‘….at the heart of the MOD’s Performance Management regime’ and ‘is used to communicate the Defence Management Board’s priorities and forms the framework for planning across the Department’.Footnote 47

The level of buy-in and perception of the success of the Defence Balanced Scorecard is such that it is ‘firmly embedded across the whole Department’ and the MOD is ‘…considering its potential for selling into wider markets as a commercially available management tool’.Footnote 48 The National Audit Office (NAO) joined in the acclaim of the Defence Balanced Scorecard, crediting it with providing ‘a clear focus on Departmental priorities’ and ‘aligning effort at all levels behind the Department’s strategic objectives’.Footnote 49 The MOD was even awarded the ‘Balanced Scorecard Collaborative Hall of Fame’ status in June 2002Footnote 50 – an accolade of which the department is rightly proud. The Defence Balanced Scorecard model can be seen at Figure 2.

FIGURE 2 THE DEFENCE BALANCED SCORECARD


The construct of the Defence Balanced Scorecard owes much to the fact that it was created using a ‘peer group approach’ rather than engaging professional consultants.Footnote 52 This is an interesting aspect of its genesis, which is important when we come to consider its limitations. As can be seen in Figure 2, it has four perspectives – in common with the original Balanced Scorecard template. A number of strategic objectives reside within the perspectives, which, in turn, are subdivided into performance measures.Footnote 53 The output/deliverables perspective, which is the subject of this argument, is the equivalent of the Customer Perspective in Kaplan and Norton’s original concept.Footnote 54

The catalyst behind the adoption of the Defence Balanced Scorecard was the ‘modernising government’ initiative, which mandated all departments to deliver results through public service agreements.Footnote 55 The then Permanent Under Secretary – Sir Kevin Tebbit – decided to embrace the 1997 ‘modernising government’ initiative by expanding the role of the Defence Management Board (DMB) to include ‘the strategic management of performance in all its dimensions’. Hitherto, the DMB had traditionally focused on ‘financial planning and management’Footnote 56 and unsurprisingly, therefore, fixated on the issue of ‘whether outputs could be defined and was there any clarity of what constituted success in delivering them’.Footnote 57

This output‐centric approach, while perfectly good for examining the department’s Resource Management and Enabling Processes – very much the civil servant’s domain – dodges the question when it comes to outcomes. The NAO’s acclaim for the Defence Balanced Scorecard is startling in this respect as they also espouse the importance of ‘the application of outcome‐based targets’,Footnote 58 which the Defence Balanced Scorecard singularly fails to address, owing to its fixation with outputs. This does not mean the Defence Balanced Scorecard is universally lacking – far from it – it simply emphasises that its optimisation towards output means it cannot cope so well with outcome. The dilemma borne of this analysis is that strategic effects are all about cross‐governmental outcomes, not departmental outputs.

The NAO recognises this difficulty and identifies that three‐quarters of government departments face challenges in agreeing what their joint outcomes are and how they can be attributed at the departmental level.Footnote 59 In defence, the joint outcomes will largely be the result of strategic effects using the combined instruments of national power. This raises the question of whether these outcomes should be measured at all at the departmental level or, if they are cross‐governmental in nature, a cross‐governmental means of measurement should be adopted that sits above the individual departments’ performance management systems. This is a complex area that will be examined later, but – as a marker in the sand at this stage – it is worth noting that the current ‘Operations’ part of the Defence Balanced Scorecard ‘Output/Deliverables’ dimension fails to address the cross‐departmental challenges of the Effects Based Approach (EBA).

The Battlespace’s Problem Child – Cognitive Effect

These generic problems with the EBA are compounded when the added complications offered by cognitive effects are factored into the equation. Cognitive effects are an emerging part of UK doctrine, but have their antecedents – as so many ‘new’ theories do – in a more venerable body of literature. Before examining their coming of age, it is worth summarising the conception of cognitive effects in military, psychological and philosophical theory. The best start point – if only because it appears to go back the furthest – is with military theorists. This framework will then be complemented by philosophical and psychological theory where this adds value.

Although his musings are open to broad interpretation, Sun Tzu first made the point in the fourth century BC that ‘the supreme art of war is to subdue the enemy without fighting….’Footnote 60 This was probably the first written acknowledgement of cognitive effect. Clausewitz failed to address cognitive effect even in its most basic form of deterrenceFootnote 61 and, despite every conflict in history featuring cognitive effects in some form or another, surprisingly little was written until the Cold War. It was the unique nature of the Cold War – during which there were few physical effects – that spawned a morass of theory on deterrence, often to the neglect of broader study of cognitive effect and particularly compellence.Footnote 62 The fundamental difference between deterrence theory and compellence is that the former demands inaction while the latter demands action;Footnote 63 in this sense, compellence is time‐bounded in a way that deterrent effects are not. This distinction is important when the strategic effects framework is considered in the next section.

It is also worth noting that military theorists do not generally talk of cognitive effects as a concept, but do acknowledge the link between deterrence and compellence and how these relate to physical effect. The main difficulty with the theory is the inconsistency in applying terms such as coercion, compellence, containment and deterrence and in establishing their hierarchy and levels of interdependence. This complex interrelationship between cognitive effects is also what defines the dynamic cost/benefit equations of the actor and subject during a period of engagement. This dynamic relationship is important, as it is one of the main obstacles to measurement of effectiveness, in that the subject and actor may not perceive themselves as being at the same point within the process. The dynamic relationship between compellence and deterrence is shown at Figure 3.

FIGURE 3 THE DYNAMIC RELATIONSHIP BETWEEN COGNITIVE EFFECTS


The philosophy of cognitive psychology is divided between two schools. The Behaviourist School, exemplified by the famous Pavlov’s dogs, was centred on a Stimulus‐Response (S‐R) framework; when Pavlov’s dogs were presented with food, they unavoidably salivated. This model, however, fails to account for the well‐developed human neo‐cortex, which adds the capacity for cognition. This gave rise to the Cognitive School’s Stimulus‐Cognition‐Response (S‐C‐R) model and the birth of cognitive psychology.Footnote 64

Cognitive psychology theory adds further definition to the military theorists’ framework. First is the important differentiation in perception between sensory information and relevant past knowledge and experience.Footnote 65 This is important when considering cognitive effect, as the adversary or subject’s perception will be affected by the credibility and past performance of the attacker or actor. Indeed, this could be to the extent that no effect per se is actually necessary: the relative compliance of Libya in the wake of the invasion of Iraq in 2003 is testament to this. This particular example also illustrates the complex interrelationship that can exist between physical and cognitive effect. At its most basic level this means that if a physical effect is made on an individual, any witness to that effect will be subject to a corresponding cognitive effect of some form or another.

The second contribution made by cognitive psychology is the introduction of the psychological state of the subject or – as it has become known in business – the rational model.Footnote 66 The psychological state of a subject and the species or culture will determine reaction and the ‘affordances’ placed in an act or object; for instance: ‘…a hungry person will perceive the affordance of edibility when presented with an orange, whereas an angry wife may detect the affordance of a projectile and throw the orange at her bemused husband’.Footnote 67

This easily recognisable example introduces both the concept of rationality and the fragile boundary between desired and undesired outcomes. It is the former, however, that is important when considering a subject’s cost/benefit equation, an accurate assessment of which is the key to determining the success or otherwise of a cognitive effect. This theory also raises the issue of whether cognitive effects targeted against irrational actors would necessarily be futile.
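The subject’s cost/benefit equation can be caricatured as a simple rational-actor inequality. Everything in the sketch below is an illustrative assumption: real assessments rest on intelligence about the target rather than scalar parameters, and the model deliberately says nothing about an irrational actor.

```python
# Caricature of a coercive cost/benefit calculation. The scalar 'credibility'
# discount (0-1) and the stake values are illustrative assumptions only.

def expected_cost_of_defiance(threatened_damage, credibility):
    """The subject discounts the threatened damage by the actor's perceived
    credibility, which rests on past performance (as noted in the text)."""
    return threatened_damage * credibility

def subject_complies(value_of_defiance, threatened_damage, credibility):
    """Compliance is predicted when the discounted cost of defiance
    outweighs what the subject stands to gain by defying."""
    return expected_cost_of_defiance(threatened_damage, credibility) > value_of_defiance

# The same threat succeeds from a credible actor and fails from one whose
# credibility has been eroded.
print(subject_complies(10.0, 50.0, 0.9))   # True
print(subject_complies(10.0, 50.0, 0.15))  # False
```

Even in this caricature, the decisive variable is the subject’s perception of credibility rather than anything the actor can measure internally, which is the measurement problem the article identifies.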

The cardinal theme from this review of cognitive theory is that contemporary understanding of cognitive effects has been distilled from several specific academic disciplines that have each focused on distinct aspects of cognitive effect. For instance, military theorists tend to examine the effect itself as an ‘ends’, whereas psychologists are fixated more on the process. This hinders conceptual development of those cognitive effects that are the subject of this study, as it demands a multi‐disciplinary approach and – more importantly – renders both definition and application problematic.

Cognitive Effect in Adolescence – Spotty but Potent

It is because of this difficulty in definition and application that, despite top‐level endorsement,Footnote 68 cognitive effects remain very much in adolescence. There is presently no agreed lexicon for cognitive effects – or even the EBA – within the MOD,Footnote 69 let alone a cross‐departmental or multinational understanding. Against this organisational inertia at the strategic level is an operational imperative – stemming from the Strategic Defence Review – that is fuelling conceptual and doctrinal development of the EBA at the operational level. Emerging operational doctrine is beginning to realise the complexity of cognitive effect, and in particular its measurement, but within a purely military stovepipe and without multi‐agency involvement.Footnote 70

This complexity arises from the web of cause and effect that arises from any action. There is the initial complication of a strong interrelationship between physical and cognitive effect, as described in the last section. This is augmented with the complexity of ‘desired’,Footnote 71 ‘undesired’,Footnote 72 ‘intended’ and ‘unintended’Footnote 73 effects and the realisation that unintended effects must be identified as ‘it will be the mitigation of undesirable unintended effects, many no doubt caused by the adversary or by the operation of chance, and the exploitation of desirable unintended effects that will improve the likelihood of successfully achieving objectives and reducing risk’.Footnote 74 ‘Cascading’Footnote 75 of effects serves to further complicate an already multifarious construct.

In an attempt to add order to this chaos, emerging doctrine divides effects into two categories: ‘decisive’ and ‘enabling’. Decisive effects ‘directly contribute to the achievement of an objective’Footnote 76 and enabling effects ‘set the conditions for either the implementation of a further Action or the realisation of a decisive effect’.Footnote 77 McNicoll usefully augments the latter definition with the broad statement that enabling effects ‘will be necessary to shape the battlespace’.Footnote 78 What is not made clear in the doctrine, but has significance when we examine performance, is that an agency – possibly the military – could be required to have an enabling effect at the strategic level to set the conditions for, for example, a diplomatic decisive effect. This is challenging when it comes to attribution of the broader outcome and resource justification.

This challenge should be coupled with the fact that an action at any level (Tactical, Operational or Strategic) will not necessarily have a corresponding effect at that level. This means that while tactical effects resulting from tactical actions will clearly not enter the Defence Balanced Scorecard’s horizon, tactical action that has a strategic effect or outcome – whether intended/unintended or desired/undesired – may impact on departmental‐level performance. A recent example of this is the mistreatment of some Iraqi prisoners at the tactical level having an undesired and unintended strategic effect that cuts across all four perspectives of the Defence Balanced Scorecard.

The significant aspect of this example is that any damage to reputation within the ‘Resource Management’ perspective is reasonably easy to grasp by internal examination. Likewise, the training implications of this example under the ‘Enabling Processes’ perspective, or the personnel development issues under the ‘Building for the Future’ perspective,Footnote 79 are also evident. However, the effect of these atrocities on our current and future ability to ‘achieve success in the tasks we undertake’, as part of the ‘Output/Deliverables’Footnote 80 perspective, is far more difficult to ascertain.

It is these problems in assessing cognitive effects that, coupled with their complexity and the difficulty of attribution, render measurement within the existing perspectives excessively difficult. Emerging doctrine recognises that, for the purposes of the battlespace, ‘both quantitative and qualitative methodologies that inform a probability‐based estimation of overall effectiveness’Footnote 81 augmented with ‘military judgement and intuition’Footnote 82 will form the basis of this assessment. In other words, the intention is to hazard a guess. It is not glib to suggest that this sort of intuitive assessment is adequate for informing dynamic military planning, as such decisions are invariably made under uncertainty. Nor is it necessarily lacking as a basis for operational decisions on adding or reallocating resources to a given effect depending on the degree of success being enjoyed. Intuition is not, however, suitably robust for making long‐term judgements as to organisational strategy.

This important distinction is rooted in culture. A soldier conducts an estimate to establish what he knows within an uncertain battlespace – ‘…all action must be planned in a mere twilight…what this feeble light leaves indistinct to the sight talent must discover, or must be left to chance’.Footnote 83 This is in stark contrast to the culture of the performance manager in the business space, epitomised by the philosophy described at the very beginning of this paper that asserts ‘If you can’t measure it, you can’t manage it.’Footnote 84 The question is whether this difference represents a schism between the two cultures.

Killing the Queen’s Enemies versus Counting the Queen’s Pounds – a Schism?

While the title of this section may appear flippant, it strikes at the heart of this question and serves to highlight the cultural divide, epitomised by the EBA on one side and Performance Management on the other, that is central to this analysis. Of course, there are many servicemen who are also adept businessmen and there are probably civil servants who would willingly and effectively close with the enemy, but the generalisation made to preface this section is underscored by the inescapable fact that the Chief of the Defence Staff (CDS) is the Secretary of State’s principal military adviser while the Permanent Under Secretary of State – the head of the MOD civil service – is formally the department’s ‘Accounting Officer’.Footnote 85

If this cultural context is taken down a level and the precise nature of the EBA is juxtaposed with performance management, at least three inconsistencies identified in this paper stand out:

First, effects are aimed at producing outcomes whereas performance management measures output. In this sense, one is about effectiveness and one about efficiency.

Second, strategic effects can only be attained by aggregating the instruments of national power – potentially across a coalition – and are, therefore, certainly multi‐agency and potentially multinational; but the Defence Balanced Scorecard – as the name suggests – is an internally‐led process.

Third, the Balanced Scorecard was designed around tangible output metrics such as sales or production rather than esoteric cognitive effects.

These inconsistencies should all be considered within the cultural context alluded to in the introduction to this section; but does this represent a schism and, if so, is it insurmountable?

The acid test in determining whether a schism exists is to assess whether the system is capable of handling the complexities of cognitive effects. It is worth remembering when making this assessment that one of the original challenges in introducing the balanced scorecard to defence was whether output could be defined and what constituted success.Footnote 86 This judgement was made prior to the emergence of the EBA, when the operational output of the Armed Forces was largely dominated by easily measurable and generic military objectives in a series of similar operations in the Balkans. ‘Maintain a safe and secure environment’, for instance, is reasonably easy to apply metrics to. The monochrome of the Cold War was only beginning to gain colour at this stage; certainly, no one could have imagined the psychedelic patterns of the twenty‐first century that were to emerge.

In this new security context, where the ‘principal security challenges of the future are international terrorism, the proliferation of Weapons of Mass Destruction, and weak and failing states’,Footnote 87 metrics are more problematic. The ‘soft’ end of the strategic effects framework – ‘prevent’, ‘stabilise’, ‘contain’, ‘deter’ and ‘coerce’Footnote 88 – would appear to have greater utility against these challenges than the more‐traditional ‘hard effects’.

This raises challenges at two levels. First, if the strategic outcome is in itself one of these soft effects – achieved mainly through cognitive means – such as containment or deterrence, it may be impossible to establish whether the outcome is both desired and intended, and to whose contributions or capabilities the outcome should be attributed. Second, if the outcome is a combination of these effects within a dynamic model, the fact that an effect has been intended – even if it is followed by another, possibly ‘harder’, effect before its Measures of Effectiveness (MOE) have been established – means its success or otherwise cannot be accounted for.

In a complex series of effects interacting within a dynamic security environment such as that leading up to the 2003 war in Iraq, the attribution of success to strategic effects such as ‘contain’ (while United Nations Security Council Resolution 687 remained extant) through ‘deter’ and ‘coerce’ to ‘defeat’Footnote 89 is impossible. Should this performance be reflected in performance management as failure in the first three or success in the last? ‘Contain’ may have worked if the decision‐makers (who are also the ‘customers’) had allowed time for the effect to take place. Regardless, this example highlights the fact that military outcomes, and those outcomes they contribute to, cannot always be measured in a way that contributes anything meaningful to performance management or organisational strategy.

This is not to say, of course, that the EBA and the Balanced Scorecard are completely incompatible. There are many instances where a cognitive effect may be achieved as part of a military objective or outcome in a way that is easily measurable. An example of this is the contribution Psychological Operations (PSYOPS) made to surrender rates of insurgents in Malaya.Footnote 90 The problem is that, for the reasons explained in the last section, cognitive effects will rarely be so discernible in terms of attribution and outcome. One agency having a cognitive effect in isolation at a distinct level of warfare is almost inconceivable in the contemporary battlespace.Footnote 91 It is this multi‐agency, multinational, multi‐dimensional framework in which cognitive effects must now be delivered that is simply too difficult for the current performance management regime and is the cause of the schism described in this section. The existence of this schism warrants the conclusion that the current performance management regime cannot cope with cognitive effect.

Overcoming the Challenge

To overcome this schism is to assimilate two concepts that are diametrically opposed both in terms of culture and start point. The irony is that cognitive effect is potentially the most efficient means of pursuing an outcome in terms of resources, but the most difficult to assimilate into business models aimed at increasing efficiency. Notwithstanding this twist, the problem is in bringing together two incompatible doctrines. There are essentially two ways that this first order problem can be overcome: either accept that cognitive effect – and the broader EBA – are too difficult to measure in business terms, or adapt the performance management regime outside the commercially‐accepted orthodoxy.

The existing system already errs towards the first option, yet it may not be selling the Defence Balanced Scorecard short. If performance management of military output is centred completely on an ability to respond to a set of tasks – the current ‘effectiveness’ measureFootnote 92 – then the more challenging ‘operations’Footnote 93 metric could potentially be sidelined or even ignored. The resulting regime would represent a robust measure of the state of the toolset, with either a diluted judgement as to its use or none at all – and the accompanying acceptance that this is the case. This is the path of least resistance, but to have either no judgement or a meaningless ‘fudge’ on performance on operations in the EBA era would probably affect the credibility of the otherwise robust Defence Balanced Scorecard. The assessment would be a ‘fudge’, because the challenges of attribution, measurement itself, undesired and unintended effects and cascade are too difficult for the current system and, if attempts to address them were made, they would overwhelm it.

The second option would be to adapt the Balanced Scorecard system to cater for cross‐governmental outcomes. The NAO model shown earlier in this paper provides a useful basis for this solution. Figure 4 shows an adaptation of the NAO model, with the MOD and two illustrative Other Government Departments (OGDs) working towards an outcome. The MOD part of this process has adapted Defence Balanced Scorecard perspectives superimposed on it. The model differentiates between efficiency and effectiveness and shows a ‘Deliverables’ perspective that utilises measures centred on output – in terms of the state of the Diplomatic, Information, Military and Economic (DIME) toolsetFootnote 94 – but principally outcome, in terms of that toolset’s contribution to a multi‐agency objective.

FIGURE 4 THE CROSS‐DEPARTMENTAL RELATIONSHIPS BETWEEN INPUTS, OUTPUTS AND OUTCOMES AND HOW THE DEFENCE BALANCED SCORECARD COULD BE ADAPTED TO FIT THIS CONSTRUCT


The assumption behind this construct is, of course, that a cross‐governmental assessment be made of both the effectiveness in delivering an outcome and attribution of the outcome between capabilities from different departments. To achieve this would require a far greater degree of ‘joined‐up government’ than exists at the moment and raises some second‐order challenges.

The requirement for a cross‐governmental assessment is dependent on other departments adopting the EBA. The acknowledgement is made in emerging doctrine that the MOD’s aims ‘in the very near‐term’ are to encourage the use of EBA ‘in UK cross‐Departmental strategic change management’ and ‘explore ways to integrate allied and friendly nations’ instruments of power’,Footnote 97 but this is a tall order while there is still no agreed lexicon and many of the terms in use are of a very military nature and, therefore, probably unpalatable to OGDs. This is by no means insurmountable, but it is a reflection of the difficulty likely to be encountered when trying to achieve consensus. Such consensus would be central to the cross‐departmental assessment and attribution being described here.

There is then the difficulty of the actual assessment and who would make it. This article has suggested that aspects of the assessment of whether an outcome has been achieved or not may rely on intelligence about the subject/adversary rather than judgements about our own actions. This, in itself, creates issues of ownership. There is also the problem of how undesired effects are accounted for and whether these should somehow be counted against successes. Finally, there is a temporal consideration. Second or third order effects may happen well after the period of engagement yet may still impact on national interests or objectives. The US’s involvement in Afghanistan during the Cold War, for instance, had second and third order effects in a different century.

Multidimensional Policies

The conclusion this paper makes is that the current performance management regime cannot cope with cognitive effects. The last section suggested a methodology that could overcome this problem. The principal recommendation here, which stems from this proposed methodology, is that the MOD’s performance should be measured purely in terms of the maintenance of a toolset. The orchestrated use of this toolset, along with the other instruments of power, should then be subject to a cross‐governmental assessment of performance in using this capability. The only other option is to accept that the existing performance management regime is unable to cope with emerging doctrine. This course of action would undermine the credibility of an otherwise robust system.

The difficulty of implementing a methodology as described is underscored by the assumption that was made in qualifying the model: that a cross‐governmental assessment be made of both the effectiveness in delivering an outcome and attribution of the outcome between capabilities from different departments. This demands a level of integration that may prove elusive in the short term. In this respect, the EBA may have come before its time, as the structures to support it simply are not in place. A previous Director General of the Joint Doctrine and Concepts Centre (JDCC) perhaps underestimated the enormity of this problem when he said ‘…the traditional structure of our military needs to be changed, to one which is truly joint in future operations, in order to unlock the full potential of the future battlespace’.Footnote 98 It is now apparent that jointery is only one dimension of the problem. If the existing schism between the battle and business spaces is to be overcome, so‐called joined‐up policy must achieve a level of multidimensionality that not only harnesses the synergies of jointery but also those of multi‐agency, cross‐departmental and multinational effects, all delivering similarly joined‐up outcomes.


Notes on contributors

H. M. Tomlyn

Major H.M. Tomlyn, British Army, ACS CB.

Notes

1 M. Armstrong, Performance Management: Key Strategies and Practical Guidelines (London: Kogan Page 2000) p.52.

2 Including Modernising Government, National Measurement System, Resource Account Budgeting and Public Service Agreements.

3 National Audit Office, Measuring the Performance of Government Departments (London: TSO 22 March 2001) p.9.

4 Ibid. p.1.

5 Ibid.

6 ⟨http://www.official.documents.co.uk⟩ [Accessed 27 Oct. 2004].

7 ⟨ http://www.MOD.uk/img/aboutus/dmb/scorecard⟩ [Accessed 27 Oct. 2004].

8 Ministry of Defence, Managing Organisational Performance in the (UK) Ministry of Defence.

9 Ministry of Defence, Performance Report 2001/2002 (Ministry of Defence).

10 The combat power to deliver effect – regardless of the level at which the effect is targeted – will normally be derived at the tactical level. See Air Vice‐Marshal I. McNicoll, ‘Effects Based Air Operations: Air Command and Control and the Nature of the Emerging Battlespace’, RUSI Journal 148/3 (June 2003) pp.38–44, p.39.

11 The principal performance measure used in determining that operations and military tasks are being conducted as directed by Ministers.

12 Ministry of Defence, Delivering Security in a Changing World – Future Capabilities (London: TSO 2004) p.6.

13 The most up‐to‐date definition of EBA is: ‘…an approach whereby the advantages of a global perspective of the battle‐space, the idea of an operational environment without operational boundaries and the full integration and orchestration of the instruments of national power can be profitably exploited’. Joint Doctrine and Concepts Centre (JDCC), UK Military Effects‐Based Operations – an Analytical Concept (JDCC, 2005) p.1.

14 Ibid.

15 Strategic Effects as listed in: Ministry of Defence, Delivering Security in a Changing World – Supporting Essays (London: TSO 2003) p.6.

16 McNicoll (note 10) differentiates cognitive effects as those ‘principally targeted against will’. Current doctrine offers no definition, but states that cognitive effects are ‘behavioural’ rather than ‘against capability’. See JDCC, UK Military Effects‐Based Operations (note 13).

17 This could, of course, include inaction.

18 Although this term appears to have no formal provenance, it is widely used to describe the commercial aspects of the MOD’s processes.

19 Armstrong (note 1) p.1.

20 N. Britten, ‘A bigger picture of performance: Managers often neglect strategic goals when measuring how their company is doing.’ Financial Times. 7 Aug. 2001, p.16.

21 R.S. Kaplan and D.P. Norton, The Balanced Scorecard: Translating Strategy into Action (Boston, MA: Harvard Business School Press 1996) pp.vii–viii.

22 Kaplan and Norton (note 21) p.ix.

23 O. Gadiesh, ‘Transforming Corner‐Office Strategy into Frontline ACTION [sic]’, Harvard Business Review 79/5 (May 2001) p.72, gives an account of the use of strategic principles.

24 Britten (note 20) p.16.

25 This is the central theme of R.S. Kaplan and D.P. Norton, The Strategy Focused Organisation: How Balanced Scorecard Companies Thrive in the New Business Environment (Boston, MA: Harvard Business School Press 2001).

26 Kaplan and Norton (note 25) p.1.

27 Kaplan and Norton (note 21) p.92.

28 See A.D. Chandler Jr., Strategy and Structure: Chapters in the History of the American Industrial Enterprise (Cambridge, MA: MIT Press 1986) for the full history and application of matrix management.

29 EFQM has a slightly broader remit than purely internal practices; it can, however, reside within the Internal Business Process Perspective. For more detail on EFQM see Armstrong (note 1) pp.58–9.

30 See ⟨http://www.iso-9000.co.uk/⟩ for further details [Accessed 19 March 2005].

31 See ⟨http://www.onesixsigma.com/⟩ for further details [Accessed 19 March 2005].

32 Kaizen roughly translates as ‘continuous improvement’. For more information on Kaizen see N. Oliver and B. Wilkinson, The Japanization of British Industry: Developments in the 1990s (Oxford: Blackwell 1992).

33 Kaplan and Norton (note 21) p.126.

34 Ibid. p.127.

35 Ibid. p.49.

36 Ibid. p.61.

37 Ibid. p.67.

38 Ibid. p.179.

39 This example is taken from the nonprofits and government section of Kaplan and Norton (note 25) p.155.

40 Organisation for Economic Co‐operation and Development, Performance Management in Government: Contemporary Illustrations (Paris: OECD 1996) p.15.

41 Ibid.

42 Examples of cross cutting ‘joined‐up government’ initiatives can be found at the Prime Minister’s Strategy Unit website: ⟨http://www.strategy.gov.uk/output/Page3684.asp⟩ [Accessed 19 March 2005].

43 OECD, Performance Management (note 40) p.19.

44 NAO (note 3) p.2.

45 OECD (note 40) p.15.

46 The terms ‘brass hats’ and ‘frock coats’ are often used to describe the relationship between generals and their civil masters. For instance, see David French, The Strategy of the Lloyd George Coalition, 1916–1918 (Oxford: OUP 1995).

47 Ministry of Defence, Annual Reports and Accounts 2003/2004 (Ministry of Defence).

48 Ibid.

49 National Audit Office (note 3) p.6.

50 MOD, Managing Organisational Performance in the (UK) Ministry of Defence (note 8).

51 Available at ⟨http://www.MOD.uk/img/aboutus/dmb/scorecard.jpg⟩ [Accessed 27 Oct. 2004].

52 Ibid.

53 MOD (note 8).

54 See descriptions of these perspectives above.

55 MOD (note 8).

56 Ibid.

57 Ibid.

58 NAO (note 3) p.9.

59 Ibid. p.2.

60 Sun Tzu, The Art of War (London: Hodder & Stoughton 1981) p.7.

61 Carl von Clausewitz, On War (edited with an introduction by A. Rapoport) (London: Penguin 1968) p.413. The Editor makes the point that Deterrence was not in Clausewitz’s lexicon as ‘Preparation for war, in Clausewitz’s estimation, had only one objective – war.’ In a sense this mindset – if known to an adversary – represents a deterrent in its own right, but it seems Clausewitz and the Editor failed to realise this subtlety.

62 Pre‐eminent in this period was Thomas Schelling, whose thinking was central to Western deterrence theory during the Cold War. It was the failure of this theory in Vietnam that led to a more cautious approach and work such as that by George and Simon. A good account of the development of these theories can be found in Lawrence Freedman (ed.), Strategic Coercion: Concepts and Cases (Oxford: OUP 1998) pp.1–14.

63 Freedman (note 62) p.19.

64 An excellent account of the history of Cognitive Psychology can be found in J.R. Anderson, Cognitive Psychology and its Implications (New York: W.H. Freeman 1990) pp.5–10.

65 M.W. Eysenck, A Handbook of Cognitive Psychology (London: Lawrence Erlbaum Associates 1998) p.27.

66 The state where a decision‐maker weighs up all alternatives and always produces a rational decision. There is also the concept of bounded rationality, introduced by Herbert Simon, which recognises the use of heuristics in decision‐making. For further information see I. Brooks, Organisational Behaviour – Individuals, Groups and Organisations (London: Prentice Hall 2003) p.36.

67 Eysenck (note 65) p.29.

68 As evidenced in the Strategic Effects listed in the Strategic Defence Review. See Ministry of Defence, Strategic Defence Review (London: TSO 1998) and Ministry of Defence, Delivering Security in a Changing World (London: TSO 2003) p.6.

69 JDCC (note 13) aims to overcome this difficulty within the department, but at the time of writing this paper had not been endorsed by the Chiefs of Staff.

70 Although there is an aspiration to formally expose OGDs to the EBA, the concept currently resides within MOD. This is recognised in JDCC (note 13) Annex A.

71 Ibid. p.B‐1. ‘…supporting the attainment of an objective’.

72 Ibid. ‘…undermining the attainment of an objective’.

73 Ibid. p.B‐2.

74 Ibid.

75 Ibid. ‘…a number of Effects may occur as the result of a single Action and these Effects in turn lead to further Effects’.

76 Ibid. p.B‐1. McNicoll (note 10) p.39 limits the definition to the ‘achievement of strategic or operational military effects’.

77 JDCC (note 13) p.B‐1.

78 McNicoll (note 10) p.39.

79 See Figure for these perspectives in the context of the Defence Balanced Scorecard.

80 Again, Figure provides the context of this perspective.

81 JDCC (note 13) p.12.

82 Ibid.

83 Clausewitz (note 61) p.189.

84 Armstrong (note 1) p.52.

85 Sir Ewen Broadbent, The Military and Government (New York: St. Martin’s Press 1988) p.184.

86 MOD (note 8).

87 Secretary of State for Defence, the Rt. Hon. Geoff Hoon in: Ministry of Defence, Delivering Security in a Changing World – Future Capabilities (London: TSO 2004) p.1.

88 Ministry of Defence, Delivering Security in a Changing World – Supporting Essays (London: TSO 2003) p.6.

89 Ibid.

90 Empirical data to support this is in: Malayan Communist Party 1948–1960. Available at ⟨http://www.britains-smallwars.com/malaya/mcp.html⟩ [Accessed 16 March 2005].

91 This is a strong assertion, but the level of jointery that exists within the UK Armed Forces and those of our allies almost certainly makes this the case even before multi‐agency activities are factored in.

92 See Figure.

93 This would be similar to the existing ‘Effectiveness’ measures based on readiness and equipment.

94 Available at ⟨http://www.MOD.uk/img/aboutus/dmb/scorecard.jpg⟩ [Accessed 27 Oct. 2004].

95 NAO (note 3) p.2.

96 JDCC (note 13) p.A‐1.

97 McNicoll (note 10) p.44.
