
Standards as catalyst for national innovation and performance – a capability assessment framework for latecomer countries


Abstract

This paper develops a framework for assessing national standards capability. The framework draws on total quality management (TQM) models as its theoretical basis and was developed through an expert panel, interviews, and focus group workshops. To verify the framework, a pilot implementation was conducted in four countries. The results show that the proposed framework is useful for measuring the strengths and weaknesses of overall capability, per pillar and category, in a national standards system. The paper contributes to the extension of the TQM model to national-level capability assessment, enabling more effective evaluation and systematic development of national standards systems in developing countries.

1. Introduction

The objective of this research was to develop a framework for evaluating the national capability of a standards system. To date, no developed country has been without a national standards system for setting and implementing national standards, because such a system is essential to a nation's socio-technical development. Despite the critical roles a national standards system can play, its importance has often been neglected or discussed only as part of a quality system or a trade system.

However, changes in global trade flows over the past couple of decades have gradually increased recognition of the important role of standards and quality in effective economic development (Guasch, Racine, Sánchez, & Diop, 2007). As Choi, Lee, and Sung (2011) point out, a growing body of public policy and academic literature perceives standardisation as a catalyst for national or company innovation by facilitating access to markets and enabling interoperability between new and existing technologies, products, services, and processes (Blind, 2009; DIUS, 2009; Farrell & Saloner, 1985; Galvin & Rice, 2008; Kano, 2000; Swann & Britain, 2000). For these reasons, emerging latecomer countries, such as Korea and China, have considered standards a strategic mechanism for their transition from technology-based rule users or followers to rule generators in the international market (Choung, Hwang, & Choi, 2000; Choung, Ji, & Hameed, 2011; Lee & Oh, 2008).

One of the key concerns in the area of standards is their impact on trade, including technical barriers and the mutual recognition of conformity assessment results (Chen & Novy, 2012; Heckelei & Swinnen, 2012; Swinnen & Vandemoortele, 2012). That influence is not limited to specific sectors, however, but extends to practically all sectors of an economy, from food and agriculture (Henson, Masakure, & Cranfield, 2011; Maertens, Minten, & Swinnen, 2012; Neilson & Pritchard, 2007; Ponte, Raakjær, & Campling, 2007; Ruben & Fort, 2012) to automotive, electronics, environment, and services (Brenton, Edwards-Jones, & Jensen, 2009; Eden, Bear, & Walker, 2008; Geng, Dong, Xue, & Fu, 2012; Rich & Perry, 2011; Shiroyama, 2007).

The organisations for standardisation, metrology, and accreditation make up a country's national quality infrastructure (Frota et al., 2010). National standards-related activities should therefore be considered an indispensable component of a nation's quality and technological system and a clear basis for economic competitiveness, and an appropriate mechanism for evaluating them is needed for successful national development.

2. Literature review on national standards capability

To date, there has been only limited academic or empirical research on evaluation tools for national standards capability. Although some international organisations and developed countries have provided assistance programmes to help developing countries apply standards and build capacity in national standards activities, no tool yet assesses overall national standards capability across all standards-related sectors. A few meaningful conceptual frameworks or approaches can nonetheless be noted here.

The World Bank and the United Nations Industrial Development Organization (UNIDO) have dealt with certain components of a national standards system in their policy reports and capacity-building programmes and have provided useful guidelines for developing the major components of a national standards infrastructure (Guasch et al., 2007; UNIDO, 2003, 2010). However, neither organisation has attempted to create a specific framework for evaluating national standards capability; in their work, national standards activities are defined mainly for the purpose of trade capacity building. UNIDO's Standards, Metrology, Testing and Quality (SMTQ) programme has not developed an assessment framework; rather, it aims to raise competitive capability so that developing countries can conform to market requirements and thus connect to markets. The World Bank manual considers standards and quality systems critical for a national competitive edge (Guasch et al., 2007). The manual contains very useful and comprehensive descriptions of the concepts, components, and interactive relations of a national standards infrastructure, but it does not develop an assessment framework for a national standards system.

The International Organization for Standardization (ISO) started the Institutional Strengthening (INS) project in 2011 based on the ISO Action Plan for Developing Countries 2011–2015 (ISO, 2010, 2011). The objective of the INS project is not to develop an evaluation tool but to assist developing countries in strengthening their own institutional capacity. The project is still at a very early stage of development, and its intended outcome is to improve the processes of developing-country members and to formulate meaningful and pragmatic business plans that will drive their organisations forward to greater achievements, especially in terms of their involvement in international standardisation. The planned INS project outputs address good practice and a manual for the formulation of a national strategy; marketing and communication; an information and communication technology (ICT) infrastructure; an enhanced role in national quality infrastructure building; and stakeholder involvement (ISO, 2011).

The International Telecommunication Union (ITU) and the Korea Communications Commission (KCC) developed the Tool for Assessing Standards Capability (TASC) for the ICT sector. The TASC tool was developed as part of the ITU Bridging the Standardization Gap (BSG) programme (ITU, 2009). One specific objective of this ITU tool is to identify primary gaps and provide recommendations on how to improve the standards development, implementation, and usage capacities of developing countries for ICT standards. The TASC tool contains four evaluation categories: standards development capacity, standardisation human resources, government standards policy, and national standards use and adoption. The result of a TASC evaluation is expressed as one of four capacity levels, ranging from Level 1 ('low') through Level 2 ('basic') and Level 3 ('intermediate') to Level 4 ('advanced'). The TASC tool is important as the very first tool available for assessing national standards capability. Its limitations include, first, that it does not cover the overall national system but is ICT sector-specific and, second, that it was developed without a theoretical model.

This absence of an evaluation framework for a national standards system and its capability or performance is problematic because there currently exist no systematic mechanisms to measure and improve a national standards system. There is thus a need to develop a more comprehensive framework to describe and assess actual national standards capability. This assessment framework should cover all the key dimensions of a national standards system, such as testing, certification, accreditation, and metrology, and it should also take into account the aspects covered in the approaches above, such as institutions, human resources development, stakeholders, and infrastructure.

Within this context, total quality management (TQM) models fit best, because a national standards system is the foundation of a national quality system. TQM is an integrative philosophy of management used to continuously improve the quality of products and processes (Ahire, 1997). Dahlgaard, Chen, Jang, Banegas, and Dahlgaard-Park (2013) reviewed the assessment criteria and limitations of TQM and further emphasised that TQM is a very flexible model for adaptation, provided its core dimensions are understood and respected. Indeed, the core values and criteria of TQM models have evolved since the mid-1980s; the 25 years of evolution of TQM and related quality studies are well summarised by Dahlgaard-Park, Chen, Jang, and Dahlgaard (2013). During the last two decades, TQM has also been extended to non-manufacturing service sectors including higher education, health care, customer satisfaction, tourism, and, recently, innovation (van Iwaarden & van der Valk, 2013; Wang, 2012; Xiaorong, Bojian, & Huili, 2012). Asif and Searcy (2014) argued that performance excellence in higher education institutions requires the management of capabilities related to research, programme design and delivery, and service performance. Chin and Tsai (2012) established a service quality evaluation model for luxurious restaurants in international hotel chains to help guarantee the restaurants' success. Fotiadis and Vassiliadis (2013) explored the effects of new facilities on patients' perception of service quality in general hospitals in Greece. Recent TQM literature discusses the linkages between the TQM model and innovation; quality management and innovation; organisational innovation capability and product platform development; and the impact of organisational learning on innovation performance (Chien, Lin, & Lien, in press; Dadfar, Dahlgaard, Brege, & Alamirhoor, 2013; Leavengood, Anderson, & Daim, in press). TQM models have also been well developed to evaluate organisational performance quantitatively. Therefore, as specified in Section 4, TQM criteria are selected as the basis for developing the assessment criteria of a national standards system and its innovation capability.

3. Methodology

This research used qualitative approaches to develop an assessment framework for national standards capability. Qualitative research is a form of scientific inquiry that spans different disciplines, fields, and subject matter and includes many varied approaches (Denzin & Lincoln, 2005). Qualitative methods are frequently chosen in research contexts that match the scope and goal of the current study, namely, to investigate complex phenomena that are difficult to measure quantitatively, generate the kind of data necessary for a more comprehensive understanding of a problem, gain insights into potential causal mechanisms, develop sound quantitative measurement processes or instruments, and/or study special populations (Curry, Nembhard, & Bradley, 2009). This paper therefore applies a mixture of individual interviews, focus group workshops, and pilot implementation in order to investigate the complex phenomenon of a national standards system and identify mechanisms to assess its capability, as outlined in Figure 1.

Figure 1. Outline of the research method.

First, a group of eight experts was organised as a research team and panel. These experts had professional experience averaging around 19 years in one or more standards-related areas, including national and international standardisation, conformity assessment for products and management systems, scientific and legal metrology, regulations and trade, and quality management systems. This expert group served as the development team: it studied the theoretical groundwork, defined the draft characteristics, created the skeleton framework, and prepared and conducted the interviews, workshops, and pilot implementation used to verify the final draft framework. In the process, the TQM model was chosen as the theoretical basis.

Second, interviews with individual experts and focus group workshops were held between March 2011 and August 2012. Participating experts included policy makers and experts from laboratories, industry, and academia. The focus group workshops included sessions with experts from the four countries that participated in the pilot implementation, thus giving the framework cross-national characteristics.

Third, a pilot implementation was conducted in four countries from August 2011 to October 2012 to validate and upgrade the draft framework. The trial assessments of these four countries' activities were held in conjunction with focus group workshops, in particular with experts from the assessed countries. This interactive design enabled the experts to participate in the focus group workshops and then discuss and provide direct feedback to the framework development team so as to improve the quality of the final framework.

4. Results: the National Standards Capability Assessment Framework

4.1. Defining the three key pillars of a national standards system: SCaM

Prior studies have often referred to a standards-related infrastructure as a national quality infrastructure, as summarised in Table 1 (Guasch et al., 2007; Sanetra & Marbán, 2007). These studies used slightly different scopes or definitions for the components of national standards. For instance, a national standards system has sometimes been referred to as SMTQ (UNIDO, 2010) or as Standardisation, quality assurance and conformity assessment, accreditation, metrology (SQAM) services (Dinu, 2008). It should also be noted that standardisation and metrology were commonly included in the scope, whereas conformity-related terms were often described differently; these include testing, inspection, certification, calibration, accreditation, and conformity assessment.

Table 1. Defining the key components of national standards system and a comparison to prior studies.

However, considering that 157 nations were members of the World Trade Organization (WTO) and 131 nations provided Agreement on Technical Barriers to Trade (TBT) enquiry-point services as of August 2012, the WTO definition of conformity assessment can be considered one of the most widely referenced. The WTO (1995) defines 'conformity assessment' as follows: 'Conformity assessment procedures include, inter alia, procedures for sampling, testing and inspection; evaluation, verification and assurance of conformity; registration, accreditation and approval as well as their combinations.' ISO (2006) and ISO, UNIDO (2009) use almost the same definition of the term.

As presented in Figure 2, drawing on these definitions of the WTO (1995) and ISO, UNIDO (2009), this paper treats conformity assessment as an overarching term covering testing, inspection, certification, and the accreditation of laboratories, and further defines the three key pillars of a national standards system as Standardisation, Conformity Assessment, and Metrology (SCaM).

Figure 2. SCaM: The three key pillars of a national standards system.
SCaM-related systems and their activities interdependently constitute the national standards system of a country.

First, the Standardisation pillar covers the development, adoption, and dissemination of standards for products, processes, and management systems. Second, the Conformity assessment pillar covers the activities and procedures used to demonstrate fulfilment of the specific requirements of standards, such as testing, inspection, certification, and their accreditation. Third, the Metrology pillar covers the scientific and legal metrology systems that ensure measurements are made with appropriate accuracy and reliability both domestically and internationally (ISO, UNIDO, 2009). A national standards system or infrastructure may also serve as the foundation for a national quality system and, further, for a national technology innovation system.

4.2. Developing the seven assessment categories using TQM models

After characterising the key pillars of a national standards system, the next task is to develop a framework that can evaluate national capability, considering the complicated organisational dynamics of the three SCaM pillars. Different models were considered, including Strengths, Weaknesses, Opportunities, and Threats (SWOT) analysis (Humphrey, 2005), the diamond model (Porter, 1990), and TQM models. Among these, the TQM models were selected as the basis for the assessment framework for three specific reasons.

First, TQM models fit best when measuring and describing national standards capability, as that capability depends on how a nation manages its standards activities. Second, a national standards system is the foundation for a national quality system, as previously illustrated in Figure 2. A quality system is the primary target of TQM, and TQM criteria are therefore well suited by nature to identifying the current status of, and improving, national standards capability as the basis of a national quality system. Third, TQM models have been well developed to evaluate organisational performance quantitatively, with scores that are useful for describing national differences in standards system development.

The TQM models, often called performance (business) excellence models, have been used in different regions and countries, examples being the Australian Business Excellence Framework, the European Foundation for Quality Management (EFQM) Excellence Model, the Korean National Quality Award, the Singapore Quality Award Criteria, and the Baldrige Criteria for Performance Excellence in the USA. In Table 2, the selected assessment criteria of the TQM models in the USA and Europe and the criteria for a national standards system developed by the ITU and ISO are compared in order to develop precise assessment categories for a national standards system.

Table 2. Comparison of assessment criteria of TQM models and national standards system approaches.

There are many theories about what constitutes TQM and the scope of the criteria needed to implement it. As Alonso-Almeida and Fuentes-Frías (2012) describe, the elements used as a guide and for evaluation can be highly heterogeneous depending on geographical location, even though the various models all offer viable routes to excellence in quality. Reviewing the existing approaches to a national standards system and the various evaluation criteria of TQM models, seven categories are distilled to assess a national standards system. The definitions of these seven categories are offered in Table 3.

Table 3. Seven assessment categories of the NSCAF.

4.3. Combining three pillars and seven evaluation categories: The NSCAF Framework

The three key pillars of a national standards system – SCaM – are combined with the seven assessment categories to develop the assessment criteria for the full framework. This national standards capability assessment framework (NSCAF) is conceptualised in Figure 3. The combination of the three SCaM pillars and the seven categories thus constitutes 21 assessment areas in the NSCAF. These 21 area descriptions are further extended into assessment items and a questionnaire for the actual assessment.

Figure 3. Concepts of the NSCAF.
The NSCAF is a combination of the three SCaM pillars and the seven assessment categories.
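For illustration only (this sketch is not part of the original framework documentation), the following Python snippet enumerates the 21 assessment areas implied by crossing the three SCaM pillars with the seven categories named in the text.

```python
# Minimal sketch: the 21 NSCAF assessment areas formed by crossing the
# three SCaM pillars with the seven assessment categories.

PILLARS = ["Standardisation", "Conformity assessment", "Metrology"]

CATEGORIES = [
    "Laws, systems, and institutions",
    "Strategies and implementation plan",
    "Stakeholder engagement",
    "Human resources development",
    "Infrastructure",
    "Process management",
    "Outcome",
]

# One assessment area per (pillar, category) pair; scores are filled in later.
nscaf_areas = {(pillar, category): None for pillar in PILLARS for category in CATEGORIES}

print(len(nscaf_areas))  # 21 assessment areas
```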

4.4. Scoring and interpreting the results

The total score for the NSCAF evaluation is 1,000 points, as in the Baldrige or EFQM criteria. Based on the research interviews, focus group workshops, and pilot implementation, a 400:300:300 split is allocated to the three SCaM pillars: a sub-total of 400 points for the standardisation pillar, 300 points for the conformity assessment pillar, and 300 points for the metrology pillar.

The underlying philosophy of this score allocation is that the three SCaM pillars should be balanced and interrelated; the allocation serves more as an analytical weighting. Because standardisation is the core aspect of a national standards system, the participating experts in the focus group workshops discussed and agreed that it should carry more points than the other two pillars. Although conformity assessment covers a wide range of activities, metrology, especially in developing countries, is also a very important area for any national standards system to develop; in that sense, conformity assessment and metrology are weighted equally. It was also noted that each pillar may need to be evaluated separately, depending on the purpose of the assessment.

Because each pillar has different characteristics, the NSCAF assigns different scores to the assessment categories within each pillar. For instance, laws, systems, and institutions are more important and more complicated for the standardisation and conformity assessment pillars than for the metrology pillar; these categories therefore receive higher scores there. Infrastructure, which includes hardware facilities and equipment, is more important for the conformity assessment and metrology pillars than for the standardisation pillar. The outcome category is allocated an equal 20% of the sub-total score in each pillar. The scoring for each pillar and category, and its percentage share of the (sub-)total score for the NSCAF, is described in Table 4.

Table 4. Scoring for NSCAF.
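As a quick check of the arithmetic described above, the illustrative Python sketch below encodes the 400:300:300 pillar split and the 20% outcome share; the actual per-category weights are those given in Table 4 and are not reproduced here.

```python
# Illustrative encoding of the NSCAF score allocation. The 400:300:300 pillar
# split and the 20% outcome share come from the text; per-category weights
# are in Table 4 and are not reproduced here.

PILLAR_SUBTOTALS = {
    "Standardisation": 400,
    "Conformity assessment": 300,
    "Metrology": 300,
}

assert sum(PILLAR_SUBTOTALS.values()) == 1000  # total NSCAF score

def outcome_points(pillar: str) -> float:
    """The outcome category receives 20% of each pillar's sub-total."""
    return 0.20 * PILLAR_SUBTOTALS[pillar]

for pillar, subtotal in PILLAR_SUBTOTALS.items():
    print(f"{pillar}: {subtotal} points, of which outcome = {outcome_points(pillar):.0f}")
```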

The scoring items for the NSCAF assessment criteria are classified by the kind of information and data they furnish into two evaluation groups – process and outcome – as in the Baldrige performance criteria (NIST, 2011). First, the check sheet for the 'process' group is used to evaluate the six process-related categories; their assessment items are scored using two-dimensional combinations of an X-axis for the system-building level and a Y-axis for the implementation level of national standards activities. Second, the check sheet for the 'outcome' group is used to evaluate the outcome category; its assessment items are scored using two-dimensional combinations of an X-axis for the outcome level and a Y-axis for the outcome tendency.
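To make this two-dimensional scoring concrete, the sketch below shows one hypothetical way such a 'process' check sheet could be operationalised; the five-level scale and the simple averaging rule are illustrative assumptions, not the NSCAF rubric itself.

```python
# Hypothetical illustration of the two-dimensional 'process' check sheet:
# an item's score fraction is derived from its system-building level (X-axis)
# and its implementation level (Y-axis). The five-level scale and the simple
# averaging rule are assumptions for illustration only.

def process_score_fraction(system_building_level: int, implementation_level: int,
                           max_level: int = 5) -> float:
    """Combine the two axes into a fraction of the item's allocated points."""
    if not (1 <= system_building_level <= max_level
            and 1 <= implementation_level <= max_level):
        raise ValueError("levels must lie between 1 and max_level")
    return (system_building_level + implementation_level) / (2 * max_level)

# Example: a well-built but weakly implemented system scores 60% of the item points.
print(process_score_fraction(system_building_level=4, implementation_level=2))
```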

The assessment result, per SCaM pillar or in total, categorises the assessed country into one of five stages of national standards capability. Least-developed countries with very limited capability may fall into the first, (i) introductory, stage, while the majority of developing countries will likely fall into the two middle stages, namely, (ii) the establishing stage and (iii) the growing stage. Many newly developed countries and developed countries may fall into the two highest stages, namely, (iv) the maturity stage and (v) the innovation stage, as presented in Figure 4.

Figure 4. Five stages of countries using the NSCAF.
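The following sketch illustrates how a total score might be mapped to the five stages; the stage names come from the framework, but the cut-off scores are assumptions chosen here only so that they reproduce the stage assignments reported for the pilot countries in Section 5.2.

```python
# Hypothetical mapping from an NSCAF total score (out of 1,000) to the five
# capability stages. Stage names are from the paper; the cut-offs below are
# illustrative assumptions only.

STAGE_BOUNDS = [
    (200, "introductory"),
    (400, "establishing"),
    (600, "growing"),
    (800, "maturity"),
    (1000, "innovation"),
]

def capability_stage(total_score: float) -> str:
    if not 0 <= total_score <= 1000:
        raise ValueError("total_score must lie between 0 and 1000")
    for upper_bound, stage in STAGE_BOUNDS:
        if total_score <= upper_bound:
            return stage

# Pilot scores reported in Section 5.2:
for country, score in {"A": 373, "B": 541, "C": 563, "D": 737}.items():
    print(country, score, capability_stage(score))
```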

5. Results: validation by pilot implementation in four countries

5.1. Profile of four countries

This section briefly presents the overview and key results of the pilot implementation, which was conducted in four countries in order to improve and verify the NSCAF. In the interest of simplicity, the feedback from the pilot implementation has already been incorporated into the final NSCAF presented in the previous section. The participating countries could not simply be selected to suit the convenience of the framework development team: each had to be willing to conduct weeks of pre-assessment and to have the resources to host a few days of face-to-face meetings for the actual assessment, an undertaking that usually requires a hard decision by senior executives of the participating institutions. During the pilot implementation period, all four countries were able to participate.

Table 5 highlights the basic data for the four countries that participated in the pilot implementation. A country is a least developed country by the definition of the United Nations (UN-OHRLLS, 2012): low gross domestic product (GDP), low income, and low exports. B country and C country are both developing countries, but their socio-economic statistics differ in some respects: while B country has a higher GDP per capita than C country, C country has a larger population and higher GDP and export volumes than B country. D country is a newly industrialised nation with high GDP and high income. Both B and D countries have high levels of goods exports as a percentage of GDP. Pilot assessments were conducted first as a self-assessment by the assessed country and then as a face-to-face main assessment by the framework development expert panel.

Table 5. Profile of four participating countries involved in pilot implementation.a

5.2. Assessment results for the three SCaM pillars

The results of the NSCAF assessment can be presented in different forms depending on the purposes of the analysis. Figure 5 shows the assessment results of the pilot implementation by total score and by the sub-total for each SCaM pillar. These scores indicate that the capabilities of the four countries are at different development stages, in total and within each pillar.

Figure 5. Pilot implementation results based on the three SCaM pillars.
The ranking of the overall scores of the four countries does not always match the ranking within each pillar.

In total, the results of this study indicate that A country, with 373 points, was positioned at the establishing stage; B country, with 541 points, and C country, with 563 points, were at the same level, the growing stage; and D country, with 737 points, was at the maturity stage. Generally, these results match the economic development levels described in Table 5. However, it is notable that B and C countries showed the same stage of overall national standards capability even though the GDP per capita of B country was almost three times that of C country (a gap of around US$3,000). A similar observation applies to the comparison of A and C countries, which are at different capability stages although their difference in GDP per capita is not large (around US$500). It is reasonable to assume that this variation is related, at least in part, to the countries' institutional development histories and to the gap in their export-to-GDP ratios: C country had the highest export-to-GDP ratio (78%), compared with A (33%), B (42%), and D (50%) countries, and C country also established its national standards law and institutions much earlier than the other two countries.

Specifically, these results suggest that the three SCaM pillars were developed with a different balance in each of the four countries. A country was assessed to have better capability in the metrology pillar than B country, while its overall capability, especially in the standardisation and conformity assessment pillars, was quite far behind B country. Among A, B, and C countries, B country had the strongest capability in the conformity assessment pillar, while C country had the highest capability in the standardisation and metrology pillars. D country was assessed to have superior capability to the other three countries in all SCaM pillars, especially conformity assessment. These results suggest that the three SCaM pillars of any nation may develop with a different balance owing to different socio-economic, technical, and cultural contexts.

5.3. Results for the seven assessment categories

The results of the NSCAF assessment can also be presented by the seven categories to show which capability categories in a country are well developed or under-developed. Figure 6 illustrates the standards capabilities of the four countries under review by the seven categories in a heptangular radar chart.

Figure 6. Pilot implementation results using the seven categories. (a) Overall SCaM capability. (b) Standardisation capability. (c) Conformity assessment capability. (d) Metrology capability.
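For readers who wish to reproduce this kind of presentation, the sketch below draws a heptangular radar chart of per-category scores with matplotlib; the category labels follow the NSCAF, but the plotted values are invented placeholders rather than the pilot results.

```python
# Minimal matplotlib sketch of a heptangular radar chart comparing
# per-category scores for two countries; values are placeholders only.
import numpy as np
import matplotlib.pyplot as plt

categories = ["Laws/systems/institutions", "Strategies", "Stakeholders",
              "Human resources", "Infrastructure", "Process management", "Outcome"]
scores = {"A country": [2, 1, 2, 2, 3, 2, 2],   # illustrative values only
          "D country": [5, 4, 4, 3, 5, 4, 3]}

angles = np.linspace(0, 2 * np.pi, len(categories), endpoint=False).tolist()
angles += angles[:1]                      # close the polygon

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
for name, values in scores.items():
    ax.plot(angles, values + values[:1], label=name)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(categories, fontsize=7)
ax.legend(loc="lower right")
plt.show()
```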

Figure 6(a) shows the countries' overall capability using the seven assessment categories across all SCaM pillars. A country had the least capability in all assessment categories among the four countries, but it is equipped with a relatively good infrastructure, much closer to that of B and C countries. D country had superior capabilities to the other countries in general, although its human resources and outcome capabilities are relatively low compared with its other five categories. B and C countries were quite close in overall capability, with some ups and downs across most categories; it was noted that B country lagged in process management, while C country lagged in stakeholder engagement across all SCaM pillars.

Figure 6(b) shows capability in the standardisation pillar using the seven assessment categories. A country had the least capability among the four countries in all categories, with particularly limited capability in the second category, strategies and implementation plan. D country had superior capabilities to the other countries in general, although it showed levels similar to B or C country in certain categories, for example, the first category (laws, systems, and institutions) and the sixth category (process management). B and C countries showed almost the same capabilities, although there were small gaps in some assessment categories; in particular, B country had a superior infrastructure, while C country had superior human resources.

Figure 6(c) shows capability in the conformity assessment pillar using the seven assessment categories. In all categories, A country had the lowest capability and D country the highest among the four countries. The overall capability of B and C countries was quite comparable, but B country clearly demonstrated higher capability in stakeholder engagement and human resources development, while C country was stronger in infrastructure development.

Figure 6(d) shows capability in the metrology pillar using the seven assessment categories. D country clearly demonstrated greater capability in all categories than the other three countries. The comparison of those three countries produces a somewhat more complex picture: A country was assessed as most capable in terms of infrastructure and outcome; B country in laws, systems, and institutions, strategies and implementation plan, and stakeholder engagement; and C country in process management, human resource development, and infrastructure.

6. Summary and conclusions

No readily available framework has yet been offered for assessing a national standards system. The NSCAF proposed in this paper can serve as a foundation model for future theoretical development and for more pragmatic application to national development in the area of standards. In particular, the pilot implementation demonstrates that the NSCAF is useful for recognising overall development status and for distinguishing whether a standards system is developed in a balanced manner across the three SCaM pillars and the seven categories.

The NSCAF results can describe the detailed strengths and weaknesses of countries using either absolute points or a comparison with other countries across the seven categories. Overall, the results of the research indicate that the development stage of a national standards system may broadly match a country's industrial development stage, but that the detailed development of national standards systems will vary, as a whole or across the three SCaM pillars, depending on each country's industrial development and trade characteristics. The feedback on the NSCAF from the pilot implementation has already been incorporated into the framework described in the previous section.

Two years of development and pilot implementation in a limited number of countries, however, may not be enough to produce a fully mature framework, and there is still room for improvement. Some issues can be overcome by revising the framework itself, while others may require better transfer of assessment know-how to assessment teams and evaluators, as has been done for most TQM models. Regardless of these limitations, the NSCAF can be considered a promising theoretical framework that can be used to describe the complex components and dynamic interrelationships of a national standards system. In practice, the NSCAF can be beneficial not only for individual developing countries but also for international organisations or developed countries that seek to provide capacity building for developing countries. The NSCAF can also be useful for TQM scholars, as it adapts the TQM model to assess national standards capability and its innovation performance.

Future research may include in-depth case studies of one or more countries. The framework could be further stabilised by wide-ranging assessment implementation in countries with different population sizes, economies, import and export volumes, industrial structures, and overall industrial development statuses. The NSCAF assessment could also be developed for comparisons between countries not only in a cross-sectional study of a specific year but also in a longitudinal study over several years, so that the development trajectory of a national standards system can be analysed and studied precisely.

Acknowledgments

This research, which developed the National Standards Capability Assessment Framework (NSCAF), was part of the International Standards Cooperation Program (ISCP) project, funded by the Korean Agency for Technology and Standards (KATS) and operated by the Korean Standards Association (KSA). Special thanks are due to Dr Kap-Dong Park of the University of Science and Technology (UST) and Mr Byung-soo Yoon of the Korean Association of Standards and Testing Organizations (KASTO) for their contributions to developing the assessment criteria for the metrology section of the framework, and to Mr Jong-Yoon Jun of KATS and Ms Sunghyun Park of KSA for their support in enabling the authors to conduct this study.

References

  • Ahire, S.L. (1997). Management science – Total quality management interfaces: An integrative framework. Interfaces, 27, 91–105.
  • Alonso-Almeida, M.M., & Fuentes-Frías, V.G. (2012). International quality awards and excellence quality models around the world. A multidimensional analysis. Quality & Quantity, 46, 599–626.
  • Asif, M., & Searcy, C. (2014). Determining the key capabilities required for performance excellence in higher education. Total Quality Management & Business Excellence, 25, 22–35.
  • Blind, K. (2009). Standardisation: A catalyst for innovation, inaugural address series – Research in management. Rotterdam: Erasmus Research Institute of Management, Erasmus University Rotterdam.
  • Brenton, P., Edwards-Jones, G., & Jensen, M.F. (2009). Carbon labelling and low-income country exports: A review of the development issues. Development Policy Review, 27, 243–267.
  • Chen, N., & Novy, D. (2012). On the measurement of trade costs: Direct vs. indirect approaches to quantifying standards and technical regulations. World Trade Review, 11, 401–414.
  • Chien, C.-C., Lin, H.-C., & Lien, B.Y.-H. (in press). Capability contingent: The impact of organisational learning styles on innovation performance. Total Quality Management & Business Excellence, doi:10.1080/14783363.2012.746193
  • Chin, J.-B., & Tsai, C.-H. (2012). Developing a service quality evaluation model for luxurious restaurants in international hotel chains. Total Quality Management & Business Excellence, 24, 1160–1173.
  • Choi, D.G., Lee, H., & Sung, T. (2011). Research profiling for ‘standardization and innovation’. Scientometrics, 88, 259–278.
  • Choung, J.-Y., Hwang, H.-R., & Choi, J.-H. (2000). Transition of latecomer firms from technology users to technology generators: Korean semiconductor firms. World Development, 28, 962–982.
  • Choung, J.-Y., Ji, I., & Hameed, T. (2011). International standardization strategies of latecomers: The cases of Korean TPEG, T-DMB, and Binary CDMA. World Development, 39, 824–838.
  • Curry, L.A., Nembhard, I.M., & Bradley, E.H. (2009). Qualitative and mixed methods provide unique contributions to outcomes research. Circulation, 119, 1442–1452.
  • Dadfar, H., Dahlgaard, J.J., Brege, S., & Alamirhoor, A. (2013). Linkage between organisational innovation capability, product platform development and performance. Total Quality Management & Business Excellence, 24, 819–834.
  • Dahlgaard, J.J., Chen, C.-K., Jang, J.-Y., Banegas, L.A., & Dahlgaard-Park, S.M. (2013). Business excellence models: Limitations, reflections and further development. Total Quality Management & Business Excellence, 24, 519–538.
  • Dahlgaard-Park, S.M., Chen, C.-K., Jang, J.-Y., & Dahlgaard, J.J. (2013). Diagnosing and prognosticating the quality movement – A review on the 25 years quality literature (1987–2011). Total Quality Management & Business Excellence, 24, 1–18.
  • Denzin, N.K., & Lincoln, Y.S. (2005). The Sage handbook of qualitative research. Thousand Oaks: Sage Publications.
  • Dinu, V. (2008). Policies concerning SQAM services development in Romania. Amfiteatru Economic, 9, 33–39.
  • DIUS. (2009). The UK Government Public Policy Interest in Standardisation 2009. London: Department for Innovation, Universities and Skills.
  • DTI/DFID. (2003). Interdepartmental workshop on capacity building in standards, quality, accreditation and metrology (SQAM) sector. London: Department of Trade and Industry & Department for International Development.
  • Eden, S., Bear, C., & Walker, G. (2008). Understanding and (dis)trusting food assurance schemes: Consumer confidence and the ‘knowledge fix’. Journal of Rural Studies, 24, 1–14.
  • Farrell, J., & Saloner, G. (1985). Standardization, compatibility, and innovation. The RAND Journal of Economics, 16, 70–83.
  • Fotiadis, A.K., & Vassiliadis, C.A. (2013). The effects of a transfer to new premises on patients’ perceptions of service quality in a general hospital in Greece. Total Quality Management & Business Excellence, 24, 1022–1034.
  • Frota, M., Racine, J., Blanc, F., Rodrigues, P., Ibragimov, S., Torkhov, D., & Osavolyuk, S. (2010). Assessment of the Ukrainian quality infrastructure: Challenges imposed by the WTO and commitments to EU accession. Key Engineering Materials, 437, 611–615.
  • Galvin, P., & Rice, J. (2008). A case study of knowledge protection and diffusion for innovation: Managing knowledge in the mobile telephone industry. International Journal of Technology Management, 42, 426–438.
  • Geng, Y., Dong, H.J., Xue, B., & Fu, J. (2012). An overview of Chinese green building standards. Sustainable Development, 20, 211–221.
  • Guasch, J.L., Racine, J.-L., Sánchez, I., & Diop, M. (2007). Quality systems and standards for a competitive edge (Directions in Development). Washington, DC: World Bank Publications.
  • Heckelei, T., & Swinnen, J. (2012). Introduction to the special issue of the World Trade Review on ‘standards and non-tariff barriers in trade’. World Trade Review, 11, 353–355.
  • Henson, S., Masakure, O., & Cranfield, J. (2011). Do fresh produce exporters in sub-Saharan Africa benefit from GlobalGAP certification? World Development, 39, 375–386.
  • Humphrey, A. (2005). SWOT analysis for management consulting. SRI Alumni Association Newsletter, December, 7–8.
  • ISO. (2006). Metrology, standardization and conformity assessment: Building an infrastructure for sustainable development. Geneva: ISO Central Secretariat.
  • ISO. (2010). ISO action plan for developing countries 2011–2015. Geneva: ISO Central Secretariat.
  • ISO. (2011). Project for strengthening ISO members in developing countries at the institutional level (INS project) – position paper outline. Geneva: ISO Central Secretariat.
  • ISO, UNIDO. (2009). The Conformity Assessment Toolbox. Geneva: Author.
  • ITU. (2009). Bridging the standardization gap. ITU-T research project: Measuring and reducing the standards gap. Geneva: ITU.
  • van Iwaarden, J., & van der Valk, W. (2013). Controlling outsourced service delivery: Managing service quality in business service triads. Total Quality Management & Business Excellence, 24(9–10), 1046–1061.
  • Kano, S. (2000). Technical innovations, standardization and regional comparison – A case study in mobile communications. Telecommunications Policy, 24, 305–321.
  • Leavengood, S., Anderson, T.R., & Daim, T.U. (in press). Exploring linkage of quality management to innovation. Total Quality Management & Business Excellence, doi:10.1080/14783363.2012.738492
  • Lee, H., & Oh, S. (2008). The political economy of standards setting by newcomers: China's WAPI and South Korea's WIPI. Telecommunications Policy, 32, 662–671.
  • Maertens, M., Minten, B., & Swinnen, J. (2012). Modern food supply chains and development: Evidence from horticulture export sectors in Sub-Saharan Africa. Development Policy Review, 30, 473–497.
  • Neilson, J., & Pritchard, B. (2007). Green coffee? The contradictions of global sustainability initiatives from an Indian perspective. Development Policy Review, 25, 311–331.
  • NIST. (2011). The 2011–2012 criteria for performance excellence. Gaithersburg, MD: Author.
  • Park, J., Bahng, G.W., & Choi, J.O. (2010). The role of metrology communities under the WTO system: Measurement science and conformity assessment procedures. Accreditation and Quality Assurance: Journal for Quality, Comparability and Reliability in Chemical Measurement, 15, 445–450.
  • Ponte, S., Raakjær, J., & Campling, L. (2007). Swimming upstream: Market access for African fish exports in the context of WTO and EU negotiations and regulation. Development Policy Review, 25, 113–138.
  • Porter, M. (1990). The competitive advantage of nations. Harvard Business Review, 68, 73–93.
  • Rich, K.M., & Perry, B.D. (2011). Whither commodity-based trade? Development Policy Review, 29, 331–357.
  • Ruben, R., & Fort, R. (2012). The impact of fair trade certification for coffee farmers in Peru. World Development, 40, 570–582.
  • Rusjan, B. (2005). Usefulness of the EFQM excellence model: Theoretical explanation of some conceptual and methodological issues. Total Quality Management & Business Excellence, 16, 363–380.
  • Sanetra, C., & Marbán, R.M. (2007). The answer to the global quality challenge: A national quality infrastructure. Braunschweig: PTB, OEA and SIM.
  • Shiroyama, H. (2007). The harmonization of automobile environmental standards between Japan, the United States and Europe: The ‘depoliticizing strategy’ by industry and the dynamics between firms and governments in a transnational context. The Pacific Review, 20, 351–370.
  • Swann, G.M.P., & Britain, G. (2000). The economics of standardization: Final report for standards and technical regulations directorate. London: Department of Trade and Industry, Manchester Business School.
  • Swinnen, J., & Vandemoortele, T. (2012). Trade and the political economy of standards. World Trade Review, 11, 390–400.
  • UNIDO. (2003). Trade capacity building: Supply-side development, conformity and integration in the global market (UNIDO position paper). Industrial Development Forum and Associated Round Tables, UNIDO, Vienna, pp. 151–180.
  • UNIDO. (2010). UNIDO activities in the area of standards, metrology, testing and quality (SMTQ). Vienna: Author.
  • UN-OHRLLS. (2012). The criteria for the identification of the LDCs. Retrieved November 11, 2012, from http://www.unohrlls.org/en/ldc/25/.
  • Wang, M.-L. (2012). An evaluation of customer relationship management in hospital-based and privately run nursing homes in Taiwan. Total Quality Management & Business Excellence, 24(9–10), 1004–1021.
  • WTO. (1995). Agreement on technical barriers to trade. Geneva: Author. Retrieved September 8, 2012, from http://www.wto.org/english/docs_e/legal_e/17-tbt_e.htm
  • Xiaorong, N., Bojian, X., & Huili, Z. (2012). The application of total quality management in rural tourism in the context of new rural construction – The case in China. Total Quality Management & Business Excellence, 24(9–10), 1188–1201.