
Beyond outputs: pathways to symmetrical evaluations of university sustainable development partnerships

Pages 1-19 | Received 12 Dec 2013, Accepted 08 Jan 2015, Published online: 27 Feb 2015

Abstract

As the United Nations Decade of Education for Sustainable Development (2005–2014) draws to a close, it is timely to review the ways in which the sustainable development initiatives of higher education institutions have been, and can be, evaluated. In their efforts to document and assess collaborative sustainable development program outcomes and impacts, universities in the North and South are challenged by many of the same conundrums that confront development agencies. This article explores pathways to symmetrical evaluations of transnationally partnered research, curricula, and public-outreach initiatives specifically devoted to sustainable development. Drawing on an extensive literature and informed by international development experience, the authors present a novel framework for evaluating transnational higher education partnerships devoted to sustainable development that addresses design, management, capacity building, and institutional outreach. The framework is applied by assessing several full-term African higher education evaluation case studies with a view toward identifying key limitations and suggesting useful future symmetrical evaluation pathways. University participants in transnational sustainable development initiatives, and their supporting donors, would be well-served by utilizing an inclusive evaluation framework that is infused with principles of symmetry.

Obamba and Mwema (2009, 359–360) insightfully observed that ‘the field of international development has been broadened to include cooperation and partnerships in higher education and other knowledge-based sectors’. Around the same time, the higher education specialists who participated in the earlier two-round UNESCO-GUNI (Global University Network for Innovation) Delphi poll ranked ‘including the environmental, economic and social aspects of sustainability in the curricula of all students and in the institutional activity of universities’ as one of the top three measures that institutions of higher learning around the world should undertake to promote human and social development (for poll details, see Lobera 2008, 316; also de Haan, Bermann, and Leicht 2010, 200–201). As the United Nations Decade of Education for Sustainable Development (2005–2014) draws to a close, it is timely to review the ways in which the sustainable development initiatives of higher education institutions have been, and can be, evaluated.

Although university involvement in sustainable development activity has increased substantially, the extent to which higher education initiatives have influenced development outcomes is widely contested. In their efforts to understand and document sustainable development program impacts, universities are challenged by many of the same conundrums that confront donors and development agencies. Advancing the evaluation of academic programs devoted to sustainable development promises to further enhance our understanding of development processes and their implications for policy.

In this article, we explore pathways to enhanced evaluation of university sustainable development initiatives that are illuminated by international development experience. Our primary academic program interest is in university research, curricula, and public outreach focused on poverty, climate and environment, energy, and natural-resource management. At the center of attention is the transnational higher education partnership (THEP), an increasingly popular vehicle for collaborative North–South, South–South, and triangular (North–South–South) development undertakings (Koehn 2012a). While we seek insights with broad applicability to higher education institutions in the global South, our choice of evaluation methods, questions, and illustrations is especially influenced by the sub-Saharan African context. Among other advocates, the African Union has emphasized that the revitalization of Africa's universities to play their critical role in development ‘will require partnerships not only with local and regional actors and stakeholders, but also with the universities, businesses and governments of the developed world’ (NEPAD 2005, 21).

International initiatives, universities, and sustainable development

Since the mid-1970s, a series of international declarations that recognize the critical link between sustainable development and higher education have been endorsed and signed by universities around the world (for a comprehensive list spanning 1990–2012, see Tilbury 2013, 74–81). All relevant international declarations refer to the need to develop interdisciplinary and transdisciplinary research and public outreach initiatives and encourage tertiary-level institutions to cooperate proactively with each other and with other organizations (Wright 2004, 8–10, 13–17).

The 2012 Higher Education for Sustainability Initiative for Rio+20 embraces higher education's ample opportunities to advance sustainable development as ‘special responsibilities’. The overarching commitment accepted by the universities, associations, and student organizations based in more than 40 countries that have signed the Rio+20 Declaration is ‘to provide leadership on education for sustainable development’ (Sawahel 2012). Leadership is to be manifested by promoting development through transformed teaching and enhanced student capabilities, encouraging relevant research, disseminating knowledge, adopting sustainable practices, and building sustainable societies.

The environment–poverty nexus is at the core of sustainable development. Sustainable development encompasses improvements in living conditions, advances in equity and justice, and preservation of the ability of future generations in all world regions to meet their needs and realize their aspirations. Sustainable development ‘implies maintaining the capacity of ecological systems to support social and economic systems’ (Berkes, Colding, and Folke 2003, 2), and vice versa. While ‘ill-defined, politically contentious, manipulable, and at times contradictory’ (Roberts and Parks 2007, 223–224), sustainable development has emerged as a popular, adaptable, and encompassing guiding principle for academic programming and transnational collaboration (McFarlane and Ogazon 2011, 84–85). The vagueness of the construct also has resulted in the establishment of a widely diverse set of academic programs devoted to development studies and in special evaluation challenges.

There is widening consensus among scholars and policy makers that the world's persistent and emerging sustainable development challenges are increasingly complex, transcend borders, and necessitate efforts and resources that are not confined within the boundaries of a single country, organization, or discipline (Escrigas and Lobera 2009, 10). Of special concern in this article are THEPs with sustainable development project objectives. THEPs dedicated to sustainable development aim to pursue a specific research objective, academic program innovation, and/or public engagement project.

Key dimensions of comprehensive sustainable development evaluations

Crossley et al. (2005, 55) found that ‘few comprehensive and accessible accounts of international education development projects exist in the available literature’. Even fewer studies specifically evaluate transnational academic programs devoted to sustainable development. Holistic THEP evaluations that are comprehensive in scope and incorporate societal change are especially rare.

Evaluating THEPs with sustainable development objectives is an essential, albeit complex and controversial, undertaking. In the discussion that follows, we develop a novel framework for evaluating sustainable development education. We base our framework on the wider ‘theory-based evaluation’ approach (Funnell and Rogers 2011). It is grounded in the realist tradition, which emphasizes context and the non-linearity of outcomes (Pawson and Tilley 1997), and is informed by an extensive literature on education and development evaluations. That literature is diverse and fragmented, charged with promising ideas and approaches, and characterized by few areas of consensus. To provide practical guidance, we also draw upon field experience with evaluating international development programs.

Defining evaluation

Evaluation involves a rigorous, systematic, and evidence-based process of collecting, analyzing, and interpreting information to answer specific questions. It should assess what works, why, and in what context; highlight intended and unintended results; and provide strategic lessons to guide decision makers and inform stakeholders. Evaluation is carried out for several purposes, including accountability for results achieved and resources used, program improvement, and shared learning.

Academic and professional circles lack a widely accepted framework for evaluating higher education's links to sustainable development. Among other constraints, the process of articulating a useful framework is complicated by the politics of evaluation. As Mosse (2004) demonstrates with reference to a rural development project in India, ‘success’ and ‘failure’ can be policy-validating narratives, constructed primarily by powerful ‘interpretive communities’ composed of donors, project managers, and consultants, that obscure the actual reasons and contextual relationships responsible for outcomes and impacts.

To prepare the way for our tailored framework discussion, we first consider the narrow consensus on key dimensions of comprehensive evaluations. Next, with special attention to THEPs, we identify the limitations of prevailing approaches. Then, in the interest of advancing development evaluation methods and approaches without presuming to attain scientific certainty, we identify, in generic and adaptable terms, the defining characteristics of comprehensive and symmetrical THEP evaluations and elaborate our evaluation framework. The following section critically assesses several full-term African higher education evaluation case studies, with a view toward identifying key strengths and limitations and suggesting useful future pathways drawn from the comprehensive framework we elaborated.

Consensus dimensions

In the search for a framework that will illuminate sustainable development programming at higher academic institutions, we can start with an agreement that comprehensive evaluations need to cover inputs, processes, outputs, outcomes, and impacts (see Deardorff and van Gaalen 2012, 168–170). An output is a ‘tangible product (including services) of an intervention that is directly attributable to the initiative’. Outputs relate to the completion, rather than the conduct (process), of activities and are the type of results over which managers can exert the most influence. An outcome is the ‘actual or intended changes in development conditions that an intervention seeks to support’ (UNDP 2011). Usually, the contributions of several partners are required to achieve desired outcomes. In sustainable development evaluations, impact involves improvements in human well-being (Crossley et al. 2005, 38) and in the state of the environment. In his theoretical work on the evaluation of donor-supported development projects, Smith (2000, 209) points out that ‘some projects may achieve their objectives but not have any great impact on the community, while others may not achieve their objectives but nevertheless have a beneficial impact – possibly even greater than was foreseen by the original design’. This insight calls attention to the importance of focusing on how projects (here, higher education collaborations) work in a given context (Mosse 2004, 646).

In sustainable development evaluations, the ultimate question is whether the intervention being evaluated actually has contributed to changing trends in social and environmental conditions. Who is directly and indirectly better off and worse off (Thabrew, Wiek, and Ries 2009, 71), and why? It is difficult to determine attribution for higher education's ultimate impacts. Impacts typically are clouded when evaluated early, and specific contributions become more difficult to distinguish from other influences as one comes closer in time to observable ultimate impacts. Impact pathways are complex, time lags often are lengthy, and multiple intervening factors and actors can facilitate or hinder impact. Theory-based approaches offer one way of evaluating the results of sustainable development initiatives (Vaessen and Todd 2008, 232; Todd and Craig 2014).

The limited consensus on higher education partnership evaluation also holds that the most helpful assessments are ongoing and trigger remedial actions; the evaluation process itself should be broadly inclusive of project or program participants and stakeholders. It should explore gaps and complementary strengths (Klitgaard 2004, 51) while maintaining credibility and an adequate level of independence. Curricular, research, and community engagement initiatives should be subjected to integrated assessments rather than being evaluated independently (Yarime et al. 2012, 104). In addition, THEP evaluations should encompass individual (private) and social (collective) benefits and costs.

Asymmetries and limitations in prevailing donor-driven evaluations

Among multilateral and bilateral agencies, evaluation and monitoring are ‘now considered integral to development processes’ (Crossley et al. 2005, 37). However, most Northern donor-inspired evaluations are asymmetric in at least three respects. First, they are predominantly Northern-designed. Asymmetry arises from exclusive donor determination of the project indicators and baselines against which achievements are measured.

Second, Northern funders are prone to promote ‘ever-widening standardization’ of evaluation metrics internationally (Taylor 2008, 99; Neave 2012, 5–7). Pressure to borrow customized, externally determined indicators narrows opportunities to conduct contextually based and culturally responsive evaluations (Mebrahtu 2002, 510; Stokes et al. 2011, 168; Rosario Leon, cited in Eyben 2013, 5); it also reduces the likelihood that empowering processes will be set in motion and diverts attention from the complexity of interests, relationships, and institutional practices that shape how sustainable development is pursued through THEPs. Too often, therefore, asymmetrical ‘power dynamics influence whose and what knowledge counts and which results matter … ’ (Eyben 2013, 4). In other words, ‘policy … produces evidence rather than vice versa’ (Mosse 2004, 649).

Third, the asymmetrical nature of Northern-managed development evaluations is further revealed by the disproportionate attention paid to monitoring performance and assessing results only on the Southern side of THEPs (van den Berg and Feinstein 2009, 35). Costs and benefits to Northern higher education partners are largely ignored.

In the educational arena, many Northern-designed evaluations of the effectiveness of development assistance ‘concentrate on assessing the delivery of inputs rather than assessing the extent to which intended outcomes were actually achieved’ (Chapman and Moore 2010, 555, 557, 562; also Srivastava and Oh 2010). Education sector and program evaluations are even more problematic than specific project evaluations because they are ‘not developed, implemented, and evaluated through a careful research process with the involvement of the people they are purportedly designed to impact’ (Stoecker 2005, 65). These insights demonstrate the importance of devoting attention to the politics of higher education evaluations. Without civil society and higher education partner participation in determining the variables and indicators for evaluating sustainable development purposes and projects, evaluations will ‘remain largely shaped by what is easier to evaluate and quantify, driven by powerful stakeholders … who strongly prioritize knowledge and skills for economic growth and competitiveness … ’ (Singh 2007, 60).

In the development arena, approaches to evaluation are changing (see Eyben 2013, 27), with more emphasis on demonstrating results. In constructing a framework for evaluating academic programs dedicated to sustainable development, we draw upon developments in Africa as well as lessons from past donor-driven experience and critical studies of development evaluation approaches in published academic work.

Sustainable development partnership evaluations in higher education

Researchers, practitioners, and consultants are engaged in a search for relevant evaluation metrics in the fields of education and development (Walker et al. 2009, 571; Africa-U.S. Higher Education Initiative, n.d., 3). However, consensus on specific metrics has yet to take shape within either field. Higher education evaluations are conducted by ‘people with a range of different values and different views of what constitutes credible evidence’ (Ingram 2004, xvii). Academic sustainable development evaluation is further complicated by the challenges of working within three discrete realms of activity (research, development practice, and learning) and three levels of analysis (individual, organizational, and societal). Critical analysis of the ways in which the possibilities and constraints imposed by donors affect higher education partnership outcomes and impacts also needs to be embedded in the evaluation process (see Samoff and Carrol 2006).

In the absence of a holistic, outcome- and impact-focused framework for assessing sustainable development collaborations, higher education program evaluations tend to concentrate on near-term and readily quantifiable inputs and outputs – such as the number of partnerships entered, sustainable development courses offered, professionals trained, grant proposals generated, reports issued, and financial returns on investments (Chapman and Moore 2010, 557, 563; Deardorff and van Gaalen 2012, 167, 173; Mundy and Menashy 2012, 98). Existing assessment tools devote little attention to curriculum, teaching, anticipatory education (Sterling 2013, 28), capacity development, research impact, outreach, and poverty alleviation (Yarime and Tanaka 2012, 67, 74). One common approach to measuring international scientific collaboration uses co-authorship of published articles. However, co-authorship does not directly capture the complex dynamics, scope, or sustainable development effects of collaboration among partners and external stakeholders; some transnational-research collaborations do not result in co-authored publications but generate social transformation, while other co-authored publications entail no substantial research collaboration or development impact. As Fuller (2006, 369) has noted, we lack metrics that ‘present universities as producers of more than simply paper (i.e., academic publications, patents, and diplomas) in order to capture the full extent of their governance functions [and influence]’. The evaluation framework for assessing sustainable development education that we develop below addresses ‘the need to improve and strengthen the definition of key performance indicators … ’ (Cloete, Bailey, and Maassen 2011, xix). Furthermore, in recognition of the lagged-effects phenomenon (Schuller, Hammond, and Preston 2004, 188), our framework explicitly recognizes the need to ‘develop new indicators of impact that will assess longer term results’ of THEPs (Africa-U.S. Higher Education Initiative, n.d., 3–4).

The theory of change approach, which powerfully illuminates how and why activities shape outcomes, and under which conditions, provides insights that can be adapted in assessing academic programs devoted to sustainable development. By systematically identifying links among activities, context, and outcomes (Connell and Kubisch 1998, 36–38; Gambone 1998, 159–160),

[p]articipants engage in a process of theory generation in which they specify what outcomes they aim to achieve and how they expect their intended interventions to lead to their desired outcomes. They identify short- and medium-term goals and try to make explicit the links between them and long-term outcomes. (Boydell and Rugkasa 2007, 219)

Several useful analytical techniques draw on the theory of change approach. Contribution analysis verifies the theory of change on which a program or project is based and then analyzes the intervening factors that might influence intended outcomes, identifying the program's or project's particular contributions on the basis of reasonable levels of evidence (Mayne 2008). The most significant change technique involves collecting stakeholders' stories about the changes brought about by a project or intervention and identifying the most decisive ones (Davies and Dart 2005). Another assessment tool for illuminating lagged effects is the biographical approach, which allows learners to trace educational outcomes and impacts over ‘however long a period seems appropriate … ’ (Schuller, Hammond, and Preston 2004, 189).

Although sustainable development is predominantly contextual and place-based, facilitating and constraining forces beyond the local level remain influential (Elliott 2013, 305). In the context of THEP analysis, horizontal case studies engage multiple intra-institutional, stakeholder, and single-nation explorations. Vertical comparisons of transnational academic programs devoted to sustainable development involve tracing mutual influence across local, regional, national, and transnational levels (Bartlett and Vavrus 2009, 9–11) and include donor comparisons.

To yield meaningful insights, therefore, sustainable development education evaluations need to be broadened beyond externally imposed measures and global metrics. The cross-checking of findings through triangulated perspectives enriches evaluation as a meaningful explanatory exercise (Stern 2004, 38–39). A variety of forms of triangulation, including methodological, data, investigator, and theory triangulation, can be employed to increase confidence in sustainable development-partnership evaluations (Green and Tones 2010, 503).

In addition to inputs, objectives, outputs, outcomes, and impacts, evaluations of sustainable development-focused partnerships need to consider processes and pathways. Too often, ‘the dynamics of partnerships, both positive and negative, are underemphasized’ (Klitgaard 2004, 52). In particular, the ‘how’ of linkages among output and impact variables needs to be clarified (Vaessen and Todd 2008). Process evaluation features prominently in our tailored framework for evaluating sustainable development partnerships.

A symmetrical framework for academic sustainable-development evaluations

Our framework for academic sustainable development program evaluations builds upon three of the four main purposes identified by Stern (2004, 37–38) in his contribution ‘Evaluating Partnerships’. The three benchmarks that are salient for THEP evaluations are design, management, and development. We approach evaluation as design with respect to the mutually conceptualized and planned arrangement for the partnership and its projects. In the framework discussion that follows, evaluation as management focuses on ongoing evaluation and monitoring of progress and shortcomings in relation to partnership governance and operational dynamics. Evaluation as development treats higher education capacity development and institutional outreach. Embedded in these purposes are five key outcome and impact evaluation criteria adapted from an assessment of contributions to national development results (Uitto 2011, 478): (1) relation to sustainable development priorities; (2) extent of local (Southern university) ownership; (3) long-term contribution to capacity development; (4) evidence of cross-disciplinary and cross-institutional synergy; and (5) resource mobilization and partnership enhancement. The preferred evaluation methodology embedded in the framework incorporates quantitative and narrative data and values triangulation.
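
To illustrate how the framework might be operationalized, the following sketch (in Python) encodes the three evaluation purposes and the five outcome and impact criteria as a simple coverage checklist that flags criteria an evaluation left unaddressed. It is purely a hypothetical rendering of the structure described above, not an instrument drawn from any evaluation discussed in this article, and all identifiers are our own.

# A minimal, hypothetical sketch: the framework's three purposes and the
# five outcome/impact criteria (adapted from Uitto 2011) as a checklist.
PURPOSES = ("design", "management", "development")
CRITERIA = (
    "relation to sustainable development priorities",
    "extent of local (Southern university) ownership",
    "long-term contribution to capacity development",
    "cross-disciplinary and cross-institutional synergy",
    "resource mobilization and partnership enhancement",
)

def coverage_gaps(addressed):
    """For each purpose, list the criteria an evaluation left unaddressed."""
    return {p: [c for c in CRITERIA if c not in addressed.get(p, ())]
            for p in PURPOSES}

# Usage: an invented example in which 'design' covered only two criteria.
example = {"design": CRITERIA[:2], "management": CRITERIA[4:]}
for purpose, gaps in coverage_gaps(example).items():
    print(f"{purpose}: {len(gaps)} criteria unaddressed")

Such a checklist reading of the framework underscores its intent: symmetry requires that all five criteria be examined under each purpose, not only those that are easiest to quantify.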

Evaluation as design

A useful starting place in assessing the symmetry of transnational higher education collaboration for sustainable development focuses on partnership and project design. Stern (2004, 33) cautions that ‘partnerships do not always work’ and Stone (2004, 156) maintains that ‘partnerships fail as often as they succeed’. Asymmetry often figures prominently in transnational partnership conflicts and failures (Samoff and Carrol 2004, 151; Stern 2004, 33). Many of the design questions embedded in our framework are drawn from reports on the experience of African universities in transnational partnerships supported by donor funding (also see Koehn and Obamba 2014).

The first evaluative criterion to be applied in assessing partnership design should specifically consider whether or not the expectations and justifications advanced for engaging in collaboration outweighed the arguments in favor of unilateral implementation (Catley-Carlson 2004, 22). Evaluation questions (Table 1) should focus on inclusiveness, transparency (Stone 2004, 157), internal academic legitimacy (Calder and Clugston 2004, 258), and a shared vision among partners and community stakeholders of what the partnership aimed to achieve.

Table 1. Questions that should be asked for evaluation as design.

Additional questions regarding THEP design would assess the extent to which sustainable development processes are addressed (Shriberg 2004, 78) and partnership objectives are linked with national sustainable development priorities (Agunias and Newland 2012, 63). Evaluation as design also should assess the extent to which partners possessed a clear and accurate understanding of the expectations and contributions of each collaborator (Catley-Carlson 2004, 21; Samoff and Carrol 2004, 130; Chapman and Moore 2010, 563). Other design questions should focus on the incentives and rewards integrated into partnership plans (Catley-Carlson 2004, 24; Klitgaard 2004, 48; Shriberg 2004, 78; Stern 2004, 35).

Also important are queries about the feasibility and flexibility of the timeframes adopted for project inputs, outputs, outcomes, and impact. The key guideline here is to remain context-sensitive and not bound to rigid ‘Western notions of efficiency’ (Crossley et al. 2005, 97). Further, ‘too close an identification with project terms of reference and Logical Frameworks’ can stifle academic opportunities to respond to unexpectedly promising directions and to reap unintended benefits (Crossley et al. 2005, 106).

Partnership designs further need to be evaluated on the basis of their incorporation of ethical principles that are consistent with contextually determined sustainable development objectives. In this connection, we suggest consideration of the sustainability ethics justified and adapted to evaluation approaches by Fredericks (2014). Of particular value in THEP sustainable development initiatives are demonstrated respect for farsightedness and the anticipatory rights of future generations, equity and justice in burdens and benefits, adaptability, careful use, cooperation, and feasible idealism (see Fredericks 2014, 98–109).

The design of partnership governance, including endorsement by partner university leaders (Calder and Clugston 2004, 256), also requires evaluation. The cooperation of stakeholders (Stern 2004, 31; Walsh and Kahn 2010, 39), the distribution of project responsibilities (Stern 2004, 32), and power relationships (Green 2013) should receive attention as part of the evaluation of design.

Evaluation as management

In THEP evaluations of management (operational) dynamics, evaluators should address questions (Table 2) related to institutional commitment (Calder and Clugston 2004, 256) and stakeholder involvement (Stern 2004, 31; Thabrew, Wiek, and Ries 2009, 68, 74; Eyben 2013, 14). As a partnership evaluation tool, social mapping can illuminate degrees of participation, uncover central nodes, and reveal changes over time in the extent of collaboration (Walsh and Kahn 2010, 67).

Table 2. Questions that should be asked for evaluation as management.

Mutual trust is a defining aspect of partnership symmetry. Trust has interpersonal, inter-group, and inter-institutional dimensions (Schuller and Desjardins 2007, 70). Evaluation questions would revolve around conflict management (Stern 2004, 32; Boydell and Rugkasa 2007, 224). Major subjective differences in levels of trust among university partners suggest problems associated with asymmetric management behavior.

Face-to-face and virtual visits and meetings are important partnership lubricants. Thus, evaluation must pursue questions related to visitation and meeting arrangements, dynamics, productivity (King 2009, 44; Bailey 2010, 44; Walsh and Kahn 2010, 39), and follow-up actions (Gedde 2009, 35).

Other important questions pertain to inclusiveness and participation (Catley-Carlson 2004, 22–23). Budgeting constitutes a key dimension of symmetrical-partnership management that should be evaluated alongside linked aspects of academic management (Smith 2000, 216; Catley-Carlson 2004, 25–26; Stern 2004, 31). Also meriting attention are the interrelations of university actors with local and domestic communities and enterprises (Morfit and Gore 2009, 16) and the key interests and forces driving or blocking proposed changes (Green 2013).

Management of collaborative academic programs devoted to sustainable development includes monitoring to ensure that the overall aims of the THEP are ‘still synergized’ (Wanni, Hinz, and Day 2010, 58) and that participants are utilizing key data related to performance indicators (Cloete, Bailey, and Maassen 2011, xix). Continuous monitoring provides managers and stakeholders with regular feedback on the consistency or discrepancy between planned and actual activities and on the internal and external factors affecting results (UNDP 2011). Monitoring implementation activities provides an early indication of the likelihood that expected results will be attained and offers an opportunity to validate program theory and hypotheses and to make necessary changes in activities and approaches.

It is important to know whether THEP managers have conducted agreed-upon monitoring and evaluation (M&E) exercises at regular intervals (Wanni, Hinz, and Day 2010, 58). Using simple terms and methods, participatory evaluation assesses the perceived extent of stakeholders' participation in, and influence on, a particular project and promotes community empowerment and leadership accountability (Stokes et al. 2011, 168; Jilke 2013) by addressing issues of contextual relevance, learning from pathways to local expertise, and identifying appropriate development approaches (Koehn 1990, 191–223; Crossley et al. 2005, 39–40; Boydell and Rugkasa 2007, 223; Jost et al. 2014). Participatory evaluation continuously taps stakeholder perspectives on project impact and local outcomes (Oakley 1991, 263–266) and facilitates continuous learning, ongoing process improvements, and willingness to implement changes (Stokes et al. 2011, 168). Evaluation questions should address the collection and analysis of data (Walsh and Kahn 2010, 67) and whether managers regularly disclosed progress, failures, and successes (Calder and Clugston 2004, 258).

Questions pertaining to the initially perceived benefits of and priorities for transnational collaboration and the tradeoffs among incentives can be particularly illuminating (Ingram 2004, xviii–xix; Postiglione and Chapman 2010, 379). Also revealing are the results of inquiries regarding whether all higher education partners have refined and improved project components, strategies, and symmetrical process arrangements (Wanni, Hinz, and Day 2010, 58) based on feedback and reflection on priorities (Stern 2004, 38; Walsh and Kahn 2010, 66).

Other essential management evaluation responsibilities involve accountability to external stakeholders and responsiveness to community and country development priorities (Tikly 2011, 10). Optimal impacts reflect community-identified and stakeholder-defined needs (Nordtveit 2010, 112), are ‘appropriate to the local situation (for example, in terms of technology)’, are sustainable over the foreseeable future, and avoid imposing new and onerous financial burdens or other negative side effects (Smith 2000, 216). Sustainable development evaluations should assess partnership support to public agencies, NGOs, and private firms and THEP improvements in civic service to local communities (Morfit and Gore 2009, 16).

Addressing means as well as ends is necessary in holistic evaluations. THEP sustainability objectives are advanced to the extent that evaluation processes ‘help beneficiaries to formulate their own development strategies, encourage ownership and commitment, and help create a development consensus … ’ (Stern 2004, 39). Missing from higher education evaluations that are reduced to tracking outputs or cost–benefit analyses are such important considerations as participants' willingness to take on risk and pursue innovative approaches. Did research project participants and curriculum builders ‘welcome serendipity and unexpected developments’ (Austin and Foxcroft 2011, 130; also Beretz 2012, 144–147)?

Evaluation as developing mutual capacity for sustainable development

Another defining dimension of transnational academic collaboration involves capacity development. In this connection, symmetrical partnership evaluations regularly assess both institutional capacity and human capabilities and report identified shortcomings for action by all THEP participants. The overriding capacity evaluation question is: ‘Have outputs enhanced partner conditions [outcomes] that are likely to promote sustainable development?’

Institutional-capacity assessment

Symmetrical evaluations look for evidence regarding key dimensions of university institutional capacity (Catley-Carlson 2004, 24; Calleson 2005, 319–320; King 2009, 33; Yarime et al. 2012, 108). The THEP evaluation framework's critical sustainable development capacity questions are set forth in Table 3.

Table 3. Questions that should be asked for institutional-capacity assessment.

Assessing new institutional capacity for sustainable development education begins with the teaching function. Sustainable development evaluations should explore cross-disciplinary integration as well as the extent to which partners have incorporated indigenous knowledge, ways of learning, and insights (Thaman 2006, 181). Did partners include the views and suggestions of interested stakeholders – including government personnel, prospective employers, professional associations, donors, community leaders, faculty members, university administrators, and students (Bloom 2003, 147)? Other valuable indicators of THEP institutional capacity building center on the involvement of participating higher education institutions and external stakeholders in project- and instruction-related research and development activities (Calder and Clugston 2004, 257; King 2009, 44; Yarime and Tanaka 2012, 74).

Assessments of research impact, McMahon (2009, 256) alerts us, should include its indirect professional and quality-of-life influence on the researcher's students, their students, and the graduate's contributions to society's needs. Stone (2004, 155) contends that ‘conventional indicators of academic excellence (such as academic citations and scholarly peer reviews) will not suffice, since they do not reliably indicate policy relevance or impact’. In contrast to remote academic publications, influence over national policy has long-term multiplier effects. According to Court (2008, 107), assessments should strive to secure (1) ‘clear documentation of the practice, quality, and developmental relevance of research partnerships’; (2) ‘detailed examples of success, and particularly failure’; and (3) ‘more assessment from the South’. Court (2008, 107) recommends that evaluators provide in-depth case studies that analyze research project successes and failures and discover what has and has not worked (also Vromen 2010, 256–258).

In THEPs devoted to sustainable development that aspire to be symmetrical, institutional capacity should be judged, in part, on the willingness of faculty and administrators to place a premium on social justice and advocacy for those most in need. Thus, evaluators explore the extent to which learning opportunities are equitable across participating institutions and benefits are extended to persons who lack access to higher education (Walker et al. 2009, 567). Further, they look for evidence of increased faculty and staff contributions to glocal sustainable development undertakings, involvement in development policy circles (Morfit and Gore 2009, 16), and action based on adaptive learning (Eyben 2013, 14).

Another important, but less frequently attended to, dimension of institutional capacity assessment involves the extent to which progress is achieved in building monitoring and evaluation capacity at participating universities (Crossley and Bennett 1997, 222–223; Schuller, Hammond, and Preston 2004, 192; Pain 2009, 111). Appropriate M&E methods include exit interviews with students, written course and experiential learning evaluations, stakeholder interviews, and open discussions aimed at improving various program features. Institutional capacity assessment also involves attentiveness to funding gaps, excessive time demands, personnel transfers (Crossley and Bennett 1997, 240), or other barriers that are holding back progress in governance, management, collaborative research, and/or public engagement (Stern 2004, 31–32; McLean and Walker 2011).

In sum, symmetrical institutional capacity-building evaluations consider multiple factors. The selected indicators must be relevant to the partnership's sustainable-development objectives and integrated into a holistic THEP evaluation plan and methodology (DAC 2007, 61).

Human-capabilities assessment

THEPs provide opportunities for developing the personal competencies of Northern and Southern participants simultaneously; relevant evaluation questions are set forth in Table 4. Human-capabilities evaluation includes such matters as evidence of improvement in research capabilities and resource access as well as changes in perceptions of the value of transnational collaboration with Southern colleagues (Morfit and Gore 2009, 16). Outcome inquiries should focus on the transformation of curricula, course syllabi, internships, and service-learning opportunities (Holm and Malete 2010, 6), on how sustainable development concepts are engaged by students across the core curriculum (Shriberg 2004, 83; Tilbury 2004, 98, 104; McFarlane and Ogazon 2011, 100; Sawahel 2012), and on the preparation of culturally responsive evaluators (Stokes et al. 2011, 168).

Table 4. Questions that should be asked for human-capabilities assessment.

Rather than relying on numerical-output indicators such as the attainment of qualifications, human-capabilities evaluation should consider impact on the graduate's transnational competencies (Schuller and Desjardins 2007, 41; Koehn and Rosenau 2010) and career motivations. This approach requires baseline and near-graduation assessments as well as attention to asset building and long-term outcomes (Colclough 2012, 2). Thus, the linked technical and interpersonal performance of graduated practitioners needs to be evaluated periodically from multiple perspectives, using a variety of methods (Frenk et al. 2010, 1943), by socioculturally diverse observers, collaborators, and community members. When assessing the behavioral competence of graduates, the extent of demonstrated improvement from the initial starting point (‘added value’) and remaining shortcomings (Frenk et al. 2010, 1943), rather than ‘highest score’ or complete mastery, constitute pivotal components of programmatic evaluations (see Jamil Salmi, cited in Marshall 2011).

In 2009, UNESCO identified skills as an educational priority (see McGrath 2010). As a predominantly skill-based initiative, authentic transnational-competence (TC) assessment focuses on behavioral demonstrations of skill-development expectations rather than on internal facilitators such as personal knowledge acquisition and attitudes. In the analytic realm, for instance, graduating practitioners should recognize contextually relevant transboundary connections with sustainable development (Virtanen 2010, 233), identify short-term and long-term tradeoffs among interdependent factors, and be able to explain how remote events and trends affect local conditions and the processes through which local actions exacerbate or ameliorate geographically and temporally distant conditions. Functionally skilled graduates should demonstrate the ability to leverage linking knowledge, critical reflection, ‘future thinking’ (Virtanen 2010, 234–235, 238–239), innovation, advocacy, and the other generic TC skills ‘into usable and accessible solutions’ to specific sustainable development challenges (Wamae 2011; also Tilbury 2004, 105; Koehn and Rosenau 2010, Chap. 9).

In THEPs devoted to research and sustainable development, participants and donors expect education to matter for short-term and long-term societal outcomes. Thus, sustainable development education should enhance ‘the agency and capabilities of individuals’ (Schuller and Desjardins 2007, 59). THEP evaluators are interested in whether university-graduated practitioners ‘actually do exercise their professional capabilities in ways that further social transformation’ (Walker et al. 2009, 568) and promote sustainable development (Wiek, Withycombe, and Redman 2011, 214). Eliciting the perceptions of poor and marginalized community members should be incorporated as a critical component of development-practitioner-competency evaluations (Jeffery 2012, 172–174). Ongoing community-based evaluations of practitioner impact should integrate multiple data collection methods, including pre- and post-project needs-assessment exercises, structured and semi-structured interviews, local government records and reports, focus group discussions (where culturally appropriate), periods of observation, and analysis of personal and institutional life histories (see Jeffery 2012, 172–174).

Moreover, evaluations of human capability building need to address the long-term societal impacts of education and training initiatives in a convincing fashion. In our era of brain drain and brain circulation, societal impact analysis should include tracer studies that explore graduates' sustainable development contributions in both their countries of origin and receiving countries. Carefully documented longitudinal impact case studies that incorporate a justified multiplier for comparable situations are useful in assessing long-term societal impacts. Demonstrated commitment to the training of trainers also is relevant in assessing the cumulative sustainable development impact of human-capability interventions (Gedde 2009, 35).

Development can be seen as a ‘process of expanding the real freedoms that people enjoy’ (Sen 1999, 3). In addition to skill-based assessments, therefore, evaluators need to ask whether the transnational partnership is enabling participating students in both South and North to maximize their freedoms ‘as human personalities, as confident citizens of their countries, as empowered members of their communities, and as informed “global citizens” entering debates beyond their national borders’ (Singh 2007, 76).

Comprehensive assessments

Joining the three domains of our evaluation framework offers a unique and holistic perspective on THEP impacts and outcomes. At the summative point, evaluators integrate assessment results related to all projects and identify the measures and process indicators by which projects have succeeded and failed (Catley-Carlson 2004, 26; Walsh and Kahn 2010, 69). In the interest of symmetry, one expects to encounter mutual, although not identical, benefits in partnership design, partnership management, and partner institutional capacity and human capability building. Research and evaluation findings suggest that a sense of joint ownership among partnered universities and communities is likely to be associated with favorable THEP outcomes (Fukuda-Parr, Lopes, and Malik 2002, 14).

We agree with Wanni, Hinz, and Day (2010, 58) that ‘evaluation of the partnership itself, not just of outputs and deliverables, has to be built into the partnership’. THEP evaluators seek to discover change-promoting and change-resisting factors and forces (Boydell and Rugkasa 2007, 225; Nordtveit 2010, 111). Klitgaard (2004, 54) recommends documentation of outrageous partnership success stories and of outrageous failures (also Chapman and Moore 2010, 563).

Table 5 presents some key comprehensive evaluation questions. Included here are inquiries into how the THEP has morphed over time (Wanni, Hinz, and Day 2010, 62), whether crucial project elements and management changes have become institutionalized (Stern 2004, 31; Morfit and Gore 2009, 16), and whether other players remain committed (Morfit and Gore 2009, 16). Key overall assessment indicators involve the extent to which the partnership has strengthened M&E capacity among all partners ‘in ways consistent with the principles of sustainable development’ (Crossley et al. 2005, 44) and has demonstrated long-term capability and capacity building (Pain 2009, 95).

Table 5. Questions that should be asked for comprehensive assessments.

Comprehensive evaluations also should assess whether durable relationships have been built on the basis of ‘friendship, trust, and mutual respect’ (Holm and Malete 2010, 11) and whether collaborative research and development activities among the partners are continuing (or will continue) beyond the termination of external funding (Catley-Carlson 2004, 21; King 2009, 35; Morfit and Gore 2009, 16). Other key sustainability variables include plans for specific future collaborations (Klitgaard 2004, 46, 51; Wanni, Hinz, and Day 2010, 59) and the demonstrated ability of the partners to leverage additional funding from external sources (Boydell and Rugkasa 2007, 223). For instance, when external funding to revitalize the engineering curriculum at the University of Malawi expired without sufficient capacity in place to implement the new curriculum, the UK partner, Leeds Met University, stepped in by seconding a faculty member from its staff and funding a conference in Malawi that led to a successful follow-up grant that sustained the partnership (Wanni, Hinz, and Day 2010, 35).

Comprehensive THEP sustainable development assessments also address benefits and costs (both intended and unintended, tangible and intangible). In his review of development partnership evaluations, Klitgaard (2004, 45, 47, 51, 52) finds that costs, including opportunity costs, are ‘downplayed’ and that some of the ‘most important’ benefits are ignored (also see Sutton, Egginton, and Favela 2012, 159–160).

Another indicator of partnership impact needs to be introduced at this stage: the ability of both Northern and Southern universities to maintain legitimacy with their core constituencies. Stern (2004, 36) warns that ‘the consequence of partnerships not managing … balance between constructing distinctive understandings and visions and remaining in touch with their natural hinterland is loss of “reach” … [and] reduced ability to carry with them a wider constituency … .’

African partnership case studies

In this section, we draw upon published case studies of higher education partnerships involving African universities to demonstrate the utility of the symmetrical-evaluation approach. Specifically, we highlight the strengths and weaknesses of several recent VLIR-UOS evaluation processes and suggest symmetry-based advancements based on the framework elaborated above.

An organization of Flemish universities, VLIR-UOS supports partnerships between universities in Flanders, Belgium, and universities in developing countries, with the explicit objective of seeking innovative responses to global and local challenges. The overall objective of its institutional university cooperation (IUC) program is to empower the Southern university as an institution to fulfill its role as a development actor in society.

VLIR-UOS uses a systematic approach to evaluating cooperation at the university level that includes conducting both mid-term and final evaluations. Of the nine evaluations of university collaboration it has conducted in sub-Saharan Africa, we focus on three final evaluations of comprehensive research and development partnerships. These evaluations covered partnerships with Mekelle University in Ethiopia (van Baren and Alemayehu 2013), the University of Nairobi in Kenya (de Nooijer and Abagi 2009), and the University of the Western Cape in South Africa (Vander Weyden and Livni 2014). The three full-term evaluations had almost identical objectives and similar scope.

The evaluation methodologies mainly involved qualitative inquiries, including document analysis, interviews, debriefing meetings, and visits to project sites. The evaluations used standard criteria of quality, efficiency, effectiveness, impact, development relevance, and sustainability, which were applied to the key results areas (research, teaching, extension and outreach, management, human resources development, infrastructure, and mobilization of additional resources). Evaluators then scored each IUC partnership on a five-point scale in order to judge the results in quantitative terms and to evaluate the performance of projects.
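
To make the mechanics of this scoring step concrete, the sketch below (in Python, matching the earlier sketch) tabulates hypothetical five-point scores for two of the key results areas against the six standard criteria and reports an unweighted mean per area. The scores and the simple averaging rule are our own invention for illustration; the actual VLIR-UOS instruments and any weightings are not reproduced here.

# Hypothetical illustration of the scoring described above: each key
# results area receives a 1-5 score on each standard criterion.
# All scores below are invented; no actual evaluation data are shown.
CRITERIA = ["quality", "efficiency", "effectiveness",
            "impact", "development relevance", "sustainability"]

scores = {  # results area -> criterion -> five-point score (invented)
    "research": {"quality": 4, "efficiency": 3, "effectiveness": 4,
                 "impact": 3, "development relevance": 4, "sustainability": 2},
    "teaching": {"quality": 4, "efficiency": 4, "effectiveness": 3,
                 "impact": 3, "development relevance": 3, "sustainability": 3},
}

for area, row in scores.items():
    mean = sum(row[c] for c in CRITERIA) / len(CRITERIA)
    print(f"{area}: mean score {mean:.1f} of 5")

As the discussion below suggests, however, such quantitative summaries capture outputs far more readily than outcomes or societal impacts.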

Although the evaluations were tailored to the specific cases, all of them accounted for traditional collaboration outputs, including the generation and strengthening of academic research, published research papers, numbers of graduates, and curriculum development. For instance, the evaluation of the IUC with Mekelle University found that the program had established a research culture in an institution that had been highly teaching-driven and that the teaching program had been strengthened by the introduction of new curricula and the integration of research findings. The University of the Western Cape evaluation detected an ‘incredible transformation’ from a teaching university to a research-based academic university with high academic impact (largely based on citation counts).

The reviewed evaluations also attempted to assess the societal impact of the programs through the links established with government authorities and with communities for extension and outreach. Here, evaluators typically relied upon impressionistic observations. In the Mekelle case, for instance, the evaluation team reported observing positive examples of the implementation of research results for improved livelihoods, transfer of techniques, and support in marketing, which enabled communities to improve the management of ground and surface water resources and micro dams, facilitated the cultivation of apples, and increased incomes.

Although the three cases involved serious efforts to conduct comprehensive THEP evaluations, we can observe important limitations. For instance, making broad claims about the transformation of the universities from teaching-based to research-oriented ones based on the numbers of graduates, papers, and citations alone is not justifiable. At minimum, such outcome-based evaluations need to include measures of relevance to local sustainable development needs. Assessing community-level changes in resource management and livelihoods and attributing them to the IUC program in question is also impossible without the application of a comprehensive framework. As recognized in the South Africa evaluation, it proved difficult to analyze actual impacts within the limited framework of the evaluation (Vander Weyden and Livni 2014, 21).

Process shortcomings occurred as well. The VLIR-UOS-funded partnerships adopted a structured, generic, and bureaucratic approach to partnership initiation. In an earlier VLIR-UOS final evaluation of the IUC with the University of Zambia, the authors acknowledge that

an evaluation mission of 6 working days to assess a 10-year programme almost 1 year after it has come to an end is bound to be affected by lack of time to fully grasp the evolution of the programme and its constituent projects. (de Nooijer and Siakanomba 2008, 7)

The VLIR-UOS evaluations also demonstrated certain asymmetries, starting with the fact that M&E of the IUC programs was explicitly the responsibility of the Northern partner. Other evaluation weaknesses are attributable to the failure to take into consideration the presence or absence of symmetry in ‘evaluation as design’. Examples of fruitful initial design questions in our comprehensive framework that were not addressed include: ‘how symmetrical was participation in the process of establishing governance arrangements?’ and ‘did the THEP involve all core and periphery stakeholders who needed to cooperate in planning and project implementation?’

A number of unasked questions regarding ‘evaluation as management’ would have provided a depth of understanding regarding drivers of, and barriers to, change. These questions include: ‘did top higher education managers concentrate on principal partnership objectives and sustainable development needs, or were they sidetracked by competing interests?’ ‘what was the degree of each academic stakeholder's involvement in overseeing project management?’ and ‘were partner budgets equitably distributed according to agreed-upon responsibilities?’ The VLIR-UOS partnerships granted African researchers holding PhD degrees substantive autonomy to formulate their own research projects in ways that are relevant to local development challenges. However, none of the projects proposed by the African researchers could materialize without the approval and conceptual support of the senior Northern researchers (also see Barrett, Crossley, and Dachi's 2011 study of the EdQual research partnership between two UK and four African universities). In contrast, and in line with our framework recommendations, every research project in Kenya's Moi University–Indiana University–Purdue University Indianapolis AMPATH partnership program entailed joint leadership by Northern and Southern researchers (Koehn and Obamba 2014, 186). This management practice offers a promising pathway for improving VLIR-UOS THEP implementation and for designing future partnerships.

Further, one THEP feature intended to promote financial autonomy and mutual symmetry is the practice of decentralizing financial management to African partner institutions. This strategy potentially provides opportunities to strengthen capacity for financial management and technology transfer. In the VLIR-UOS partnerships, however, financial decentralization co-existed with a tight, rigid, and asymmetric regime of budgetary rules determined by the Belgian partners and donors. It is important that evaluations capture such asymmetric relationships so that they can be corrected in future program design.

Among the unasked evaluation questions in our framework that would reveal the extent of institutional capacity and human-capability development are: ‘what evidence is there that policy makers have officially recognized the social and economic benefits of specific sustainable development approaches and practices as a result of THEP activities?’ and ‘how have program graduates exercised professional capabilities and transnational competence in ways that further sustainable development?’ Filling these and other capacity-building information gaps would enable evaluators to address outcomes in greater depth.

Viewing these three VLIR-UOS evaluations in the context of our comprehensive framework for assessing North–South university partnerships revealed existing asymmetries. Some of the most important asymmetries identified for discussion here relate to financial management and accountability. The potential for fiscal mismanagement and corruption remains a legitimate concern; if realized, it would undermine other partnership aims and tarnish accomplishments. Within an overall context of fiscal decentralization and symmetry-enhancement pathways, we learned that THEPs would be particularly well-served by devoting further attention to building North–South trust through enhanced transparency, budgetary flexibility, financial management capacity development, and the progressive removal of Northern constraints.

Symmetrical implementation

We recognize the complexities and time/resource demands (Brown Citation1998, 101, 106–107; Gambone Citation1998, 150, 155, 161) involved in efforts to link academic program activities to social and development outcomes and impacts, and we do not presume to identify perfectly reliable, politically neutral, or complete evaluation methodologies. Our more modest objective is to identify a flexible and adaptable approach that illuminates the processes connected to outcomes and impacts in complex and dynamic contexts. Progress in constructing such a framework will enable the results of academic sustainable development evaluation exercises to be communicated lucidly, meaningfully, and convincingly to a broad range of stakeholders, policy makers, and lay publics (Shriberg Citation2004, 74; Stone Citation2004, 156; Schuller and Desjardins Citation2007, 18).

The approach set forth here also is consistent with ‘the need to develop a more holistic, imaginative and generous attitude to education's benefits’ (Schuller, Hammond, and Preston Citation2004, 192). Our approach emphasizes contextual indicators of achievements and vulnerabilities rather than global metrics. When the framework is applied across levels of analysis and key academic program domains, its multidimensional, participatory, and social-justice core unfolds.

Process plays an important part in the continuous improvement perspective we develop (also see Stone Citation2004, 158; Crossley et al. Citation2005, 39). In this connection, the framework devotes special attention across all partnership dimensions to formative and ongoing evaluations. For instance, the University of Leicester (UK)/University of Gondar (Ethiopia) partnership's Links Committee regularly monitors different project stages by reviewing informal reports on outputs and impacts at various levels (Wanni, Hinz, and Day Citation2010, 59).

The evaluation process itself is expected to affect the symmetry of partner relationships (Klitgaard Citation2004, 49). Pathways to sustainable development are introduced iteratively based on evaluation results that span a sufficient interval – ideally, 10–15 years. Application of the framework is consistent with Easter's (Citation2010, 2) call for ‘balanced short-term and long-term impact assessment on projects … .’

Our symmetrical approach to the evaluation of academic programs devoted to sustainable development is inclusive (Tikly Citation2011, 10); it emphasizes cross-sectoral participation by all stakeholders, the value of multiple perspectives, and the incorporation of environmental, economic, social, and ethical considerations (Thabrew, Wiek, and Ries Citation2009, 68–69, 74). In symmetrical evaluations, the participants in the process own the inquiry (Patton Citation2002, 185). Klitgaard (Citation2004, 51) maintains that ‘evaluation partnerships must be managed in a way that tries to value and preserve dissenting perspectives’ rather than stifling diversity and creativity by insisting on consensus. Ensuring that diverse perspectives are accorded a central role often requires that the Southern partner be granted additional resources and time in order to enhance participants' evaluation capabilities (Tikly Citation2011, 10). Too many evaluations are conducted ‘informally with minimal financial and staffing inputs’ (Wanni, Hinz, and Day Citation2010, 58). Establishing a flexible time frame for the evaluation process also reduces the prospect that accountability demands will outweigh mutual learning objectives (Crossley et al. Citation2005, 107).

Evaluators increasingly recognize the challenges of evaluating in complex situations (Forss, Marra, and Schwartz Citation2011). Chaotic behavior can lead to substantially different and unexpected outcomes (Koehn Citation2012b, 339). It is especially rewarding for academic program evaluations to attend to ‘unintended outcomes, the unmanageable element, the local variability of effects, and the importance of social and human relationships … ’ (Crossley et al. Citation2005, 39). It is essential, therefore, that evaluation frames remain alert to unexpected consequences (e.g. voluntary technology sharing) and even expect serendipitous developments (Uitto Citation2008, 8; Morfit and Gore Citation2009, 18; Beretz Citation2012, 144; Breton and Engle Citation2014).Footnote16

Our framework treats the evaluation process as a valuable learning experience for participants. Evaluations balance external reviews with participation by all higher education partners. Team members pursue mixed-method and complementary evaluation strategies and collect narrative and quantitative data related to progress in sustainable development (Singh Citation2007, 76; Gilboy et al. Citation2010, 9; Uitto Citation2011, 479; Agunias and Newland Citation2012, 60). The use of mixed methods ‘generates important synergies’ and ‘provides additional layers of explanation and insight that single-method studies are denied’ (Colclough Citation2012, 6). Available methods include collecting primary (interviews, focus groups, direct observation, field visits, biographical narratives, surveys, social audits, community score cards) and secondary (desk reviews, meta-evaluation, content analysis) data (EO Citation2011, 42–48; Asibey Citation2013, 234, 242).

Focusing on a limited number of core variables pre-identified jointly by the partners, and adopting a pre-analysis plan, would enable time-constrained evaluatorsFootnote17 to complete a meaningful assessment with modest resources and to avoid selective data mining. The contextual nature of transnational academic program evaluations requires that THEP project participants and stakeholders identify their most critical core variables, choose a number of relevant process questions provided in the holistic framework, and focus on progress or setbacks.Footnote18 With the proposed framework serving as the constant reference for THEP evaluators, identifying contextually based core variables also would facilitate a series of partial cross-context comparisons and generate insights into promising pathways to sustainable development that can be more widely adapted where conditions and challenges are similar.

Conclusion

In many ways, higher education provides the foundation for sustainable development. Academia is searching for insights into how its contributions to sustainable development can be identified, modified, and advanced. In the interest of preparing effective sustainable development professionals and enabling engaged Northern and Southern universities to learn about, and adapt, best practices, we need to know what works and what does not work, in what context, and why, in collaborative academic research, outreach, and learning (Frenk et al. Citation2010, 1954). In today's globally linked academic and environmental contexts, this interest requires evaluation of the outcomes and impacts of transnational partnerships. Applying a symmetry-sensitive evaluation framework along the lines presented here lays the groundwork for catalytic impact assessments and contributes to the design of symmetrical follow-up THEPs.

Drawing on lessons gained from international development experience, we argue for a focus on symmetry in higher education evaluation design and management.Footnote19 The framework for symmetrical higher education evaluation developed here emphasizes attention to sustainable development outcomes and impacts. Evaluation approaches that use a theory of change to identify intermediate states, impact drivers (Todd and Craig Citation2014, 63–66, 83), and progress markers (Breton and Engle Citation2014), to analyze catalytic outcome–impact pathways, and to explain the contributions of project activities hold particular promise in both development and academic contexts.

Although methods of sustainable development and academic program evaluation are imperfect and consensus on a particular approach remains elusive, calls for evaluation will continue to escalate. The higher education community can play a leading role in determining and championing what is realistic and meaningful to measure in sustainable development evaluations. Participants in THEPs devoted to sustainable development and supporting donors would be well served by utilizing a comprehensive evaluation framework that draws on experience in the field, theory-based approaches, and principles of symmetry in design, management, capacity building, and institutional outreach.

Notes

1. An initial proposal by the Government of Japan and NGOs at the 2002 World Summit on Sustainable Development held in Johannesburg led to the declaration of the U.N. Decade of Education for Sustainable Development in 2005 (Nomura and Abe Citation2009, 483–484).

2. For one helpful enumeration of the focal issues and ‘approaches commensurate with sustainability principles’ involved in sustainability research, see White (Citation2013, 168).

3. For details regarding the Rio+20 People's Sustainability Treaty for Higher Education, see Tilbury (Citation2013, 73–74).

4. Higher Education for Development requires its funded THEPs to adopt a ‘results-based management’ approach to demonstrate impact. However, its approach concentrates on tracking performance across a standard set of indicators derived specifically from USAID's Education Strategy (www.hedprogram.org/resources/metrics.cfm; accessed 28 November 2012).

5. For an informative list of potential private and public non-monetary benefits of (higher) education that draws upon the work of Walter McMahon, see Schuller and Desjardins (Citation2007, 45).

6. For an insightful critical analysis of the results and evidence artifacts of the development-evaluation methods popular with Northern donors, including logical frameworks, payment by results, randomized-control trials, and cost-effectiveness analysis, see Eyben (Citation2013, 7–24).

7. Although reports suggest that the theory-of-change approach addresses some of the conceptual and methodological challenges involved in evaluating community partnerships, it can be difficult to reach consensus on goals among multiple stakeholders. The approach also is time-consuming and resource-intensive (Boydell and Rugkasa Citation2007, 219).

8. As applied by UNDP, the concept of triangulation in evaluation ‘refers to empirical evidence gathered through three major sources of information: perception, validation and documentation’ (Uitto Citation2011, 479).

9. The tables developed for this article are intended to provide partners with an initial, but not exhaustive, list of questions to consider in the conduct of symmetrical evaluations.

10. For a critical discussion of the logic model that is applied to the internationalization of higher education institutions, see Deardorff and van Gaalen (Citation2012, 168–170).

11. Documenting the value of research is inherently challenging for numerous reasons, including its long-term, indirect, unnoticeable, and spin-off effects (see Bailey Citation2010, 45).

12. In all THEPs involving Northern and Southern partners, human-capability-assessment procedures should be designed in ways that minimize the administrative burden on raters.

13. Montague Demment, APLU's (Association of Public and Land-Grant Universities) Vice President for International Programs, suggested this approach in a Washington, DC, meeting with the lead author on 16 December 2010.

14. Principal objectives included: (1) measurement of actual results of the IUC program; (2) formulation of recommendations for ongoing and future collaboration; (3) identification of strengths and weaknesses of each collaboration; (4) identification of departments and/or research groups that have received substantive support and thus can present proposals for the post-IUC program focus; (5) identification of possible themes and partnerships for future network programs involving the projects, with a view to establishing sustainability; and (6) formulation of recommendations to all stakeholders regarding the follow-up plan elaborated by the Northern and Southern project leaders.

15. The scope of the evaluations addressed (1) the present state of program implementation (activities, intermediate results, progress toward objectives); (2) quality, efficiency, efficacy, impact, development relevance, and sustainability; (3) the position of the IUC program within the international cooperation activities of the partner university in comparison with other donor cooperation programs (added value); (4) management of the program both in Flanders and locally (recommendations for improvement); (5) cooperation among all parties involved; (6) the follow-up plan to achieve sustainability among institutions and involved research groups; and (7) the embeddedness and impact of the university in development processes in the surrounding community, province, and country.

16. For an example of a rewarding unintended consequence that emerged from a transnational partnership involving New Mexico State University and Universidad Autonoma Chihuahua, see Sutton, Egginton, and Favela (Citation2012, 154). For African examples, see Gore and Odell (Citation2009, 27, 40–43).

17. It is important to avoid situations where so much of THEP participants’ time is ‘being devoted to performance measurement and reporting against targets to the detriment of time spent actually doing their job’ (Eyben Citation2013, 13).

18. As Hopkinson and James (Citation2013, 250) discovered at ‘Ecoversity’, ‘progress towards sustainability is always likely to vary over time, with some periods of rapid progress, and others of stasis or even some backwards movement’.

19. Symmetrical evaluation processes are expected to reinforce the recent symmetrical bifurcation in THEP directionality (Koehn Citation2012b).

References

  • Africa-U.S. Higher Education Initiative. n.d. Developing a Knowledge Center for the Africa-U.S. Higher Education Initiative: A concept paper. Washington, DC: Association of Public and Land-Grant Universities.
  • Agunias, Dovelyn R., and Kathleen Newland. 2012. Developing a Road Map for Engaging Diasporas in Development: A Handbook for Policymakers and Practitioners in Home and Host Countries. Geneva: International Organization for Migration.
  • Asibey, Andrew O. 2013. “Why Civil Society Organizations in Sub-Saharan Africa Matter in Monitoring and Evaluating Poverty Interventions in Turbulent Times: A Case Study of Ghana.” In Evaluation in Turbulent Times: Reflections on a Discipline in Disarray, 229–252. New Brunswick, NJ: Transaction.
  • Austin, Ann E., and Cheryl Foxcroft. 2011. “Fostering Organizational Change and Individual Learning Through ‘Ground-Up’ Inter-institutional Cross-Border Collaboration.” In Cross-Border Partnerships in Higher Education: Strategies and Issues, edited by Robin Sakamoto and David W. Chapman, 115–132. New York: Routledge.
  • Bailey, Tracy. 2010. “The Research-Policy Nexus: Mapping the Terrain of the Literature.” Paper prepared for the Higher Education Research and Advocacy Network in Africa (HERANA), Centre for Higher Education Transformation, Wynberg, South Africa.
  • van Baren, Ben, and Alemayehu Assefa. 2013. Final Evaluation of the IUC Partner Programme with Mekelle University, Ethiopia. Brussels: VLIR-UOS.
  • Barrett, Angeline M., Michael Crossley, and Hillary A. Dachi. 2011. “International Collaboration and Research Capacity Building: Learning from the EdQual Experience.” Comparative Education 47 (1): 25–43. doi: 10.1080/03050068.2011.541674
  • Bartlett, Lesley, and Frances Vavrus. 2009. “Introduction: Knowing Comparatively.” In Critical Approaches to Comparative Education: Vertical Case Studies from Africa, Europe, the Middle East, and the Americas, edited by Frances Vavrus and Lesley Bartlett, 1–18. New York: Palgrave Macmillan.
  • Beretz, Alain. 2012. “Preparing the University and Its Graduates for the Unpredictable and the Unknowable.” In Global Sustainability and the Responsibilities of Universities, edited by Luc E. Weber and James J. Duderstadt, 143–151. London: Economica.
  • van den Berg, Rob D., and Osvaldo Feinstein. 2009. “Evaluating Climate Change and Development.” In Evaluating Climate Change and Development, edited by Rob D. van den Berg and Osvaldo Feinstein, 1–40. New Brunswick, NJ: Transaction.
  • Berkes, Fikret, Johan Colding, and Carl Folke. 2003. “Introduction.” In Navigating Social-Ecological Systems: Building Resilience for Complexity and Change, edited by Fikret Berkes, Johan Colding, and Carl Folke, 1–25. Cambridge: Cambridge University Press.
  • Bloom, David E. 2003. “Mastering Globalization: From Ideas to Action on Higher Education Reform.” In Universities and Globalization: Private Linkages, Public Trust, edited by Gilles Breton and Michel Lambert, 140–149. Paris: UNESCO.
  • Boydell, Leslie R., and Jorun Rugkasa. 2007. “Benefits of Working in Partnership: A Model.” Critical Public Health 17 (3): 217–228. doi: 10.1080/09581590601010190
  • Breton, Maria E., and Nathan L. Engle. 2014. “Evaluating Institutional Change and Long-Term Climate Change Adaptation and Resilience Measures Towards a National Drought Preparedness Policy in Brazil.” Paper presented at the 2nd International conference on evaluating climate change and development, Washington, DC, November 5.
  • Brown, Prudence. 1998. “Shaping the Evaluator's Role in a Theory of Change Evaluation: Practitioner Reflections.” In New Approaches to Evaluating Community Initiatives, edited by Karen Fulbright-Anderson, Anne C. Kubisch, and James Connell, 101–111. Seward, NE: Concordia University Press.
  • Calder, Wynn, and Rick Clugston. 2004. “Lighting Many Fires: South Carolina's Sustainable Universities Initiative.” In Higher Education and the Challenge of Sustainability: Problematics, Promise, and Practice, edited by Peter B. Corcoran and Arjen E. J. Wals, 249–262. Dordrecht: Kluwer Academic.
  • Calleson, Diane C. 2005. “Community-Engaged Scholarship.” Academic Medicine 80 (April): 317–321. doi: 10.1097/00001888-200504000-00002
  • Catley-Carlson, Margaret. 2004. “Foundations of Partnerships: A Practitioner's Perspective.” In Evaluation & Development: The Partnership Dimension, edited by Andres Liebenthal, Osvaldo N. Feinstein, and Gregory K. Ingram, 21–27. New Brunswick, NJ: Transaction.
  • Chapman, David W., and Audrey S. Moore. 2010. “A Meta-look at Meta-studies of the Effectiveness of Development Assistance to Education.” International Review of Education 56: 547–565. doi: 10.1007/s11159-011-9185-0
  • Cloete, Nico, Tracy Bailey, and Peter Maassen. 2011. Universities and Economic Development in Africa: Pact, Academic Core, and Coordination. Executive Summary of Synthesis Report. Wynberg, South Africa: Centre for Higher Education Transformation.
  • Colclough, Christopher. 2012. “Investigating the Outcomes of Education: Questions, Paradigms and Methods.” In Education Outcomes and Poverty: A Reassessment, edited by Christopher Colclough, 1–15. London: Routledge.
  • Connell, James, and Anne C. Kubisch. 1998. “Applying a Theory of Change Approach to the Evaluation of Comprehensive Community Initiatives: Progress, Prospects, and Problems.” In New Approaches to Evaluating Community Initiatives, edited by Karen Fulbright-Anderson, Anne C. Kubisch, and James Connell, 15–44. Seward, NE: Concordia University Press.
  • Court, David. 2008. “The Historical Effect of Partnerships in East Africa.” NORRAG News 41 (December): 105–107.
  • Crossley, Michael, and J. Alexander Bennett. 1997. “Planning for Case Study Evaluation in Belize, Central America.” In Qualitative Educational Research in Developing Countries: Current Perspectives, edited by Michael Crossley and Graham Vulliamy, 221–243. New York: Garland.
  • Crossley, Michael, Andrew Herriot, Judith Waudo, Miriam Mwirotsi, Keith Holmes, and Magdallen Juma. 2005. Research and Evaluation for Educational Development: Learning from the PRISM Experience in Kenya. Oxford, UK: Symposium Books.
  • DAC (Development Assistance Committee). 2007. Canada: Peer Review. Paris: OECD.
  • Davies, Rick, and Jess Dart. 2005. The ‘Most Significant Change’ Technique: A Guide to Its Use. Monitoring and Evaluation News. http://www.mande.co.uk/docs/MSCGuide.pdf.
  • Deardorff, Darla K., and Adinda van Gaalen. 2012. “Outcomes Assessment in the Internationalization of Higher Education.” In The Sage Handbook of International Higher Education, edited by Darla K. Deardorff, Hans de Wit, John Heyl, and Tony Adams, 167–190. Los Angeles: Sage.
  • Easter, Robert. 2010. Letter to Honorable Rajiv Shah, Administrator, USAID. Washington, DC: BIFAD.
  • Elliott, Jennifer A. 2013. An Introduction to Sustainable Development. 4th ed. London: Routledge.
  • EO (Evaluation Office). 2011. ADR Method Manual. New York: Evaluation Office, United Nations Development Programme.
  • Escrigas, Cristina, and Jose Lobera. 2009. “Introduction: New Dynamics for Social Responsibility.” In Higher Education at a Time of Transformation: New Dynamics for Social Responsibility, edited by Cristina Escrigas and Josep Lobera, 1–16. London: Palgrave Macmillan.
  • Eyben, Rosalind. 2013. Uncovering the Politics of ‘Evidence’ and ‘Results’: A Framing Paper for Development Practitioners. http://www.bigpushforward.net.
  • Forss, Kim, Mita Marra, and Robert Schwartz, eds. 2011. Evaluating the Complex: Attribution, Contribution, and Beyond. New Brunswick, NJ: Transaction.
  • Fredericks, Sarah. 2014. Measuring and Evaluating Sustainability: Ethics in Sustainability Indexes. London: Routledge.
  • Frenk, Julio, Lincoln Chen, Zulfiqar A. Bhutta, Jordan Cohen, Nigel Crisp, Timothy Evans, Harvey Fineberg, et al. 2010. “Health Professionals for a New Century: Transforming Education to Strengthen Health Systems in an Interdependent World.” Lancet 376 (9756): 1923–1958. doi: 10.1016/S0140-6736(10)61854-5
  • Fukuda-Parr, Sakiko, Carlos Lopes, and Khalid Malik. 2002. “Overview.” In Capacity for Development: New Solutions to Old Problems, edited by Sakiko Fukuda-Parr, Carlos Lopes, and Khalid Malik, 1–21. London: Earthscan.
  • Fuller, Steve. 2006. “Universities and the Future of Knowledge Governance from the Standpoint of Social Epistemology.” In Knowledge, Power and Dissent: Critical Perspectives on Higher Education and Research in Knowledge Society, edited by Guy Neave, 345–370. Paris: UNESCO.
  • Funnell, Sue F., and Patricia J. Rogers. 2011. Purposeful Program Theory: Effective Use of Theories of Change and Logic Models. San Francisco: Jossey-Bass.
  • Gambone, Michelle A. 1998. “Challenges of Measurement in Community Change Initiatives.” In New Approaches to Evaluating Community Initiatives, edited by Karen Fulbright-Anderson, Anne C. Kubisch, and James Connell, 149–163. Seward, NE: Concordia University Press.
  • Gedde, Maia. 2009. The International Health Links Manual: A Guide to Starting up and Maintaining Long-Term International Health Partnerships. London: Tropical Health and Education Trust.
  • Gilboy, Andrew, Cornelia Flora, Ron Raphael, and Bhavani Pathak. 2010. Agriculture Long-Term Training: Assessment and Design Recommendations. Washington, DC: USAID.
  • Gore, Jane S., and Malcolm J. Odell, Jr. 2009. Higher Education Partnerships in Sub-Saharan Africa: An Impact Assessment of 12 Higher Education Partnerships. Washington, DC: USAID, Bureau for Economic Growth, Agriculture and Trade.
  • Green, Duncan. 2013. “What Is a Theory of Change and How Do We Use It?” http://www.oxfamblogs.org/fp2p/?p=15532.
  • Green, Jackie, and Keith Tones. 2010. Health Promotion: Planning and Strategies. Thousand Oaks, CA: Sage.
  • de Haan, Gerhard, Inka Bormann, and Alexander Leicht. 2010. “Introduction: The Midway Point of the UN Decade of Education for Sustainable Development: Current Research and Practice in ESD.” International Review of Education 56: 199–206. doi: 10.1007/s11159-010-9162-z
  • Holm, John D., and Leapetsewe Malete. 2010. “The Asymmetries of University Partnerships Between Africa and the Developed World: Our Experience in Botswana.” Paper delivered at the 2010 Going Global 4 – The British Council's International education conference, London, March 24–26.
  • Hopkinson, Peter, and Peter James. 2013. “Whole Institutional Change Towards Sustainable Universities: Bradford's Ecoversity Initiative.” In The Sustainable University: Progress and Prospects, edited by Stephen Sterling, Larch Maxey, and Heather Luna, 235–255. London: Routledge.
  • Ingram, Gregory K. 2004. “Overview.” In Evaluation & Development: The Partnership Dimension, edited by Andres Liebenthal, Osvaldo N. Feinstein, and Gregory K. Ingram, xi–xxi. New Brunswick, NJ: Transaction.
  • Jeffery, Roger. 2012. “Qualitative Methods in the RECOUP Projects.” In Education Outcomes and Poverty: A Reassessment, edited by Christopher Colclough, 170–189. London: Routledge.
  • Jilke, Sebastian. 2013. “What Shapes Citizens’ Evaluations of Their Public Officials’ Accountability? Evidence from Local Ethiopia.” Public Administration and Development. doi:10.1002/pad.1659
  • Jost, Christine, Sophie Alvarez, Deissy Martinez Baron, Osana Bonilla-Findji, Kevin Coffey, Wiebke Förch, Arun Khatri-Chhetri, et al. 2014. “Pathway to Impact: Supporting and Evaluating Enabling Environments for Outcomes in CCAFS.” Paper presented at the 2nd International conference on evaluating climate change and development, Washington, DC, November 5.
  • King, Kenneth. 2009. “Higher Education and International Cooperation: The Role of Academic Collaboration in the Developing World.” In Higher Education and International Capacity Building: Twenty-Five Years of Higher Education Links, edited by David Stephens, 33–49. Oxford: Symposium Books.
  • Klitgaard, Robert. 2004. “Evaluation of, for, and Through Partnerships.” In Evaluation & Development: The Partnership Dimension, edited by Andres Liebenthal, Osvaldo N. Feinstein, and Gregory K. Ingram, 43–57. New Brunswick, NJ: Transaction.
  • Koehn, Peter H. 1990. Public Policy and Administration in Africa: Lessons from Nigeria. Boulder, CO: Westview.
  • Koehn, Peter H. 2012a. “Transnational Higher Education and Sustainable Development: Current Initiatives and Future Prospects.” Policy Futures in Education 10 (3): 274–282. doi: 10.2304/pfie.2012.10.3.274
  • Koehn, Peter H. 2012b. “Turbulence and Bifurcation in North-South Higher-Education Partnerships for Research and Sustainable Development.” Public Organization Review 12: 331–355. doi: 10.1007/s11115-012-0176-9
  • Koehn, Peter H., and Milton O. Obamba. 2014. The Transnationally Partnered University: Insights from Research and Sustainable Development Collaborations in Africa. Gordonsville, VA: Palgrave Macmillan.
  • Koehn, Peter H., and James N. Rosenau. 2010. Transnational Competence: Empowering Professional Curricula for Horizon-Rising Challenges. Boulder, CO: Paradigm.
  • Lobera, Jose. 2008. “Delphi Poll – Higher Education for Human and Social Development.” In Higher Education in the World 3: New Challenges and Emerging Roles for Human and Social Development, 307–327. London: Palgrave Macmillan.
  • Marshall, Jane. 2011. “UNESCO Debates Uses and Misuses of Rankings.” University World News 172 (May).
  • Mayne, John. 2008. Contribution Analysis: An Approach to Exploring Cause and Effect. ILAC Brief 16. Rome: Institutional Learning and Change Initiative, Biodiversity International.
  • McFarlane, Donovan A., and Agueda G. Ogazon. 2011. “The Challenges of Sustainability Education.” Journal of Multidisciplinary Research 3 (3): 81–107.
  • McGrath, Simon. 2010. “The Role of Education in Development: An Educationalist's Response to Some Recent Work in Development Economics.” Comparative Education 46 (2): 237–253. doi: 10.1080/03050061003775553
  • McLean, Monica, and Melanie Walker. 2011. “The Possibilities for University-Based Public-Good Professional Education: A Case Study from South Africa Based on the ‘Capability Approach’.” Studies in Higher Education 37 (5): 1–17.
  • McMahon, Walter W. 2009. Higher Learning, Greater Good: The Private and Social Benefits of Higher Education. Baltimore, MD: Johns Hopkins University Press.
  • Mebrahtu, Esther. 2002. “Perceptions and Practices of Monitoring and Evaluation: International NGO Experiences in Ethiopia.” Development in Practice 12 (August): 501–517. doi: 10.1080/0961450220149645a
  • Morfit, Christine, and Jane Gore. 2009. HED/USAID Higher Education Partnerships in Africa 1997–2007. Washington, DC: Higher Education for Development.
  • Mosse, David. 2004. “Is Good Policy Unimplementable? Reflections on the Ethnography of Aid Policy and Practice.” Development and Change 35 (4): 639–671. doi: 10.1111/j.0012-155X.2004.00374.x
  • Mundy, Karen, and Francine Menashy. 2012. “The Role of the International Finance Corporation in the Promotion of Public Private Partnerships for Educational Development.” In Public Private Partnerships in Education: New Actors and Modes of Governance in a Globalizing World, edited by Susan L. Robertson, Karen Mundy, Antoni Verger, and Francine Menashy, 81–103. Northampton, MA: Edward Elgar.
  • Neave, Guy. 2012. The Evaluative State, Institutional Autonomy and Re-engineering Higher Education in Western Europe: The Prince and his Pleasure. London: Palgrave Macmillan.
  • NEPAD (New Partnership for Africa's Development). 2005. Renewal of Higher Education in Africa: Report of AU/NEPAD Workshop 27–28 October. Johannesburg: NEPAD.
  • Nomura, Ko, and Osamu Abe. 2009. “The Education for Sustainable Development Movement in Japan: A Political Perspective.” Environmental Education Research 15 (4): 483–496. doi: 10.1080/13504620903056355
  • de Nooijer, Paul, and Okwach Abagi. 2009. Final Evaluation of the IUC Partner Program with the University of Nairobi (UoN), Kenya. Gent: VLIR-UOS.
  • de Nooijer, Paul, and Bornwell Siakanomba. 2008. Final Evaluation of the IUC Partnership with the University of Zambia (UNZA). Brussels: VLIR-UOS.
  • Nordtveit, Bjorn. 2010. “Development as a Complex Process of Change: Conception and Analysis of Projects, Programs and Policies.” International Journal of Education and Development 30: 110–117. doi: 10.1016/j.ijedudev.2009.06.004
  • Oakley, Peter. 1991. Projects with People: The Practice of Participation in Rural Development. Geneva: International Labor Office.
  • Obamba, Milton O., and Jane K. Mwema. 2009. “Symmetry and Asymmetry: New Contours, Paradigms, and Politics in African Academic Partnerships.” Higher Education Policy 22: 349–371. doi: 10.1057/hep.2009.12
  • Pain, Adam. 2009. “Economic Development and Sustainable Livelihoods.” In Higher Education and International Capacity Building: Twenty-Five Years of Higher Education Links, edited by David Stephens, 95–114. Oxford, UK: Symposium Books.
  • Patton, Michael Q. 2002. Qualitative Research & Evaluation Methods. Thousand Oaks, CA: Sage.
  • Pawson, Ray, and Nick Tilley. 1997. Realistic Evaluation. London: Sage.
  • Postiglione, Gerald A., and David W. Chapman. 2010. “East Asia's Experience of Border Crossing: Assessing Future Prospects.” In Crossing Borders in East Asian Higher Education, edited by David W. Chapman, William K. Cummings, and Gerald A. Postiglione, 377–382. Hong Kong: University of Hong Kong.
  • Roberts, J. Timmons, and Bradley C. Parks. 2007. A Climate of Injustice: Global Inequality, North-South Politics, and Climate Policy. Cambridge: MIT Press.
  • Samoff, Joel, and Bidemi Carrol. 2004. “The Promise of Partnership and Continuities of Dependence: External Support to Higher Education in Africa.” African Studies Review 47 (1): 67–199.
  • Samoff, Joel, and Bidemi Carrol. 2006. “Influence – Direct, Indirect and Negotiated: The World Bank and Higher Education in Africa.” In Knowledge, Power and Dissent: Critical Perspectives on Higher Education and Research in Knowledge Society, edited by Guy Neave, 133–180. Paris: UNESCO.
  • Sawahel, Wagdy. 2012. “University Leaders Worldwide Sign Sustainability Declaration.” University World News 223 (May 25).
  • Schuller, Tom, and Richard Desjardins. 2007. Understanding the Social Outcomes of Learning. Paris: Organization for Economic Co-operation and Development.
  • Schuller, Tom, Cathie Hammond, and John Preston. 2004. “Reappraising Benefits.” In The Benefits of Learning: The Impact of Education on Health, Family Life and Social Capital, edited by Tom Schuller, John Preston, Cathie Hammond, Angela Brassett-Grundy, and John Bynner, 179–193. London: RoutledgeFalmer.
  • Sen, Amartya. 1999. Development as Freedom. New York: Anchor Books.
  • Shriberg, Michael. 2004. “Assessing Sustainability: Criteria, Tools, and Implications.” In Higher Education and the Challenge of Sustainability: Problematics, Promise, and Practice, edited by Peter B. Corcoran and Arjen E. J. Wals, 71–86. Dordrecht: Kluwer Academic.
  • Singh, Mala. 2007. “Universities and Society: Whose Terms of Engagement?” In Knowledge Society vs. Knowledge Economy: Knowledge, Power, and Politics, edited by Sverker Sorlin and Hebe Vessuri, 53–78. Basingstoke, UK: Palgrave Macmillan.
  • Smith, Harvey. 2000. “Transforming Education Through Donor-Funded Projects: How Do We Measure Success?” In Globalisation, Educational Transformation and Societies in Transition, edited by Teame Mebrahtu, Michael Crossley, and David Johnson, 207–218. Oxford: Symposium Books.
  • Srivastava, Prachi, and Su-Ann Oh. 2010. “Private Foundations, Philanthropy, and Partnership in Education and Development: Mapping the Terrain.” International Journal of Educational Development 30 (5): 460–471. doi: 10.1016/j.ijedudev.2010.04.002
  • Sterling, Stephen. 2013. “The Sustainable University: Challenge and Response.” In The Sustainable University: Progress and Prospects, edited by Stephen Sterling, Larch Maxey, and Heather Luna, 17–50. London: Routledge.
  • Stern, Elliot. 2004. “Evaluating Partnerships.” In Evaluation & Development: The Partnership Dimension, edited by Andres Liebenthal, Osvaldo N. Feinstein, and Gregory K. Ingram, 29–41. New Brunswick, NJ: Transaction.
  • Stoecker, Randy. 2005. Research Methods for Community Change: A Project-Based Approach. Thousand Oaks, CA: Sage.
  • Stokes, Helga, Shane S. Chaplin, Shimaa Dessouky, Liya Aklilu, and Rodney K. Hopson. 2011. “Addressing Social Injustices, Displacement, and Minority Rights Through Cases of Culturally Responsive Evaluation.” Diaspora, Indigenous, and Minority Education 5: 167–177. doi: 10.1080/15595692.2011.583514
  • Stone, Diane. 2004. “Research Partnerships and Their Evaluation.” In Evaluation & Development: The Partnership Dimension, edited by Andres Liebenthal, Osvaldo N. Feinstein, and Gregory K. Ingram, 149–160. New Brunswick, NJ: Transaction.
  • Sutton, Susan B., Everett Egginton, and Raul Favela. 2012. “Collaborating on the Future: Strategic Partnerships and Linkages.” In The Sage Handbook of International Higher Education, edited by Darla K. Deardorff, Hans de Wit, John Heyl, and Tony Adams, 147–166. Los Angeles: Sage.
  • Taylor, Peter. 2008. “Introduction.” In Higher Education in the World 3: New Challenges and Emerging Roles for Human and Social Development, xxiv–xxvii. GUNI Series on the Social Commitment of Universities 3. London: Palgrave Macmillan.
  • Thabrew, Lanka, Arnim Wiek, and Robert Ries. 2009. “Environmental Decision Making in Multi-stakeholder Contexts: Applicability of Life Cycle Thinking in Development Planning and Implementation.” Journal of Cleaner Production 17: 67–76. doi: 10.1016/j.jclepro.2008.03.008
  • Thaman, Konai H. 2006. “Acknowledging Indigenous Knowledge Systems in Higher Education in the Pacific Island Region.” In Higher Education, Research, and Knowledge in the Asia-Pacific Region, edited by V. Lynn Meek and Charas Suwanwela, 175–184. Gordonsville, VA: Palgrave Macmillan.
  • Tikly, Leon. 2011. “Towards a Framework for Researching the Quality of Education in Low-Income Countries.” Comparative Education 47 (1): 1–23. doi: 10.1080/03050068.2011.541671
  • Tilbury, Daniella. 2004. “Environmental Education for Sustainability: A Force for Change in Higher Education.” In Higher Education and the Challenge of Sustainability: Problematics, Promise, and Practice, edited by Peter B. Corcoran and Arjen E. J. Wals, 97–112. Dordrecht: Kluwer Academic.
  • Tilbury, Daniella. 2013. “Another World Is Desirable: A Global Rebooting of Higher Education for Sustainable Development.” In The Sustainable University: Progress and Prospects, edited by Stephen Sterling, Larch Maxey, and Heather Luna, 71–85. London: Routledge.
  • Todd, David, and Rob Craig. 2014. “Assessing Progress Towards Impacts in Environmental Programmes Using the Field Review of Outcomes to Impacts Methodology.” In Evaluating Environment in International Development, edited by Juha I. Uitto, 62–86. London: Routledge.
  • Uitto, Juha I. 2008. “Small Hydel for Environmentally Sound Energy in Remote Areas: Lessons from the Indian Himalayas.” Focus on Geography 51 (2): 1–8. doi: 10.1111/j.1949-8535.2008.tb00220.x
  • Uitto, Juha I. 2011. “Sustainable Development of Natural Resources in Laos: Evaluating the Role of International Cooperation.” Asian Journal of Environment and Disaster Management 3 (4): 475–490. doi: 10.3850/S1793924011001039
  • UNDP (United Nations Development Programme). 2011. Evaluation Policy. New York: UNDP. http://www.undp.org/evaluation/policy.htm.
  • UNESCO (United Nations Educational, Scientific and Cultural Organization). 2009. Communique of the 2009 World Conference on Higher Education: The New Dynamics of Higher Education and Research for Societal Change and Development. Paris: UNESCO.
  • Vaessen, Jos, and David Todd. 2008. “Methodological Challenges of Evaluating the Impact of the Global Environment Facility's Biodiversity Program.” Evaluation and Program Planning 31: 231–240. doi: 10.1016/j.evalprogplan.2008.03.002
  • Vander Weyden, Patrick, and Leah Livni. 2014. Final Evaluation of the Institutional University Cooperation with the University of the Western Cape (UWC), South Africa. Brussels: VLIR-UOS.
  • Virtanen, Anne. 2010. “Learning for Climate Responsibility: Via Consciousness to Action.” In Universities and Climate Change, edited by Walter L. Filho, 231–240. Berlin: Springer.
  • Vromen, Ariadne. 2010. “Debating Methods: Rediscovering Qualitative Approaches.” In Theory and Methods in Political Science, edited by David Marsh and Gerry Stoker, 249–266. Basingstoke, UK: Palgrave Macmillan.
  • Walker, Melanie, Monica McLean, Arona Dison, and Rosie Peppin-Vaughn. 2009. “South African Universities and Human Development: Towards a Theorisation and Operationalisation of Professional Capabilities for Poverty Reduction.” International Journal of Educational Development 29: 565–572. doi: 10.1016/j.ijedudev.2009.03.002
  • Walsh, Lorraine, and Peter Kahn. 2010. Collaborative Working in Higher Education: The Social Academy. New York: Routledge.
  • Wamae, Watu. 2011. “Continent Needs Its Own Science Indicators.” University World News 179 (July).
  • Wanni, Nada, Sarah Hinz, and Rebecca Day. 2010. Good Practices in Educational Partnerships Guide: UK-Africa Higher & Further Education Partnerships. London: Association of Commonwealth Universities, Africa Unit.
  • White, Rehema M. 2013. “Sustainability Research.” In The Sustainable University: Progress and Prospects, edited by Stephen Sterling, Larch Maxey, and Heather Luna, 168–191. London: Routledge.
  • Wiek, Arnim, Lauren Withycombe, and Charles L. Redman. 2011. “Key Competencies in Sustainability: A Reference Framework for Academic Program Development.” Sustainability Science 6: 203–218. doi: 10.1007/s11625-011-0132-6
  • Wright, Tarah. 2004. “The Evolution of Sustainability Declarations in Higher Education.” In Higher Education and the Challenge of Sustainability: Problematics, Promise, and Practice, edited by Peter B. Corcoran and Arjen E. J. Wals, 7–20. Dordrecht: Kluwer Academic.
  • Yarime, Masaru, and Yuko Tanaka. 2012. “The Issues and Methodologies in Sustainability Assessment Tools for Higher Education Institutions: A Review of Recent Trends and Future Challenges.” Journal of Education for Sustainable Development 6 (1): 63–77. doi: 10.1177/097340821100600113
  • Yarime, Masaru, Gregory Trencher, Takashi Mino, Roland W. Scholz, Lennart Olsson, Barry Ness, Niki Frantzeskaki, and Jan Rotmans. 2012. “Establishing Sustainability Science in Higher Education Institutions: Towards an Integration of Academic Development, Institutionalization, and Stakeholder Collaborations.” Sustainability Science 7 (Supplement 1): 101–113. doi: 10.1007/s11625-012-0157-5