
Enhancing environmental educators' evaluation competencies: insights from an examination of the effectiveness of the My Environmental Education Evaluation Resource Assistant (MEERA) website

Pages 471-497 | Received 05 Nov 2009, Accepted 04 Feb 2011, Published online: 06 Jul 2011
 

Abstract

To conduct evaluations that can benefit individual programs as well as the field as a whole, environmental educators must have the necessary evaluation competencies. This exploratory study was conducted to determine to what extent a self-directed learning resource entitled My Environmental Education Evaluation Resource Assistant (MEERA) can enhance environmental educators' evaluation competencies. The multiple-case study relied on data from eight environmental educators with limited evaluation experience who used MEERA to evaluate one of their programs. Results suggest that MEERA can (1) increase environmental educators' perceived evaluation competencies, (2) help environmental educators produce quality evaluation outputs, and (3) foster their use of evaluation results. Perceived benefits of using MEERA included obtaining evidence of program success, insights into how to improve programs, and alternative ways of thinking about programs. Perceived challenges included varying difficulties with evaluation tasks, such as prioritizing evaluation questions and designing data collection instruments, and, relatedly, a desire for personal expert assistance offering context-specific advice and reassurance. This research contributes to an expanding understanding of how to enhance environmental educators' evaluation competencies and practices.

Acknowledgements

MEERA and its evaluation were funded by and developed in partnership with the US Environmental Protection Agency and the US Forest Service, with valuable insights and support provided by Drew Burnett, Kathleen MacKinnon, Dr Barbara McDonald, and Dr Safiya Samman. MEERA's steering committee members provided constructive suggestions that improved all aspects of the study, with Dr Gabriel Della-Piana offering particularly helpful insights. Data entry and analysis were greatly facilitated by the contributions of University of Michigan graduate students Brian T. Barch, John Franklin Cawood, Catherine Game, and Gillian Ream. Another University of Michigan graduate student, Kim Wolske, provided valuable assistance with editing the manuscript. Thoughtful feedback from Kara Shea Davis Crohn helped improve the original draft of this manuscript, and the editor's and anonymous referees' comments resulted in significant refinements.

Notes

1. MEERA is operated through Plone, which automatically tracks a series of visitor statistics, such as those reported here, on a monthly basis.

2. Evaluation and research use similar methods (Rossi, Lipsey, and Freeman 2004), but they differ in their purposes and in how their quality is assessed (Mathison 2008; Patton 2008). Research seeks to contribute to a particular body of knowledge and is judged on this basis. Evaluation should meet the needs of a particular program and is judged in great part on its 'utility' (Joint Committee on Standards for Educational Evaluation 1994); i.e., the extent to which evaluation results are used by program stakeholders for program improvement or other purposes. Thus, a quality research study may not be a quality program evaluation and vice versa. At the same time, however, research can inform program evaluations, and findings from program evaluations can contribute to disciplinary bodies of knowledge (Chelimsky 1997; Labin 2008).

3. Based on the study's results, (1) the US Environmental Protection Agency continued to recommend MEERA to environmental educators; (2) the US Forest Service is considering recommending MEERA to its conservation educators; (3) MEERA's steering committee members have suggested that their organizations recommend the site to environmental educators and others; and (4) Dr Zint will be improving MEERA. Planned improvements include adding (1) suggestions for how best to use the site, developed in collaboration with evaluation experts and others, (2) links to emerging evaluation systems, and (3) reviews of EE program evaluations.

4. After each step, participants were asked to indicate whether they thought they could use what they had learned, with 'no' and 'yes' as response options. The mean of 90%, indicating that participants thought they could use what they learned, was calculated from the percentage of respondents who selected 'yes' after each of the eight steps.
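To make the aggregation in this note concrete: the 90% figure is a mean of per-step 'yes' percentages, not the share of participants who answered 'yes' at every step. The following Python sketch is illustrative only; the response data are invented, since the article reports only the aggregate figure.

```python
# Illustrative sketch only: hypothetical yes/no responses for the eight
# MEERA steps. Each inner list holds one answer per participant (True =
# 'yes'). These counts are invented; the article reports only the ~90% mean.
responses_by_step = [
    [True] * 8,                # step 1: 8 of 8 said 'yes'
    [True] * 7 + [False],      # step 2: 7 of 8
    [True] * 7 + [False],      # step 3: 7 of 8
    [True] * 6 + [False] * 2,  # step 4: 6 of 8
    [True] * 8,                # step 5: 8 of 8
    [True] * 7 + [False],      # step 6: 7 of 8
    [True] * 8,                # step 7: 8 of 8
    [True] * 7 + [False],      # step 8: 7 of 8
]

def percent_yes(step_responses):
    """Percentage of respondents who selected 'yes' for a single step."""
    return 100.0 * sum(step_responses) / len(step_responses)

# The reported statistic averages the eight per-step percentages.
per_step = [percent_yes(step) for step in responses_by_step]
mean_percent = sum(per_step) / len(per_step)
print(f"mean percent 'yes' across the eight steps: {mean_percent:.1f}%")
```

With these hypothetical counts the script prints 90.6%, close to the reported mean; because every step here has the same number of respondents, the mean of per-step percentages also equals the overall percentage of 'yes' answers.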

5. The potential for this exists because all participants indicated that they planned to continue to use MEERA. But will they use more sophisticated evaluation approaches and methods over time? Several participants suggested that they will adopt more participatory approaches [e.g., 'I would make sure to bring in more partners now that I have been through one evaluation. We just have so many partners I wasn't sure where to bring them in originally and that was a mistake' (Karen, A2, Q3)]. Will they also be more likely to adopt triangulated approaches over time? These questions are important to explore in future research because some professional evaluators are concerned about the validity and reliability of results from evaluations that are not conducted by evaluation experts (Davis Crohn, personal communication).

6. As education researchers know, the enacted curriculum rarely reflects the intended curriculum. Moreover, in terms of ultimately helping to improve or replicate a program, it is essential to know what occurred as part of the program that can explain why it may have failed or succeeded in achieving its outcomes (Patton 2008). As has also been the case in the evaluation community (King, Morris, and Fitz-Gibbon 1987), the majority of past EE program evaluations have not sufficiently examined the implementation of programs, including while they were being evaluated (Zint, forthcoming).

7. Despite extensive literature searches, the authors were unable to identify the study by Kiernan and Alter (2004) until after this study was completed; their work thus could not inform MEERA's development or this research. Interestingly, we appear to have independently drawn on similar empowerment, adult learning, and web design principles to develop our respective websites. In addition, while our studies' methods were quite different, our sites' visitor experiences, as well as their benefits and challenges, were similar.

8. These authors' observation is based on their experiences with mostly summative evaluations (Robottom 1985; Fleming and Easton 2010) and a review of behavioral outcome evaluations of EE programs (Zint, forthcoming), the majority of which did not mention if, or how, evaluations were used for program improvement or other purposes. It is acknowledged that formative EE program evaluations are likely used more frequently for program improvement purposes.
