Research Article

Environmental education and K-12 student outcomes: A review and analysis of research


ABSTRACT

Many practitioners and researchers describe academic and environmental benefits of environmental education for kindergarten through twelfth grade (K-12) students. To consider the empirical underpinnings of those program descriptions, we systematically analyzed the peer-reviewed literature (1994–2013), focusing on outcomes of environmental education programs with K-12 students. In the resulting sample of 119 articles, we identified 121 unique outcomes, finding that most articles reported positive findings from the programs under study. Reflections stemming from the review highlight the versatility of environmental education, while also suggesting opportunities for bolder and more diversified approaches in research design and thinking.

Introduction

In the past 25 years, research in environmental education (EE) has grown in output, scope, and types of design, as well as methods and approaches (Ardoin, Clark, & Kelsey, 2013; Rickinson, 2001; R. B. Stevenson, Wals, Dillon, & Brody, 2013). With this growth, researchers and practitioners find it increasingly challenging to remain aware of current findings and implications for on-the-ground activities. As a result, practices may not reflect recent findings and, at the same time, research may become distanced from practice, hampering innovation in both directions.

Systematic reviews represent one strategy for addressing this proliferation in research (Rickinson & Reid, 2016). The utility of systematic reviews extends beyond the empirical and positivistic realms as they can help map a field while identifying strengths, weaknesses, and gaps. Individual reviews can synthesize and analyze a field's work; complementary reviews can leverage the expertise of researchers involved in the review process (Rickinson & Reid, 2016). As Hart (2003, p. 247) suggests, because authors do not—and indeed cannot—remain invisible in the literature-review process, each review contributes to the research-and-practice dialogue.

Thanks to EE's interdisciplinarity, numerous research areas are ripe for examination in this way (R. B. Stevenson, Wals, et al., 2013). Rickinson (2001), for example, conducted an extensive review of learners and learning in EE. Whereas Stern, Powell, and Hill (2014) investigated programmatic best practices, Williams and Dixon's (2013) systematic review focused on academic outcomes within the context of school gardens.

In working with a range of EE stakeholders, including researchers, practitioners, and funders, we uncovered a need for a more robust understanding of what outcomes are addressed in various sectors of the field. We noted an interest in better understanding how, and under what conditions, EE with school-aged youth relates to diverse outcomes, including non-environmentally related outcomes (e.g., academic achievement, youth development, and social and emotional learning). Ladwig (2010) noted that EE programs measuring environmentally specific outcomes—such as environmental knowledge, attitudes, skills, and behavior—often extend to include both personal and interpersonal outcomes. R. B. Stevenson, Wals, et al. (2013) discuss how EE goes beyond mere understanding and conceptualization to developing learners' agency, including a problem-solving orientation. Through focus groups and surveys with EE participants and practitioners, West (2015) identified a large number and range of outcomes, noting an emphasis on knowledge for both groups, plus social outcomes among participants. Likewise, previous literature has suggested that environmental educators express diverse opinions about both the shorter- and longer-term goals of their work (Ardoin, Biedenweg, & O'Connor, 2015; Hart, 2007; Schusler et al., 2009).

Prior reviews of EE outcomes have primarily focused on the environmental aspects of those efforts; similarly, they have been dominated by broad outcome categories, despite the presence of what Stern, Powell, and Hill (2014, p. 586) call "more nuanced outcomes." Our review aims to explore those nuances with attention to outcomes other than those considered "traditional" in EE (Stern et al., 2014). Therefore, rather than using a priori coding focused on environmental knowledge, attitudes, behaviors, and skills, we sought to uncover the breadth of EE outcomes under study using an exploratory process similar to Williams and Dixon's (2013) school-gardens review. We aimed to include many studies, similar to the breadth of Rickinson's (2001) sample of 110 publications.

Our review focuses on K-12 students (approximately ages 5 through 18) because they represent a primary audience for EE; as such, EE research and research reviews frequently focus on youth within this band. Similarly, “students,” or those pursuing EE within formal context or structure, are a key audience for many EE stakeholders, such as funders, policymakers, and school administrators. By examining the most recent 20 years of literature related to outcomes of EE programs for K-12 students—and doing so systematically and analytically—we hoped to gain an understanding of existing research within this paradigm and, concurrently, illuminate gaps. This study describes our review process, presents an interpretation of our findings, and compares our findings with those from similar reviews.

We focus on empirical studies that measure outcomes, and we define our coding categories iteratively to reflect what other researchers may indicate as primary outcomes. By using this honed approach, we are not suggesting that an outcome orientation is, or should be, the driving force of EE research. We recognize, rather, that this is one of numerous possible framings and we chose this orientation as a lens that may help disentangle some of the ways in which researchers have examined EE programs.

The following research questions guided the analysis and coding of the articles that resulted from our search:

1. For K-12 audiences, primarily in formal settings, what EE outcomes are researchers seeking to measure?

2. What implications about past, present, and future EE research might a review of EE outcomes research suggest?

Guiding principles of the review

Our work benefited from and was influenced by the methods, findings, and philosophical discussions of previous EE research reviews (e.g., Hart & Nolan, 1999; Rickinson, 2001; Stern et al., 2014; Williams & Dixon, 2013). We also learned from critiques of research reviews in education, more generally, and EE, in particular (e.g., Cooper, 2010; Gough, 2007; Gough, Oliver, & Thomas, 2012; Hallinger, 2013; Marcinkowski, 2003; Rickinson & Reid, 2016). Given others' criticisms (e.g., Cooper, 2010; Gough, 2007) that, historically, many research reviews have not met methodological standards, we familiarized ourselves with discussions of the nature and quality of research reviews. We pursued techniques of systematic reviews that use "systematic and explicit, accountable methods" (Gough et al., 2012, p. 2) and are recognized for transparency, reproducibility, and relevance (EPPI Centre, 2009).

We took a qualitative coding approach, synthesizing studies conducted within different paradigms and using diverse methods. Adapting Marcinkowski's (2003) purposes for a research review, we intended to produce a review that would (a) identify quality EE research that empirically measured outcomes; (b) describe and characterize the EE programs implemented, audiences investigated, research designs and methods used, and findings reported; and (c) summarize and synthesize the resulting data related to trends and gaps.

We endeavor to present our methods clearly and completely. As appropriate, we provide support for our decisions, describing methodological and/or theoretical rationales. We developed those rationales through discussions with research team members and outside colleagues.

Methods

We began by considering previous EE research reviews (e.g., Hart & Nolan, 1999; Rickinson, 2001; Stern et al., 2014; Williams & Dixon, 2013) to identify structural strategies and methodological issues to address. Combined with the nature and scope of our topic, as well as target audiences, those considerations informed our review steps. (See Figure 1 and the sections that follow.)

Figure 1. Elements of the review process.


Literature search

We performed systematic pilot searches using varying terms, engines, and parameters to identify research focused on EE-related student outcomes with K-12 audiences (e.g., terms at this stage included: environmental education plus learning, student outcomes, outcomes). We examined the resulting output to determine the extent to which we had surfaced relevant articles; those results informed our search process. We selected the EBSCOhost search engine because of its function as a meta-engine that incorporates results from a variety of databases. We used the following databases: ERIC, Education Full Text, GreenFILE, Environment Index, Academic Search Premier, British Education Index, and Africa-Wide Information.

We followed Stern et al. (2014) and used environmental education as the primary term. This limited our results to articles with environmental education in their title, keywords, abstract, article body, or references, or when in a journal with environmental education in its title.[1] To isolate EE articles addressing K-12 student outcomes, we added student as a search term and excluded articles with college or university. This produced a manageable number of results while maintaining a focus on K-12 student outcomes. Our search included the following restrictions: that the articles were peer-reviewed, published between 1994 and 2013,[2] and written in English. The final search terms and restrictions produced a sample of 2,034 articles for initial review.[3]
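The exact EBSCOhost search string is not reported in the text; a Boolean query of roughly the following shape would express the described terms and restrictions (the operator syntax and limiter labels here are illustrative assumptions, not the authors' actual query):

```
"environmental education" AND student NOT (college OR university)
Limiters: peer-reviewed; publication date 1994-2013; language: English
Databases: ERIC, Education Full Text, GreenFILE, Environment Index,
           Academic Search Premier, British Education Index,
           Africa-Wide Information
```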

Identifying articles for inclusion

Our research team collaboratively developed a decision tree with criteria that would allow us to determine whether to include each of the 2,034 articles for further review. (See Figure 2.) Each team member read abstracts from assigned subsets of the articles, applying those criteria.

Figure 2. Decision tree to determine inclusion/exclusion.


Some of the most commonly excluded article types were those that lacked an empirical research component; focused on college-aged students, pre-kindergarten students, or educators; or lacked implementation of an EE program or intervention (i.e., were primarily conceptual in nature). When we applied those criteria to the abstracts, we excluded approximately 80% (n = 1,608) of the initial search results.

We more thoroughly reviewed the remaining 20% (n = 426) of articles; a different team member read each of these articles, applying the same exclusion criteria. This second-level review resulted in 142 articles that we coded to a number of categories. (See Table 1.) This process involved a higher level of scrutiny and deeper reading of the articles, leading to exclusion of an additional 23 articles that did not meet our criteria. After these preliminary rounds, we included 119 articles for full coding and analysis.
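The screening funnel described above can be checked with a short arithmetic sketch (all counts are taken from the text; the variable names are ours, not part of the authors' method):

```python
# Screening funnel reported in the review (counts from the text).
initial = 2034            # articles returned by the final search
excluded_abstract = 1608  # removed at abstract screening ("approximately 80%")
second_level = initial - excluded_abstract  # pool for second-level review
after_second = 142        # articles retained after second-level review
excluded_full = 23        # removed during deeper, full-text scrutiny
final_sample = after_second - excluded_full

assert second_level == 426
assert round(100 * excluded_abstract / initial) == 79  # i.e., roughly 80%
assert final_sample == 119
```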

Table 1. Selected main codes.

Developing coding scheme and process

We developed our coding guide through inductive and deductive means. We began the study with stated objectives to examine outcomes measured; study characteristics, such as geographic location and participants; methods; and findings. Based on those features, we developed broad coding categories. Within the categories, we added codes to describe the programs, research paradigms, methods, and outcomes represented in the articles. (See Table 1.) We coded the data using NVivo 10 for Windows (QSR International), a qualitative data-analysis software package.

To establish inter-coder agreement, three coders initially read and coded the same four articles. The coders discussed quantitative (amount coded) and qualitative (type of information coded) differences in coding and revised accordingly. Articles then were divided between two coders. The research team continuously discussed the categories throughout the coding process; once completed, a single researcher reviewed coding across all articles for consistency.

Results

Describing the sample

Our resulting sample included 119 articles published in 36 journals: 39 (33%) of those articles were published in Journal of Environmental Education; 24 (20%) in Environmental Education Research; 7 (6%) each in International Journal of Science Education and International Research in Geographical and Environmental Education; and 5 (4%) in Applied Environmental Education and Communication. The remaining 31 journals included three or fewer articles each. (See Supplemental Information at https://purl.stanford.edu/xv847bw0331 for full bibliographic information of the reviewed articles.)

We found an increase in the number of peer-reviewed journal articles exploring measurable outcomes of EE with students during the 20-year search period. Nearly half of the articles (52, or 44%) were published between 2009 and 2013 (Figure 3).

Figure 3. Chronological distribution of the 119 articles.


The articles in our dataset reported on studies involving K-12 students from 33 countries. Fifty-six (47%) of the articles focused on students in the United States. Canada and Germany had the next-highest counts, although still notably fewer than the United States: 8 (7%) and 6 (5%) articles, respectively. (See Supplemental Table A for the full list of countries represented.)

Design and methods of reviewed articles

Most studies (78%) employed quasi-experimental designs, using either pre- and post-measures or multiple groups to explore the influence of EE on specified outcomes. Researchers implementing quasi-experimental studies also used other descriptors for their work that often alluded to the purpose of their research; those terms included a range of evaluation types, such as design-based and utilization-focused evaluation. The remaining 22% of the articles used a variety of other research designs. (See Table 2.)

Table 2. Research designs used in articles (n = 119).

Researchers most frequently collected data by asking students to respond to structured instruments (82% of articles), which included closed- or open-ended items, or a mix of both. Although many were written instruments, some were administered verbally or electronically. In 29% of the studies, researchers reported conducting interviews, which ranged from highly structured to completely unstructured. As part of observational measures (reported in 19% of the articles), researchers played a range of roles, sometimes acting solely as an observer and other times interacting in more of a participant-observer role. (See Table 3.)

Table 3. Method of data collection (n = 119).

Post-program data collection most often occurred immediately after program completion. Of the reviewed articles, 34 (29%) included a delayed post-program measure; 76% of those delayed measures were implemented within one to six months after completion of the EE program. (See Table 4.)

Table 4. Length of follow-up in articles with delayed assessment (n = 34).

Range of EE programs under study

In terms of target age group, the largest percentage of the articles (57%) described programs for middle-school students (ages 11–14). The next-largest group was elementary (ages 5–11, 47%), with an emphasis on upper-elementary students (ages 9–11). (See Figure 4.) Just over one-third (34%) of the reviewed articles examined EE with high-school students (ages 14–18).

Figure 4. Age range of participants.


Researchers investigated an array of program lengths, types, and settings. Programs ranged from one-time, one-hour experiences to multi-year projects. Other types included daylong field trips, semester-long activities recurring weekly, and weeklong residential programs. The topics covered ranged from defined issues, such as water or recycling, to the environment in general. Although we focused on a K-12 lens, programs occurred both in schools and in out-of-school settings, such as aquariums, parks, and museums.

Outcomes under study

We identified 121 unique, empirically measured outcomes within this sample. Using thematic analysis, we organized those outcomes into broader categories, which mirrored outcomes described in guiding documents of the EE field, including the United Nations Educational, Scientific and Cultural Organization's (UNESCO) Tbilisi Declaration (1978) and the North American Association for Environmental Education's (NAAEE) Guidelines for Excellence series (Hollweg et al., 2011; NAAEE, 2010; Simmons, 1995). In addition to traditional EE outcomes of environmental knowledge, attitudes, and behavior, programs also included a focus on non-environmentally related outcomes, such as attitudes toward computers or verbal communication skills. Through this process, we identified the following broad categories: (1) knowledge (including awareness, perceptions, content knowledge, skills knowledge, socio-political knowledge, and issue-specific understandings); (2) dispositions (such as interest, affect, attitude, and behavioral intentions); (3) competencies (skills, including cognitive and social); (4) behavior (actions); (5) personal characteristics (self-esteem and character development, among others); and (6) multi-domain outcomes (those spanning more than one domain, such as academic achievement, which involves at least knowledge and competencies).

Articles in our sample most frequently addressed changes in knowledge and dispositions: 68% and 61% of the articles, respectively, addressed those areas. Table 5 provides information on outcome domains; the outcomes under each domain are listed in decreasing order of the frequency with which articles studied them. Within knowledge, for example, articles most frequently examined environmental knowledge, followed by environmental awareness and environmental perceptions. Supplemental Table B lists each study with associated outcomes and author-supplied keywords.

Table 5. Domain of outcome with all examples (n = 119).

In addition to the six domain categories, we parsed outcomes based on area of focus. More broadly, we considered whether the categories explicitly related to the environment. (See Supplemental Table C.)

Program findings

We tallied findings based on domain (Table 6) and area of focus (Table 7). Authors reported positive findings in 94% of articles examining environmental outcomes and 95% of those considering non-environmental outcomes. Authors reported null findings as well: in 40% of articles for environmental outcomes and 37% for non-environmental outcomes. (Because articles often measured multiple outcomes, a single article could report both positive and null findings.)
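Because a single article can contribute to both the positive and the null tallies, these percentages need not sum to 100. A minimal sketch with hypothetical data illustrates the tallying logic (the example articles are invented for illustration, not drawn from the sample):

```python
# Hypothetical tally: an article measuring several outcomes can report
# positive findings for some and null findings for others, so it counts
# toward both percentages.
articles = [
    {"id": 1, "findings": ["positive", "null"]},
    {"id": 2, "findings": ["positive"]},
    {"id": 3, "findings": ["positive", "null"]},
]
n = len(articles)
pct_positive = 100 * sum("positive" in a["findings"] for a in articles) / n
pct_null = 100 * sum("null" in a["findings"] for a in articles) / n

# Overlap, not a bookkeeping error: the two percentages exceed 100 combined.
assert pct_positive + pct_null > 100
```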

Table 6. Article findings for specified outcome domains.

Table 7. Article findings for environmentally related versus non-environmentally related outcomes.

Discussion

Focusing on the content, structure, and outcomes of interest in 119 peer-reviewed articles related to environmental education with K-12 students and published over a 20-year span, our analysis indicates that the field of EE research is expanding across various dimensions, particularly the diversity of outcomes under study. Examining outcomes broadly facilitated consideration of numerous studies, providing a landscape perspective. Had we desired to conduct in-depth analyses on individual outcomes and/or practices, we would have narrowed our focus, as done successfully and helpfully by Leeming, Dwyer, Porter, and Cobern (1993), Stern et al. (2014), and Zelezny (1999), among others.

The number of published articles focused on measurable student outcomes in EE has increased steadily from 1994 to 2013, with 44% of our sampled articles published in the most recent five years of that period. This growth may reflect a wider trend: an increasing emphasis on measurable outcomes in environmental conservation, as well as in U.S. education, more broadly. In the United States, for example, the 2001 No Child Left Behind Act (repealed in December 2015) pushed accountability measures, often with quantifiable outcomes, resulting in major influences on both the formal and informal educational sectors. Those influences were also evident—and repercussions continue to be seen—in the funding sector through a growing emphasis on strategic philanthropy, often characterized by measurable outcomes and outcome-based evaluations (e.g., Brest & Harvey, 2008). Alternatively, the growth in EE studies documented in our review may reflect an overall upswing in EE research, writ large (e.g., Marcinkowski et al., 2013; R. B. Stevenson, Brody, Dillon, & Wals, 2013). Regardless of the reason, this expansion of research within EE signals the potential utility of and need for synthetic reviews. As Rickinson and Reid suggested, "Deeper insights and more powerful conclusions may be possible through the pooling and integration of numerous studies, as opposed to the reporting of studies individually" (2016, p. 143).

A focus on EE outcomes

Our systematic search uncovered great variation among the program types highlighted in articles—from hour-long classroom programs, to residential programs, to recurring programs with informal, off-site partners; the diversity of desired outcomes reflects those varied programmatic structures and settings. Documenting specific outcomes, rather than broader categories, highlights the complexity and range of programmatic structures.

Our findings align with prior research (e.g., Ardoin et al., 2015; Ladwig, 2010; Schusler et al., 2009; West, 2015), indicating that EE operates in the educational and environmental realms and beyond. Our reviewed articles suggest a field with currency in a pluralistic, interdisciplinary paradigm, rather than one narrowly focused on a distinct conceptualization of EE (R. B. Stevenson, Wals, et al., 2013). This, perhaps, poses a challenge in characterizing EE, but also indicates a strength: it is a field diverse in form and function (Ardoin, Clark, & Wojcik, 2016).

To manage the large number of outcomes we encountered, and to facilitate meaningful discussion, we classified outcomes based on type (knowledge, dispositions, competencies, behavior, personal characteristics, and multi-domain), as well as area of focus (environmental, academic, social, civic, health, and personal). When we considered outcomes grouped by type, without regard to area of focus, we found a preponderance of studies with objectives related to knowledge (68%) and dispositions (61%)—perhaps not surprising given our focus on K-12 audiences in formal settings. Within that context, the emphasis is often on knowledge and dispositions, rather than competencies and behaviors, at least in terms of environmentally related concepts and issues. A potential issue related to this structure is a "doom-and-gloom" approach that might overwhelm students without concurrently providing skills and opportunities to undertake meaningful action (Busch, 2016; Chawla & Cushing, 2007).

We might also be seeing the "streetlight effect," whereby researchers investigate what is easiest to observe or measure (Freedman, 2010). Researchers may more easily assess knowledge and dispositions using items yielding quantifiable data (although the accuracy of such attitudinal measures may be questionable). By contrast, competencies may require more complicated, nuanced methods. Similarly, measuring changes in behaviors is notoriously difficult given the complexity of environmental behavior (Heimlich & Ardoin, 2008; Hollweg et al., 2011) and challenges related to parsing intentions versus actual behaviors (Fu, Peterson, Kannan, Shavelson, & Kurpius, 2015).

In terms of area of focus for outcomes, not surprisingly, 91% of the articles included environmentally related outcomes. Perhaps of more interest are the 9% of studies that did not measure environmentally related outcomes. A closer review reveals that 10 of those 11 articles spotlighted academic outcomes, including concepts such as systems thinking (Assaraf & Orpaz, 2010), critical thinking (Ernst & Monroe, 2004), decision-making (Gresch, Hasselhorn, Bögeholz, & Helge Gresch, 2013; Utzschneider & Pruneau, 2010), research skills (Cincera & Maskova, 2011), and science-process skills (Haney, Wang, Keil, & Zoffel, 2007). Three studies addressed academic content areas, including chemistry (Bartusevica, Cedere, & Andersone, 2004) and mathematics (Haney et al., 2007); Tsevreni (2011) explored civic engagement-related outcomes. In all of those, the authors described the studied outcomes as precursors to environmental action, while also discussing the skills and attitudes as important in non-environmental contexts. As such, although we did not code those 11 studies explicitly as having environmental outcomes, closer inspection suggests an implicit environmental link.

We commonly found overlap among outcomes, although our classification system may not sufficiently reflect those overlaps. Researchers sometimes described measuring intermediary outcomes in place of, or in addition to, measuring an ultimate outcome. They might report, for example, changes in knowledge related to action skills, with the intention of assessing current or future changes in behavior. The question arose, then, as to how to appropriately categorize studies focusing on action-skills knowledge. Do such studies belong in the knowledge domain or, as elements of skills and behaviors, are they more appropriately categorized in the skills or behavioral domain? Although the outcomes may fit in all three, for simplicity's sake and grounded in the notion that knowledge is a multi-dimensional construct (Frick, Kaiser, & Wilson, 2004; Jensen, 2002), we categorized action-skills knowledge, as well as other skills-related knowledge, as dimensions of knowledge.

Our finding that researchers measured academic outcomes far less frequently than environmentally related outcomes may also reflect our coding approach. The emphasis on environmentally related outcomes may initially seem surprising given the K-12 focus of the reviewed studies. Rather than academic outcomes being unimportant or uninteresting to EE researchers, however, we believe that many outcomes classified as environmentally related may also be inherently academic. We classified climate knowledge, for example, as an environmental outcome, but it may also be appropriately considered an academic outcome in contexts where students learn science-related content through curriculum developed around state standards and addressed in standardized testing.

However, including non-academic outcomes (e.g., environmental, civic engagement, health benefits) as an emphasis of EE programs for the K-12 audiences who are the focus of this review may reflect the push to extend what students learn in school beyond traditional academic outcomes (Ladwig, 2010). The natural overlap between established EE outcomes and academic outcomes (which makes parsing outcomes difficult) may facilitate this trend of redefining what students learn in school.

Although our review primarily intended to illuminate EE-related outcomes of interest to researchers focusing on K-12 students, in the process, we collected data about the findings associated with outcomes. Compiling findings reported in the peer-reviewed literature suggests an overwhelmingly positive influence of EE on K-12 students across a range of outcomes, including those related to the environment and other domains. The lowest-reported success rate within our sample related to outcomes in studies focused on environmental behaviors. Even among those, however, 83% of studies indicated some level of positive findings.

As designed and implemented in our sample, EE appears to be highly successful at meeting knowledge and competency outcomes. Although EE programs are not quite as successful at achieving outcomes involving dispositions and behavior, as stated, findings related to those outcomes were not negative; they were simply more likely to be null. Those null findings might also reflect the actual implementation of EE programs or result from practical difficulties in measuring change in dispositions and behaviors, which are constructs inherently more complex to characterize and measure than knowledge and competencies. Researchers examining how changes in attitudes are measured have reported inconsistent use of instruments, as well as misuse of established scales (Hawcroft & Milfont, 2010; Johnson & Manoli, 2011). A more in-depth analysis of the ways in which the reviewed studies investigated changes in dispositions and behavior may help illuminate measurement effects on reported findings.

Implications for EE research

As suggested by the previous section, reviewing outcomes and findings in this way encourages reflection on the overall state of EE research. Previous authors (e.g., Hart, 2003) have called on EE researchers to critically examine perspectives on and understanding of the philosophical underpinnings of the field and, in the course of that, consider what is meaningful and what counts as evidence (R. B. Stevenson, Brody, et al., 2013). In parallel, as those questions are asked, the repertoire of related research approaches, methods, and tools that align with those outcomes of interest must also be reconsidered, broadened, and diversified (e.g., Carleton-Hug & Hug, 2010; Hart & Nolan, 1999; Stern et al., 2014; Wheeler, Thumlert, Glaser, Schoelhamer, & Bartosh, 2007). Yet, most studies in this review (78%) indicated use of quasi-experimental designs. Perhaps this is not surprising given our initial focus on measurable outcomes: researchers often consider experimental and quasi-experimental designs especially appropriate for evaluating outcomes within that paradigm. A different epistemological and ontological positioning may suggest other approaches, such as ethnography, action research, and narrative; the outcome orientation guiding this review may not highlight studies within those traditions because such research may focus more on process than outcome. In terms of research tools, 82% of the studies in our sample collected data using structured instruments, such as questionnaires; again, the preponderance of their use may reflect the outcome orientation.

Continued exploration of our data related to methodology and methods among the reviewed studies suggests some interest in alternative research approaches: of the reviewed studies, 22% employed designs described as case studies, comparative studies, action research, ethnographies, autoethnographies, phenomenological, correlational, and descriptive. Blatt (2013), for example, took an ethnographic approach and used participant observation and cogenerative dialogues to study changes in high school students' environmental identities and behavior as they participated in an environmental science course. Pruneau, Freiman, Barbier, and Langis (2009) pursued an action-research study with Canadian third graders who worked with scientists to address sedimentation in a local river. The researchers used observation, interviews, and questionnaires with drawings, diagrams, and descriptions to assess the intended outcomes of improved problem-solving and representation skills.

The quasi-experimental studies also demonstrated varied approaches. Some highlighted researchers' desires to integrate qualitative and quantitative data in complementary ways, using drawings, concept maps, word association, repertory grids, and photovoice. Moreover, authors described many of the structured instruments used in 82% of the studies as including open-ended items designed to elicit qualitative, rather than strictly quantitative, data.

We agree with past reviews (e.g., Leeming et al., 1993; Rickinson, 2001; Stern et al., 2014; Wheeler et al., 2007) that have noted the opportunity for delayed follow-up measures to extend understanding beyond immediate post-program reflections. Only 29% of our reviewed studies included post-program follow-up, with most of those conducted less than six months after the EE experience. Only two studies (Rioux & Pasquier, 2013; Schneller, 2008) included follow-up more than a year after the EE program. One approach to exploring longer-term effects of EE may be the retrospective methods used in Significant Life Experiences studies (e.g., Chawla, 1998; Tanner, 1980; Wells & Lekies, 2006), although researchers (cf. Chawla, 2001) recognize the challenges of this work, including the difficulty of isolating the effects of single or particular experiences over time.

Our 119-article review sample includes an emphasis (57%) on the middle grades (ages 11 through 14), perhaps suggesting that some view those as the "golden years" for EE in terms of students' moral development (Kahn & Kellert, 2002; K. T. Stevenson, Peterson, Bondell, Mertig, & Moore, 2013). Leeming et al.'s (1993) EE outcomes review noted that few studies had been conducted and/or reported with young children (ages 8 and under); our findings suggest that this trend has continued. Although theory and research suggest that early childhood is a critical developmental stage for nature contact (cf. Arnold, Cohen, & Warner, 2009; Cutter-Mackenzie, Edwards, Moore, & Boyd, 2014), exploring programmatic outcomes with young students does not appear to be a focal area of EE research. Although some may interpret this as a literature gap, it more likely relates to the process, rather than outcome, orientation of peer-reviewed journal articles discussing early childhood EE.

Future reviews

The many outcomes described in our review speak to the versatility of EE; concurrently, they complicate the task of coalescing the diverse research bases across the field. Few areas of primary focus stand out among EE's outcomes: although we found many studies reporting a range of positive EE-related outcomes, few studies cluster around any single outcome, making it difficult to comment on individual outcomes with confidence. Selecting precise outcomes of interest (such as critical thinking, self-esteem, or environmental behavior) may enable crisper statements about EE's effects and facilitate deeper exploration of the existing literature.

Another potential future research avenue would be to review literature from fields whose outcomes align with, or closely relate to, traditional EE outcomes. Our review identified a handful of studies explicitly designed for dual purposes. In a study of Turkish secondary students learning English, Arikan (2009) reported positive findings from implementing an EE program coupled with a peace-education initiative; students displayed increased global awareness, demonstrating potential connections between the fields. Stern, Powell, and Ardoin (2011) evaluated a middle-school-focused residential outdoor program emphasizing character development and EE; the authors discuss connections between EE and positive youth development in light of findings related to character development, leadership, and environmental responsibility. Peace education and positive youth development are two examples, among many, of fields with EE-compatible outcomes.

Limitations and delimitations

Despite our efforts to cast a wide net, we certainly missed relevant research. To be systematic, we reviewed only articles returned in our database search. Although we used seven search engines, even those do not index every journal that publishes EE research. Our search captured only articles that met the criteria described in the Methods section; although we attempted to be inclusive in our search terms and procedures, the decision tree inevitably involved subjectivity, such as in deciding what counted as "empirically measured" outcomes. In the first vetting round, we reviewed only article abstracts and, in that process, may have excluded articles that would have merited further analysis.

Similarly, including only articles published in peer-reviewed journals limited the sample. Publication bias may have led to an overrepresentation of positive findings, and we necessarily excluded some high-quality work, notably evaluation studies, which may be available only in the grey literature. We retained the peer-reviewed focus, however, with the intention of standardizing the research process and maximizing opportunities for replication (see note 4). We felt those benefits outweighed the challenges associated with attempting a grey-literature review, namely the significant effort required to conduct a robust, systematic grey-literature search (Mahood, Eerd, & Irvin, 2014).

As noted, our sampling process surfaced many more U.S. studies than those from elsewhere. This may be because of the preponderance of U.S. researchers and programs; the U.S. emphasis on “measurable outcomes,” versus a more holistic focus in other countries and regions (P. Hart, personal communication, 2013); our English language-only search criterion; or some combination of factors. This characteristic of our sample prompts further inquiry into mechanisms for international sharing of research.

Due to the range of definitions of EE and the profusion of synonymous terms, we excluded studies that researchers or practitioners may view as EE but that authors did not explicitly label as such in their article. At no time, however, did we exclude studies in which authors labeled programs as EE and used other terms to describe the program. Those other terms included, but were not limited to, education for sustainability, sustainability education, conservation education, outdoor education, garden education, environmental science, and environmental peace education.

Finally, although we sought to include a variety of methods, this review included only articles with measured outcomes of EE. We recognize that this created a bias toward certain research designs and data. We conceptualize EE as a holistic endeavor, however, and recognize the challenges inherent in evaluating EE based solely on measured outcomes. As Ladwig (2010) notes, measuring student outcomes has not been the main focus of EE research, in part because "much of the push in environmental education directly argues against applying technical rationality, input-output models of education" (p. 127). Although we empathize with this view, we also see a time and a place for outcomes research in EE, given the needs of practitioners, administrators, funders, and other EE stakeholders. We believe this review can contribute to a broader conversation and, as a complement to more process-oriented research, can lend support to calls for including EE in the formal educational system.

Conclusion

Our systematic review attempts to describe and critically analyze literature related to outcomes of EE in programs designed for K-12 students. The findings suggest that many existing EE programs, which occur across a range of settings and in various configurations, have positive outcomes in terms of environmental knowledge, attitudes, dispositions, and skills for this audience. Moreover, those studies suggest that environmental education can also affect outcomes less directly focused on the environment, such as those related to academic achievement and civic engagement.

By coalescing and analyzing a number of existing empirical studies in this area, we may start to better understand what exists and, by extension, what remains to be explored, or, as Reid and Scott describe, the "blind spots, blank spots, and bald spots" (2013, p. 520). The complex nature of environmental issues, coupled with the diversity of EE settings, audiences, and practices, makes EE research challenging; the studies in our review sample suggest, however, that many EE researchers have risen to that challenge by pursuing innovative, rigorous research. In the process of conducting our review, we nevertheless noted some areas warranting further attention, such as the need for enhanced measurement focused on behavior and dispositions, as well as the opportunity for longitudinal studies.

Beyond a focus on outcomes, different research approaches, and indeed entirely new avenues of research, may more appropriately describe and characterize certain EE experiences. Because we cannot solve many, perhaps even most, environmental issues in the short term, EE research perspectives and designs that look beyond immediate outputs and outcomes to consider EE across the lifespan may be more relevant. EE is most often about relationships, processes, and opportunities for transformative experiences, rather than a singular focus on a specific outcome; as such, an outcome-oriented research design may distort perceptions of success while missing the overall richness of the experience. We therefore support calls for broader perspectives on what is measured and, relatedly, for alternative research approaches and methods. As researchers building on and benefiting from the systematic reviews conducted by others, we are eager to engage in dialogue with diverse research teams that offer insights into the past and future of EE research. Finally, we acknowledge the continued dedication of EE researchers and practitioners who strive to improve both the environment and the field of education.

Supplemental material

2017_05_23_Student_Outcomes_Supplementals-1.docx


Acknowledgments

We are grateful to numerous colleagues who provided feedback on drafts of this paper, especially Bora Simmons and Charlotte Clark. We appreciate the constructive comments from the journal editor and anonymous reviewers; they substantially improved this manuscript. We appreciate Alan Reid's insightful perspective on systematic reviews. We thank Rhoda Wang for research assistance and Wendi Hoover for editorial assistance.

Funding

The Pisces Foundation provided primary support for this study, with additional support from the North American Association for Environmental Education.

Notes

1. By limiting search terms to environmental education, we did not intentionally exclude studies employing related terms such as education for sustainability or conservation education. Although we did not specifically search for those alternates, the final sample includes several studies that describe the educational intervention using those terms, a result of environmental education journals publishing such articles or of those articles including environmental education among their keywords.

2. At the time of the initial search, this represented the most recent 20-year period for which peer-reviewed articles were fully available.

3. We include enough detail so that others may replicate this search, although individual results returned may vary as publishers update databases; moreover, differences in institutional access to databases can affect results.

4. We did not assume that all studies were equal in quality, as we acknowledge that the peer-review process is far from perfect. We therefore included criteria to address quality-control issues in our exclusion/inclusion process for each article identified in our search.

References

  • Ardoin, N. M., Biedenweg, K., & O'Connor, K. (2015). Evaluation in residential environmental education: An applied literature review of intermediary outcomes. Applied Environmental Education & Communication, 14(1), 43–56.
  • Ardoin, N. M., Clark, C., & Kelsey, E. (2013). An exploration of future trends in environmental education research. Environmental Education Research, 19(4), 499–520.
  • Ardoin, N. M., Clark, C. C., & Wojcik, D. (2016). Looking toward the blue sky: Environmental education researchers' experience, influences, and aspirations. Applied Environmental Education & Communication, 15(1), 75–89.
  • *Arikan, A. (2009). Environmental peace education in foreign language learners' English grammar lessons. Journal of Peace Education, 6(1), 87–99.
  • Arnold, H. E., Cohen, F. G., & Warner, A. (2009). Youth and environmental action: Perspectives of young environmental leaders on their formative influences. Journal of Environmental Education, 40(3), 27–36.
  • *Assaraf, O. B.-Z., & Orpaz, I. (2010). The “Life at the Poles” study unit: Developing junior high school students' ability to recognize the relations between earth systems. Research in Science Education, 40(4), 525–549.
  • *Bartusevica, A., Cedere, D., & Andersone, R. (2004). Assessment of the environmental aspect in a contemporary teaching/learning model of chemistry in basic schools of Latvia. Journal of Baltic Science Education, 2(6), 43–51.
  • *Blatt, E. N. (2013). Exploring environmental identity and behavioral change in an environmental science course. Cultural Studies of Science Education, 8(2), 467–488.
  • Brest, P., & Harvey, H. (2008). Money well spent: A strategic plan for smart philanthropy. New York, NY: Bloomberg Press.
  • Busch, K. C. (2016). Polar bears or people? Exploring ways in which teachers frame climate change in the classroom. International Journal of Science Education, Part B, 6(2), 137–165.
  • Carleton-Hug, A., & Hug, J. W. (2010). Challenges and opportunities for evaluating environmental education programs. Evaluation and Program Planning, 33(2), 159–164.
  • Chawla, L. (1998). Research methods to investigate significant life experiences: Review and recommendations. Environmental Education Research, 4(4), 383–397.
  • Chawla, L. (2001). Significant life experiences revisited once again: Response to vol. 5(4), “Five critical commentaries on significant life experience research in environmental education.” Environmental Education Research, 7(4), 451–461.
  • Chawla, L., & Cushing, D. F. (2007). Education for strategic environmental behavior. Environmental Education Research, 13(4), 437–452.
  • *Cincera, J., & Maskova, V. (2011). GLOBE in the Czech Republic: A program evaluation. Environmental Education Research, 17(4), 499–517.
  • Cooper, H. (2010). Research synthesis and meta-analysis: A step-by-step approach (Vol. 2, 4th ed.). Thousand Oaks, CA: Sage.
  • Cutter-Mackenzie, A., Edwards, S., Moore, D., & Boyd, W. (2014). Young children's play and environmental education in early childhood education. New York, NY: Springer.
  • EPPI Centre. (2009). What is a systematic review? Retrieved from http://eppi.ioe.ac.uk/cms/Default.aspx?tabid=67
  • *Ernst, J., & Monroe, M. (2004). The effects of environment-based education on students' critical thinking skills and disposition toward critical thinking. Environmental Education Research, 10(4), 507–522.
  • Freedman, D. H. (2010, July–August). Why scientific studies are so often wrong: The streetlight effect. Discover. Retrieved from http://discovermagazine.com/2010/jul-aug/29-why-scientific-studies-often-wrong-streetlight-effect
  • Frick, J., Kaiser, F. G., & Wilson, M. (2004). Environmental knowledge and conservation behavior: Exploring prevalence and structure in a representative sample. Personality and Individual Differences, 37(8), 1597–1613.
  • Fu, A. C., Peterson, L., Kannan, A., Shavelson, R. J., & Kurpius, A. (2015). A framework for summative evaluation in informal science education. Visitor Studies, 18(1), 17–38.
  • Gough, D. (2007). Weight of evidence: A framework for the appraisal of the quality and relevance of evidence. Research Papers in Education, 22(2), 213–228.
  • Gough, D., Oliver, S., & Thomas, J. (2012). An introduction to systematic reviews. Thousand Oaks, CA: Sage.
  • *Gresch, H., Hasselhorn, M., & Bögeholz, S. (2013). Training in decision-making strategies: An approach to enhance students' competence to deal with socio-scientific issues. International Journal of Science Education, 35(15), 2587–2607.
  • Hallinger, P. (2013). A conceptual framework for systematic reviews of research in educational leadership and management. Journal of Educational Administration, 51(2), 126–149.
  • *Haney, J. J., Wang, J., Keil, C., & Zoffel, J. (2007). Enhancing teachers' beliefs and practices through problem-based learning focused on pertinent issues of environmental health science. Journal of Environmental Education, 38(4), 25–33.
  • Hart, P. (2003). Reflections on reviewing educational research: (Re)searching for value in environmental education. Environmental Education Research, 9(2), 241–256.
  • Hart, P. (2007). Environmental education. In S. K. Abell & N. G. Lederman (Eds.), Handbook of research on science education (pp. 689–728). Mahwah, NJ: Lawrence Erlbaum Associates.
  • Hart, P., & Nolan, K. (1999). A critical analysis of research in environmental education. Studies in Science Education, 34, 1–69.
  • Hawcroft, L. J., & Milfont, T. L. (2010). The use (and abuse) of the new environmental paradigm scale over the last 30 years: A meta-analysis. Journal of Environmental Psychology, 30(2), 143–158.
  • Heimlich, J., & Ardoin, N. (2008). Understanding behavior to understand behavior change: A literature review. Environmental Education Research, 14(3), 215–237.
  • Hollweg, K. S., Taylor, J. R., Bybee, R. W., Marcinkowski, T. J., McBeth, W. C., & Zoido, P. (2011). Developing a framework for assessing environmental literacy. Washington, DC: NAAEE.
  • Jensen, B. B. (2002). Knowledge, action and pro-environmental behaviour. Environmental Education Research, 8(3), 325–334.
  • *Johnson, B., & Manoli, C. C. (2011). The 2-MEV Scale in the United States: A measure of children's environmental attitudes based on the Theory of Ecological Attitude. Journal of Environmental Education, 42(2), 84–97.
  • Kahn, P. H., & Kellert, S. R. (2002). Children and nature: Psychological, sociocultural, and evolutionary investigations. Cambridge, MA: MIT Press.
  • Ladwig, J. G. (2010). Beyond academic outcomes. Review of Research in Education, 34(1), 113–141.
  • Leeming, F. C., Dwyer, W. O., Porter, B. E., & Cobern, M. K. (1993). Outcome research in environmental education: A critical review. Journal of Environmental Education, 24(4), 8–21.
  • Mahood, Q., Eerd, D. V., & Irvin, E. (2014). Searching for grey literature for systematic reviews: Challenges and benefits. Research Synthesis Methods, 5(3), 221–234.
  • Marcinkowski, T. (2003). Commentary on Rickinson's “Learners and Learning in Environmental Education: A critical review of the evidence” (EER 7(3)). Environmental Education Research, 9(2), 181–214.
  • Marcinkowski, T., Bucheit, J., Spero-Swingle, V., Linsenbardt, C., Engelhardt, J., Stadel, M., Santangelo, R., & Guzmon, K. (2013). Selected trends in thirty years of doctoral research in environmental education in dissertation. In R. B. Stevenson, M. Brody, J. Dillon, & A. E. J. Wals (Eds.), International handbook of research on environmental education (pp. 45–62). New York, NY: Routledge.
  • Mogensen, F. (1993, October). Action competence: Some viewpoints and perspectives for the future environmental education. Paper presented at the Third International Workshop in the Project Children as Catalysts of Global Environmental Change, Skive, Denmark.
  • North American Association for Environmental Education (NAAEE). (2010). Excellence in environmental education: Guidelines for learning (K–12). Washington, DC: NAAEE.
  • *Pruneau, D., Freiman, V., Barbier, P.-Y., & Langis, J. (2009). Helping young students to better pose an environmental problem. Applied Environmental Education and Communication, 8(2), 105–113.
  • Reid, A., & Scott, W. (2013). Identifying needs in environmental education research. In R. B. Stevenson, M. Brody, J. Dillon, & A. E. J. Wals (Eds.), International handbook of research on environmental education (pp. 518–528). New York, NY: Routledge.
  • Rickinson, M. (2001). Learners and learning in environmental education: A critical review of the evidence. Environmental Education Research, 7(3), 207–320.
  • Rickinson, M., & Reid, A. (2016). Synthesis of research in higher education for sustainable development (pre-publication version). In M. Barth, G. Michelsen, M. Riekmann, & I. Thomas (Eds.), Routledge handbook of higher education for sustainable development (pp. 142–160). New York, NY: Routledge.
  • *Rioux, L., & Pasquier, D. (2013). A longitudinal study of the impact of an environmental action. Environmental Education Research, 19(5), 694–707.
  • *Schneller, A. J. (2008). Environmental service learning: Outcomes of innovative pedagogy in Baja California Sur, Mexico. Environmental Education Research, 14(3), 291–307.
  • Schusler, T. M., Krasny, M. E., Peters, S. J., & Decker, D. J. (2009). Developing citizens and communities through youth environmental action. Environmental Education Research, 15(1), 111–127.
  • Simmons, D. (1995). The NAAEE standards project: Papers on the development of environmental education standards. Troy, OH: NAAEE.
  • *Stern, M. J., Powell, R. B., & Ardoin, N. M. (2011). Evaluating a constructivist and culturally responsive approach to environmental education for diverse audiences. Journal of Environmental Education, 42(2), 109–122.
  • Stern, M. J., Powell, R. B., & Hill, D. (2014). Environmental education program evaluation in the new millennium: What do we measure and what have we learned? Environmental Education Research, 20(5), 581–611.
  • Stevenson, K. T., Peterson, M. N., Bondell, H. D., Mertig, A. G., & Moore, S. E. (2013). Environmental, institutional, and demographic predictors of environmental literacy among Middle School Children. PLoS ONE, 8(3), e59519.
  • Stevenson, R. B., Brody, M., Dillon, J., & Wals, A. E. J. (Eds.). (2013). International handbook of research on environmental education. New York, NY: Routledge.
  • Stevenson, R. B., Wals, A. E. J., Dillon, J., & Brody, M. (2013). Introduction: An orientation to environmental education and the handbook. In R. Stevenson, M. Brody, J. Dillon, & A. E. J. Wals (Eds.), International handbook of research on environmental education (pp. 1–6). New York, NY: Routledge.
  • Tanner, T. (1980). Significant life experiences: A new research area in environmental education. Journal of Environmental Education, 11(4), 20–24.
  • *Tsevreni, I. (2011). Towards an environmental education without scientific knowledge: An attempt to create an action model based on children's experiences, emotions and perceptions about their environment. Environmental Education Research, 17(1), 53–67.
  • United Nations Educational, Scientific and Cultural Organization. (1978). Final Report: Intergovernmental Conference on Environmental Education. Paris, France: UNESCO.
  • *Utzschneider, A., & Pruneau, D. (2010). Students' decision-making process during a sustainable development project. International Journal of Sustainable Development & World Ecology, 17(1), 39–47.
  • Wells, N. M., & Lekies, K. S. (2006). Nature & the life course: Pathways from childhood nature experiences to adult environmentalism. Children, Youth and Environments, 16(1), 1–24.
  • West, S. E. (2015). Understanding participant and practitioner outcomes of environmental education. Environmental Education Research, 21(1), 45–60.
  • Wheeler, G., Thumlert, C., Glaser, L., Schoellhamer, M., & Bartosh, O. (2007). Environmental education report: Empirical evidence, exemplary models, and recommendations on the impact of environmental education on K–12 students. Olympia, WA: Washington Office of Superintendent of Public Instruction.
  • Williams, D. R., & Dixon, P. S. (2013). Impact of garden-based learning on academic outcomes in schools: Synthesis of research between 1990 and 2010. Review of Educational Research, 83(2), 211–235.
  • Zelezny, L. C. (1999). Educational interventions that improve environmental behaviors: A meta-analysis. Journal of Environmental Education, 31(1), 5–14.