Measuring senior high school students’ self-induced self-reflective thinking

Pages 494-502 | Received 09 Jul 2015, Accepted 27 Oct 2015, Published online: 03 Sep 2016

ABSTRACT

Theoretically, reflection is known to be an essential skill for improving learning on a metacognitive level. In practice, students may not use it of their own accord to improve this kind of learning because it can be mentally demanding. The author reports on the legitimation of an instrument measuring self-induced self-reflective thinking, which is reflection of one's own accord focused on improving general knowledge of the learning process. In 2 studies, the psychometric properties and nomological validity of open-ended self-induced self-reflective thinking questions were examined. Senior high school students responded to these questions and several measures of general knowledge of the learning process. Results showed statistically significant relationships between self-induced self-reflective thinking and general knowledge of the learning process. Implications for educational research are discussed.

Educational researchers of different theoretical perspectives have emphasized that students should become accomplished in using reflection with respect to their learning process (Baird, Fensham, Gunstone, & White, 1991; Boud, Keogh, & Walker, 1985; Bransford, Brown, & Cocking, 2000; Ertmer & Newby, 1996; Zimmerman, 1989). When students use reflection, they do so to improve how they learn by critically evaluating their learning experiences (Dewey, 1910; Mezirow, 1990). However, it is also generally known that it is difficult to measure students’ reflection, because of two problems.

The first problem involves the data-collection method. Often, self-report rating-scale questions are used to measure the frequency with which students consider reflection to be useful, effective, important, and so on (e.g., Kember et al., 2000; Mair, 2012; Phan, 2009). An advantage of this data-collection method is that quantitative data are obtained, which provides consistency in interpretation and enables inferential and advanced statistical analyses. However, a disadvantage of this method is that students have little opportunity to explain their selected response. Moreover, circling a rating-scale question can be done quickly and may produce ill-considered responses. These problems with rating-scale questions can be overcome by using qualitative data-collection methods (e.g., Bell, Kelton, McDonagh, Mladenovic, & Morrison, 2011), such as open-ended questions, interviews, observations, and reflective writing tasks (e.g., journals, portfolios, notebooks, diaries, rubrics, and online measures). Although these qualitative data-collection methods can provide insights into students’ understanding of reflection, difficulties include grading the constructed responses consistently and the burden students can face when they have to provide many constructed responses (Cronbach, 1951).

The second problem in the measurement of reflection is that students can differ in their actual use of reflection. Generally, research on reflection has shown that students find it difficult to make use of reflection (cf. Grossman, 2009; Rogers, 2001; Van Velzen, 2012). In this respect, Brown (1987) and Ertmer and Newby (1996) argued that reflection can be a mentally demanding skill that requires practice. For example, White and Frederiksen (1998) found that seventh-grade students needed explicit instruction in and practice of reflection. They also found that for the students to find reflection acceptable and, to a point, enjoyable, they needed to understand that it was the performance and not the student that needed to be reconsidered. Other research pointed out that students can lack the motivation to engage in reflection (De Bruin, Van der Schaaf, Oosterbaan, & Prins, 2012).

Therefore, it can be expected that students may refrain from engaging in reflection of their own accord to improve how they learn, because it can be mentally demanding. In addition, students who have been informed about reflection through instruction may be able to provide appropriate responses to questions inquiring about reflection even though they do not actually use it of their own accord to improve their learning. McNamara (2011) referred to this issue, in the context of metacognitive strategies, as the question of whether students reflect as a naturally occurring strategy, as distinct from being induced or prompted by others to use reflection.

In this study, these problems regarding the measurement of students’ understanding of reflection were addressed in two separate studies. The purpose of these two studies was to examine the legitimation of a relatively short measurement instrument consisting of three open-ended questions. The focus was on students’ reflection of their own accord to improve general knowledge of the learning process, which was called self-induced self-reflective thinking (SISRT). I begin with an overview of the research literature on how reflection is measured, to explain the characteristics of the SISRT questions. Next, I define SISRT on the basis of the research literature.

Measuring reflection

In educational research, reflection is measured with different instruments, such as questions and prompts (Bannert, 2006), journal writing (Boud, 2001), and computer-based tasks (Downing & Chim, 2004). Generally, reflection can be measured in two ways: as an awareness and attitude, by focusing on the relatively stable beliefs and perceptions of reflection, and as a skill, by focusing on students’ understanding of reflection. Instruments that measure reflection as an awareness and attitude usually consist of closed-ended self-report and rating-scale questions. An example is the Questionnaire for Reflective Thinking (Kember et al., 2000), which was designed to measure individual differences in perceived reflection. This instrument specifically differentiates between nonreflection, reflection, and critical reflection (Thorpe, 2004). That is, nonreflection involves habitual and thoughtful action, reflection involves the reconsideration of a situation to understand that situation, and critical reflection involves a reconsideration of the situation to obtain improvements by making a critical evaluation.

Notably, the distinction between reflection and critical reflection was found by exploring undergraduate students’ journal writing (Kember et al., 1999); that is, a qualitative method was used to discover this distinction. Generally, qualitative methods are useful for exploring phenomena to obtain an understanding of them. Recently, this qualitatively developed coding scheme by Kember et al. (1999) was used to measure the levels of reflection in students’ written journals (Bell et al., 2011). In this case, reflection was measured as a skill, in that the coding scheme provided insights into what students reflect upon.

However, a problem with qualitatively measuring students’ reflection via writing tasks is that the written texts do not necessarily include reflections. Reviews have shown that reflective writing tasks often consist of descriptive accounts of learning events without mentioning reflections (Cowan, 2014; Dyment & O'Connell, 2011). For instance, portfolios that also included a justification of the selected student work consisted of descriptions of what had been learned instead of mentioning how learning could be improved (Calfee & Perfumo, 1996). Therefore, because students are often reluctant to reflect, it seems that prompting (Black & Wiliam, 1998; Cacciamani, Cesareni, Martini, Ferrini, & Fujita, 2012) and instructor feedback (Ash, Clayton, & Atkinson, 2005; Elshout-Mohr, 1994) need to be included in reflective writing tasks.

For example, students can be asked to write an essay for their portfolio in which they reflect on their progress, areas of weakness, and plans for improvement (O'Sullivan et al., 2012). Other researchers have also reported on the importance of prompting in online reflection tasks (Chen, Wei, Wu, & Uden, 2009). Particularly the type of online reflection task that aims to develop and demonstrate reflection on the process of learning can support students in tracking their developmental changes over a period of time and encourage them to engage in self-directed learning (McDonald, 2012).

The point is that the use of questions in qualitative data-collection methods seems essential to ensure that students provide reflective responses. For example, McCrindle and Christensen (1995) found that university students who kept a learning journal on their learning process that included questions performed significantly better on the final exam than control-group students who did not keep such a journal. Furthermore, Berthold, Nückles, and Renkl (2007) found that students who were asked to reflect in learning journals improved their learning outcomes when they also received cognitive prompts or a combination of cognitive and metacognitive prompts. Accordingly, it can also be important to quantitize the obtained qualitative data to examine its merits. For example, Lew and Schmidt (2011), using coding software to analyze the content of 690 university students’ reflections on their learning goals, found weak positive and nonsignificant correlations between reflection and knowledge acquisition.

In the studies in this article, three open-ended SISRT questions were used to enable the students to respond comprehensively to what were assumed to be, for them, difficult questions. In this way, qualitative data were obtained whose content could be analyzed relatively quickly because there were only three questions.

Defining self-induced self-reflective thinking

So far, the word reflection has been employed as it is commonly used, to refer to the evaluation of experiences with the aim of obtaining improvements. However, in educational research, reflection is defined in varied ways and used in different contexts (Denton, 2011; Moon, 1999). Most definitions of reflection refer to Dewey (1910), who described reflection as a systematic way of thinking through one's experiences to attain additional or new information. Mezirow (1990) elaborated this definition by distinguishing between reflection and critical reflection, the latter referring to a person focused on meaningful changes. Furthermore, Von Wright (1992) added the prefix self to reflection to emphasize that self-reflection involves seeing oneself as the active agent with different alternatives. In the studies in this article, self-reflection was defined as critically evaluating experiences to obtain improvements while taking into account oneself as the active agent with different alternatives. In addition, this self-reflection was retrospective, or reflection-on-action (Schön, 1983).

Research on reflection also spans different contexts (Baird, 1986; Grossman, 2009). For example, reflection has been studied in primary and secondary education (Brown, 1997), higher education (Rogers, 2001), and work-based education (Warhurst, 2008). In the studies in this article, the context consisted of senior high school students because the focus was on the improvement of metacognitive knowledge. This knowledge is essential for students to direct their cognitive processes, yet relatively few educational research studies investigate it (Cotterall & Murray, 2009; Efklides & Misailidi, 2010). Generally, metacognition is divided into metacognitive knowledge and the executive processes. Metacognitive knowledge is known to include an awareness and understanding of one's cognitive processes (Brown, 1987; Flavell, 1979; Pintrich, 2002). To develop metacognitive knowledge, some researchers have focused specifically on declarative metacognitive knowledge, or knowing what is needed to direct cognitive processes, as distinct from knowing how to regulate cognitive processes (Brown, 1987; Kluwe, 1982; Kuhn, 2000).

In the context of learning, this declarative metacognitive knowledge refers to general knowledge of the learning process, or knowing what is needed to learn effectively (Van Velzen, in press). However, self-reflective thinking and general knowledge of the learning process are scarcely studied concepts, and little is known about their relationship; at present, they are concepts within a theoretical model. Theoretically, it was expected that the kind of reflections needed to improve general knowledge of the learning process would differ from the kind of reflections that are focused on regulating cognitive processes. In the studies in this article, the term thinking was added to self-reflection to point out that self-reflective thinking aims to improve knowledge through thinking through what is needed to learn effectively. In other words, self-reflective thinking was defined as the retrospective reconsideration of learning experiences to improve general knowledge of the learning process, in which critical evaluations are made while taking into account oneself as an active agent with alternatives.

Furthermore, it was expected that students’ self-reflective thinking could occur either consciously or unconsciously. Conscious self-reflective thinking is intentional and can lead to an improved understanding of general knowledge of the learning process. However, it can be initiated either because others ask students to reflect or because students do so of their own accord. Therefore, to include the deliberate initiative of a student to use self-reflective thinking, the term self-induced was added in the studies in this article. The term self-induced with regard to reflection was also used by Dunlap and Grabinger (2003), who argued that people who engage in lifelong learning do so of their own accord because they want to improve the quality of their lives. In the same vein, other researchers have pointed out that to prepare people for lifelong learning, educational opportunities to develop the self-induced use of metacognition and self-reflection are essential (cf. Wiersema & Licklider, 2007).

Although research studies exist on reflection to improve metacognitive knowledge in the context of learning (Brown, 1987; Cotterall & Murray, 2009; Pintrich, 2002; White & Frederiksen, 2005), no studies were found that focused on measuring students’ use of SISRT. Therefore, the following research questions were posed:

  • Research Question 1. Do the SISRT questions provide for interpretability?

  • Research Question 2. Do the SISRT questions provide for nomological validity?

General method

In this article, two separate studies are presented that investigated the legitimacy of the measurement of SISRT. These two studies were similar with respect to the target group of participants, the SISRT questions, the data-collection procedure, and the data analysis of the SISRT questions. These commonalities are described in more detail in this section.

Target group of participants

Because self-reflection is known to be an active and effortful enterprise, adolescent students can be expected to be better at reflecting on their learning processes than younger students (Brown, 1987; Ertmer & Newby, 1996). Senior high school students (i.e., Grades 9–12 of university-preparatory education) were selected as the target group because these students were expected to have developed metacognitive knowledge (Schneider, 2008). In addition, these students are still developing their metacognitive knowledge (Weil et al., 2013), which makes it worthwhile to measure their SISRT. Convenience sampling was used, in that the senior high school students who were available for the study were recruited as participants.

The participants all came from high schools in the vicinity of Amsterdam in the Netherlands. All invited students participated and were included in the data analyses because all had filled in, at least in part, the SISRT questions. The participants followed their schools’ obligatory study-skill courses on study techniques, planning, reflection, and school life in general, which take place from Grades 7 to 12. In these courses, the participants had been informed about what reflection is and for which purposes it can be used. Because all high school students in the Netherlands follow these obligatory study-skill courses, they can be expected to know what reflection means. However, the participants had been specifically trained neither in SISRT nor in developing general knowledge of the learning process.

Data-collection procedure

The schools informed the parents about the research investigation that would take place and gave them the opportunity to exclude their child or children from participating. The data were collected in one session that provided the participants with enough time to respond to the questions. The data-collection session was held during a lesson and was chaired by a familiar teacher or mentor. The teachers and mentors received strict instructions to inform the participants that their responses would be used only for the research purpose of obtaining an understanding of reflection. Next, the teachers and mentors distributed the questions. The questions were administered in a paper-and-pencil format, and the participants were specifically asked to respond to the questions as comprehensively as possible. The participants first answered the questions regarding general knowledge of the learning process and then the SISRT questions. Finally, the teachers were expressly instructed not to react to or answer the participants’ questions.

SISRT questions

Three open-ended SISRT questions were used to enable the participants to write down a response in their own words, stating what they knew about SISRT. The three questions followed the process of reflection (Boud et al., 1985; Moon, 1999), which begins with reconsidering one's understanding by obtaining an overview of the situation. Next, the focus is on the essential features of the situation. Finally, a critical evaluation of the situation takes place to establish improvements.

In other words (see the Appendix), the first SISRT question referred to analyzing, or obtaining a general understanding and interpretation of learning experiences. The second SISRT question referred to evaluating essential features or outcomes of learning experiences. Finally, the third SISRT question referred to critical evaluation, or synthesizing information from learning experiences. The questions were stated in this order to support the participants in responding, because it was assumed that these questions would be difficult for them.

Data analysis regarding the SISRT questions

The responses of the participants were carefully interpreted in line with the operationalization of SISRT that was established in a previous, preliminary study (Van Velzen, 2015). That is, the operationalization of senior high school students’ SISRT provided three hierarchical categories, of which the highest level (i.e., self-induced) was defined as indicating an understanding of the process of SISRT, which was interpreted differently per SISRT question (see Table 1). For this operationalization, intercoder reliability was established with another person, who was informed (i.e., not trained) about the categories; the resulting reliability of κ = .74 was considered good (Robson, 1993).

Table 1. Operationalization of the SISRT questions.
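
As an illustration of how such intercoder agreement can be computed, the following minimal Python sketch applies Cohen's kappa to two coders’ category assignments; the response codes and the use of scikit-learn are assumptions for illustration, not the original analysis.

```python
# Minimal sketch: Cohen's kappa for two coders' SISRT category assignments.
# The codes below are hypothetical; the study used three hierarchical
# categories (absent, self-reflective thinking, and SISRT).
from sklearn.metrics import cohen_kappa_score

coder_a = ["absent", "srt", "sisrt", "srt", "srt", "absent", "srt", "sisrt"]
coder_b = ["absent", "srt", "srt",   "srt", "srt", "absent", "srt", "sisrt"]

kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa = {kappa:.2f}")  # values around .74 were considered good
```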

This operationalization specified that relevant responses not referring to the process of self-reflective thinking were interpreted as providing general statements and examples. In contrast, relevant responses including a reference to the process of self-reflective thinking were interpreted as providing an indication of self-induced self-reflective thinking, because the use of self-reflective thinking of one's own accord will lead to an understanding of what the process of self-reflective thinking encompasses. To clarify this difference, examples of both kinds of responses are presented per SISRT question. Examples regarding the first SISRT question were the following: “That I will have to work on it” and “That I will check if I need more or additional information and explanations.” The latter response indicated the process of self-reflective thinking in that it included checking, and this was done for a specific purpose, suggesting that one knows why and how to undertake the activity.

Examples regarding the second SISRT question were “That I ask myself if it is useful to me” and “That I can see how it helps me to understand things that I encounter in my life.” Here, the latter example indicated the process of self-reflective thinking in that it stated the actual usefulness of having this kind of reconsideration, suggesting a focus on growth in learning and personal development. Finally, examples regarding the third SISRT question were the following: “That I will do it differently the next time” and “That I will need to repeat subject matter to prevent that I make mistakes.” The latter example indicated the process of self-reflective thinking in that it stated the usefulness of a particular improvement, suggesting that one can be specific about why the application of a study technique could be effective.

In line with the operationalization obtained in the previous study, in the two studies presented in this article the responses of the participants to the SISRT questions were scored as follows: (a) blank responses and responses unrelated to the question were scored as absent self-reflective thinking, (b) responses that were related to the question but did not include a reference to the process of self-reflective thinking were scored as self-reflective thinking, and (c) responses that were related to the question and also included a reference to the process of self-reflective thinking were scored as SISRT. When multiple responses were given to one question, the highest score was noted down.
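
A minimal sketch of this scoring rule in Python follows; the category labels are hypothetical, and taking the maximum implements the rule that the highest score is kept when multiple responses are given.

```python
# Minimal sketch of the scoring rule: 0 = absent self-reflective thinking,
# 1 = self-reflective thinking, 2 = SISRT.
SCORES = {"absent": 0, "srt": 1, "sisrt": 2}

def score_question(coded_responses):
    """Return the ordinal score for one SISRT question.

    coded_responses: list of category labels assigned by the coder;
    an empty list counts as a blank response (absent, score 0).
    """
    if not coded_responses:
        return 0
    # Multiple responses to one question: keep the highest score.
    return max(SCORES[c] for c in coded_responses)

# Example: a student gave two responses to one question, coded srt and sisrt.
print(score_question(["srt", "sisrt"]))  # -> 2
```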

Next, to examine the reliability of the SISRT questions, Cronbach's alpha was calculated. Interitem correlations and a principal component analysis were also calculated to examine the interrelatedness of the questions (Cronbach & Shavelson, 2004). The principal component analysis was based on an extraction with eigenvalues, which was used to examine interrelationships among the items and to identify whether the items formed a principal (i.e., first) dimension (Hair, Black, Babin, & Anderson, 2010). More specifically, because the interpretability of a test depends on its homogeneity, the level of the factor loadings was examined (Cronbach, 1951).
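
To make these analyses concrete, the following minimal sketch computes Cronbach's alpha, the interitem correlations, and the loadings on the first principal component for hypothetical data; it uses numpy only and is an illustration under stated assumptions, not the original analysis script.

```python
# Minimal sketch: Cronbach's alpha, interitem correlations, and first
# principal-component loadings for three items. Data are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
factor = rng.normal(size=(100, 1))      # shared latent dimension (assumption)
X = factor + rng.normal(size=(100, 3))  # 100 students x 3 items

# Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / total variance)
k = X.shape[1]
alpha = (k / (k - 1)) * (1 - X.var(axis=0, ddof=1).sum()
                         / X.sum(axis=1).var(ddof=1))

# Interitem (here: Pearson) correlation matrix
R = np.corrcoef(X, rowvar=False)

# First principal component of the correlation matrix ("extraction with
# eigenvalues"); loadings = eigenvector scaled by sqrt(eigenvalue).
eigvals, eigvecs = np.linalg.eigh(R)    # eigenvalues in ascending order
loadings = np.abs(eigvecs[:, -1] * np.sqrt(eigvals[-1]))

print(f"alpha = {alpha:.2f}")
print("first-component loadings:", np.round(loadings, 2))
```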

Nomological validity was examined because the relationship between SISRT and general knowledge of the learning process involves concepts of a theoretically based model. That is, theoretically, general knowledge of the learning process is related to SISRT; however, evidence for this relationship is not yet available. Moreover, because the instruments that measure general knowledge of the learning process are under development, principal component analyses were calculated to establish suitable items. That is, to obtain a set of items that is reliable and not capable of division into discrete clusters (Cronbach, 1951), only the items that loaded statistically significantly on the main component (Hair et al., 2010) were included in further analyses.

Nonparametric analyses were used because the data were at an ordinal level. Spearman rank-order correlations were calculated to examine the statistical significance of the obtained correlations, that is, whether the expected relationship with SISRT could have occurred by chance, given that the participants were not specifically trained in SISRT and general knowledge of the learning process.
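
A minimal sketch of such a Spearman rank-order correlation, with hypothetical ordinal scores, is shown below using scipy; it illustrates the test, not the original data.

```python
# Minimal sketch: Spearman rank-order correlation between ordinal SISRT
# scores and an ordinal measure of general knowledge of the learning process.
# The data are hypothetical and constructed to be weakly related.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
sisrt = rng.integers(0, 3, size=108)                         # scores 0-2
gklp = np.clip(sisrt + rng.integers(-2, 3, size=108), 0, 3)  # related measure

rho, p = spearmanr(sisrt, gklp)
print(f"r_s = {rho:.2f}, p = {p:.3f}")  # small p: unlikely to be chance
```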

Study 1

Study 1 considers the interpretability of the SISRT questions and the nomological validity with general knowledge of the learning process.

Method

Participants

The participants were 108 eleventh-grade students (66 women, 41 men, one unknown; Mage = 17 years). The participants came from six high schools that had reacted to a call for research participants sent to 20 high schools. The data were collected at the beginning of the school year. Because relatively few men were included in Study 1, the SISRT data were examined for sex differences; no statistically significant difference was found (p = .94; Mann-Whitney U test at the specified .05 level).
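
The sex-difference check reported above can be carried out with a Mann-Whitney U test, as in this minimal scipy sketch; the group scores are hypothetical.

```python
# Minimal sketch: Mann-Whitney U test for sex differences in SISRT scores
# (Study 1 reported p = .94 at the .05 level; the data here are made up).
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(2)
women = rng.integers(0, 3, size=66)  # ordinal SISRT scores, women
men = rng.integers(0, 3, size=41)    # ordinal SISRT scores, men

u, p = mannwhitneyu(women, men, alternative="two-sided")
print(f"U = {u:.0f}, p = {p:.2f}")   # p > .05 -> no significant difference
```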

Materials

To examine nomological validity, 14 open-ended questions were used that measured students’ understanding of general knowledge of developing cognitive knowledge (KDCK). The KDCK was recently developed (Van Velzen, in press) and provided ordinal data, or three hierarchical scores (i.e., absent, tacit, and explicit KDCK), in line with the theory of Schraw and Moshman (1995). The development of the KDCK showed a reliability of α = .85 and preliminary construct validation (rs = .24, p = .05). The KDCK included three kinds of items: declarative, procedural, and conditional general knowledge of the learning process, in line with Jacobs and Paris (1987) and Schraw and Moshman (1995). Examples of the KDCK questions were, regarding declarative knowledge, “I know if I understand the essence of subject matter, because I focus on…”; regarding procedural knowledge, “I know if I can summarize subject matter, because I focus on…”; and regarding conditional knowledge, “I know when a learning technique is needed, because I focus on…”

Results and discussion

The reliability of the SISRT questions was α = .60, which was acceptable when taking into account the number and type of the questions (Kehoe, 1995; Nunnally, 1978). Examination of the interrelationships between the SISRT questions through interitem correlations showed positive homogeneity (see Table 2). That is, although one correlation was below the required .30, the mean correlation was .32 (Robinson, Shaver, & Wrightsman, 1991). Homogeneity below .30 may in this case be a consequence of each SISRT question referring to a different element (i.e., analyze, evaluate, and synthesize) of the self-reflective thinking process.

Table 2. Interitem correlation for the SISRT questions in the first study.

Also, the principal component analysis showed one component, and the factor loadings for SISRT questions 1–3 were .82, .71, and .69, respectively. When taking into account the sample size, the level of these factor loadings was assumed statistically significant at the .05 level with a power level of 80% and standard errors assumed to be twice those of conventional correlation coefficients (Hair et al., 2010). Therefore, it was concluded that the interpretability of the SISRT questions was acceptable, in that the instrument was interpretable and all students were able to respond to the questions (Cronbach, 1951).

Descriptive statistics of the SISRT data showed an average mean score (M = 1.91, SD = 0.43), and the percentages were the following: absent self-reflective thinking = 15%, self-reflective thinking = 78%, and SISRT = 7%.

Regarding the KDCK, Table 3 shows the results of the principal component analysis based on an extraction with eigenvalues, used to obtain items with factor loadings on the first component that were > .55. These factor loadings were assumed statistically significant at the .05 level with a power level of 80% and standard errors assumed to be twice those of conventional correlation coefficients, taking into account the sample size (Hair et al., 2010). Examination of the interpretability of the KDCK with 14 items showed a reliability of α = .88 (M = 1.94, SD = 0.41) and a mean interitem correlation of .36 (see Table 4). It was concluded that the KDCK was acceptable.

Table 3. Factor loadings for principal component analysis based on an extraction with eigenvalues for KDCK.

Table 4. Interitem correlation for the KDCK in the first study.
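
The > .55 criterion applied above follows the Hair et al. (2010) guideline that ties the loading magnitude needed for significance (.05 level, 80% power) to sample size. A minimal sketch of that lookup follows; the tabulated values are as commonly reproduced from Hair et al. and should be read as an assumption here.

```python
# Sketch of the Hair et al. (2010) guideline: minimum sample size at which a
# factor loading of a given magnitude is considered significant (.05 level,
# 80% power, standard errors twice those of conventional correlations).
# Table values as commonly reproduced; included here as an assumption.
LOADING_TO_MIN_N = {
    0.30: 350, 0.35: 250, 0.40: 200, 0.45: 150, 0.50: 120,
    0.55: 100, 0.60: 85, 0.65: 70, 0.70: 60, 0.75: 50,
}

def min_significant_loading(n):
    """Smallest tabulated loading that is significant for sample size n."""
    candidates = [l for l, min_n in LOADING_TO_MIN_N.items() if n >= min_n]
    return min(candidates) if candidates else None

print(min_significant_loading(108))  # -> 0.55 (threshold used in Study 1)
print(min_significant_loading(125))  # -> 0.5  (threshold used in Study 2)
```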

The relationship between the KDCK and the SISRT was statistically significant (rs = .33, p = .01), showing that the relationship was not a result of chance. This raised the question of whether the same results could be obtained with other measures of general knowledge of the learning process.

Study 2

Study 2 considers the interpretability of the SISRT questions and the nomological validity with two other instruments measuring general knowledge of the learning process.

Method

Participants

The participants were 125 ninth-grade students (81 women, 44 men; Mage = 15 years). The participants came from one high school that had reacted to a call for research participants sent to six high schools. The data were collected in June, at the end of the school year. Examination of the SISRT data for sex differences showed no statistically significant difference (p = .23; Mann-Whitney U test at the specified .05 level).

Materials

To examine nomological validity, the Metacognitive questionnaire for General Knowledge of the Learning process (MGKL) was used to measure students’ understanding of general knowledge of the learning process, together with a rating-scale instrument that measured the frequency with which students used STudy Techniques (STT; Van Velzen, 2013). The MGKL had been shown to be reliable (α = .57 for seven items), which is acceptable for a multiple-choice instrument (Kehoe, 1995). However, only four items provided the required > .55 factor loadings (i.e., principal component analysis based on an extraction with eigenvalues). Therefore, the STT was included (α = .58 for six items) because of its strong correlation with the MGKL (rs = .79, p = .01). Both instruments have five-point scoring. An example of the MGKL questions was “I understand the essence of subject matter: (a) when I understand it; (b) when subject matter is repeated often; (c) when I can explain it to myself; (d) when the teacher says that it is important; and (e) when I learn it again.” An example of the STT questions was “How often do you make summaries?: (a) never; (b) sometimes; (c) neutral; (d) often; and (e) always.”

A secondary measure to examine nomological validity consisted of two questions developed to measure general knowledge of the learning process (GKLP; Van Velzen, 2013). The GKLP included two choices regarding unconsidered (i.e., habitual) versus considered learning. That is, the students had to choose between “I always learn in the same way” and “Beforehand I consider the best way to learn,” and between “Whenever my marks are low I reconsider how to improve my learning” and “Whenever my marks are low I will learn for a longer period of time.” The GKLP was scored as (a) blank, or no response provided; (b) low level, or one blank and one habitual or unconsidered learning response; (c) medium level, or one habitual or unconsidered and one considered learning response; and (d) high level, or two considered learning responses.
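
This GKLP scoring rule can be expressed as a small function, as in the hypothetical sketch below; cases the article does not specify (e.g., two unconsidered responses) are mapped to the low level here as an assumption.

```python
# Minimal sketch of the GKLP scoring rule. Each response is "considered",
# "unconsidered" (habitual), or None (blank).
def score_gklp(response1, response2):
    """Return the GKLP level for the two GKLP questions.

    (a) blank: no response provided; (b) low: one blank and one unconsidered
    response; (c) medium: one unconsidered and one considered response;
    (d) high: two considered responses. Unspecified cases (e.g., two
    unconsidered responses) are treated as low here; this is an assumption.
    """
    responses = [response1, response2]
    considered = responses.count("considered")
    blanks = responses.count(None)
    if blanks == 2:
        return "blank"
    if considered == 2:
        return "high"
    if considered == 1 and blanks == 0:
        return "medium"
    return "low"

print(score_gklp("considered", "unconsidered"))  # -> medium
```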

Results and discussion

The reliability of the SISRT questions was α = .62, which is acceptable for this kind of question (Kehoe, 1995; Nunnally, 1978). The interitem correlations showed positive homogeneity (see Table 5), in that none was below .30 (Robinson et al., 1991). The principal component analysis showed one component, and the factor loadings for SISRT questions 1–3 were .81, .74, and .73, respectively. These factor loadings were statistically significant at the .05 level (with a power level of 80% and standard errors assumed to be twice those of conventional correlation coefficients, taking into account the sample size; Hair et al., 2010). Therefore, it was concluded that the interpretability of the SISRT questions was acceptable, in that the instrument was interpretable and all students were able to respond to the questions (Cronbach, 1951).

Table 5. Interitem correlation for the SISRT questions in the second study.

Descriptive statistics of the SISRT data showed an average mean score (M = 1.97, SD = 0.37), and the percentages were 12% for absent self-reflective thinking, 83% for self-reflective thinking, and 5% for SISRT.

Regarding the MGKL/STT, Table 6 shows the results of the principal component analysis based on an extraction with eigenvalues, used to obtain items with factor loadings on the first component that were > .50. These factor loadings were statistically significant at the .05 level, with a power level of 80% and standard errors assumed to be twice those of conventional correlation coefficients, taking into account the sample size (Hair et al., 2010). The MGKL/STT provided five items with a reliability of α = .60, which is acceptable for an instrument that also includes multiple-choice questions (Kehoe, 1995). Although the mean interitem correlation of .22 was below the required .30 (see Table 7), which may have been the result of mixing two instruments, it was decided to use the MGKL/STT on the basis of its factor loadings and reliability.

Table 6. Factor loadings for principal component analysis based on an extraction with eigenvalues for MGKL/STT.

Table 7. Interitem correlation for the MGKL/STT questions in the second study.

Descriptive results for the MGKL/STT showed M = 2.82 (SD = 0.80), and the percentages of the responses were 4% for wrong or never, 36% for weak or sometimes, 39% for moderate or neutral, 21% for good or often, and 0% for explanatory or always. The percentages for the GKLP were 9% blank, 21% low-level, 37% medium-level, and 33% high-level responses.

Table 8 shows that the examination of nomological validity provided statistically significant results for the MGKL/STT (p = .01) and the GKLP (p = .05), though the positive correlations were small (rs = .23 and rs = .21, respectively). When taking into account the results regarding the interpretability of the validation instruments, the statistically significant results suggest that the SISRT questions were validated. However, the small correlations indicated that more validation instruments are needed to include other contributing variables (Campbell, 1988).

Table 8. Psychometric properties and nomological validity for the major study variables in the second study.

Summary and concluding discussion

The results regarding the measurement of SISRT reported in this article seem encouraging, in that the interpretability of the SISRT questions appeared to be acceptable when the uniqueness and complexity of the questions were taken into account. Lengthening the SISRT instrument by developing more items may increase reliability, though it may also have the disadvantage that interpretability can decrease when students have to respond to more questions (Cronbach, 1951).

Furthermore, a limitation of the studies in this article is that the results regarding validation seem incomplete. That is, nomological validity was partly established in that statistically significant relationships between SISRT and the measures of general knowledge of the learning process were obtained; however, the correlations, though positive, were small. This suggests that more variables are involved in the theoretically based model (Robinson et al., 1991).

When taking into account these limitations of the studies in this article, the results suggested that a relatively low percentage of the participants appeared to be self-induced self-reflective thinkers. That is, 7% of the responses in Study 1 and 5% in Study 2 were scored as SISRT. This result further emphasizes the importance of accurately measuring senior high school students’ SISRT in educational research.

This raises the question of which further research could be conducted with the SISRT questions to obtain a better understanding of students’ SISRT. First, it seems likely that senior high school students can provide information about their reasons for engaging or not engaging in SISRT. In this respect, research on reflection indicated that students often encounter barriers to reflection in higher education (Davis, 2003) and professional education (Gunn, 2010; Holloway & Gouthro, 2011). Therefore, in future studies, senior high school students could be asked to provide further insights into the factors that hinder and encourage them to use SISRT.

Second, senior high school students may also provide information regarding the influence of the teaching of reflection. That is, the low percentage of SISRT could be related to how reflection is taught. Possibly, teaching may not always support senior high school students to such a degree that they become self-induced in their self-reflective thinking. For example, to become self-induced in self-reflective thinking, senior high school students are likely to need both structured and unstructured self-reflective thinking tasks, though the latter kind of task is not often included in schools (cf. Dean, Sykes, Agostinho, & Clements, 2012). Furthermore, it also seems likely that senior high school students need to come to understand, through teaching, the value of putting effort into using SISRT. Finally, it is well known that self-reflective thinking is a cognitively demanding activity for senior high school students, and the mental effort required to perform self-reflective thinking may deter them from becoming self-induced in self-reflective thinking. Accordingly, senior high school students could be taught techniques for reducing cognitive overload (Van Merriënboer & Sweller, 2005) with regard to self-reflective thinking. Therefore, in future studies, the influence of the teaching of self-reflective thinking could be taken into account when studying senior high school students’ SISRT.

Third, senior high school students’ abilities to analyze, evaluate, and synthesize their learning experiences might need to be taken into account in future studies. For example, it is known that students who reflect often appear to report on this in the form of descriptions instead of evaluations (Dyment & O'Connell, 2011). Research studies on this topic often mention the training of self-assessment (cf. Ash et al., 2005) and critical evaluation (Van Velzen, 2016) to support the learning of self-reflection and self-reflective thinking, respectively.

In conclusion, the results of the studies presented in this article suggest that the SISRT questions can measure senior high school students’ SISRT. However, a nomological network needs to be developed that can further explain the theoretical relationships between SISRT and other constructs.

References

  • Ash, S. L., Clayton, P. H., & Atkinson, M. P. (2005). Integrating reflection and assessment to capture and improve student learning. Michigan Journal of Community Service Learning, 11(2), 49–60.
  • Baird, J. R. (1986). Improving learning through enhanced metacognition: A classroom study. European Journal of Science Education, 8, 263–282.
  • Baird, J. R., Fensham, P., Gunstone, R., & White, R. (1991). The importance of reflection in improving science teaching and learning. Journal of Research in Science Teaching, 28, 163–182.
  • Bannert, M. (2006). Effects of reflection prompts when learning with hypermedia. Journal of Educational Computing Research, 35, 359–375.
  • Bell, A., Kelton, J., McDonagh, N., Mladenovic, R., & Morrison, K. (2011). A critical evaluation of the usefulness of a coding scheme to categorise levels of reflective thinking. Assessment and Evaluation in Higher Education, 36, 797–815.
  • Berthold, K., Nückles, M., & Renkl, A. (2007). Do learning protocols support learning strategies and outcomes?: The role of cognitive and metacognitive prompts. Learning and Instruction, 17, 564–577.
  • Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education: Principles, Policy, and Practice, 5, 7–74.
  • Boud, D. (2001). Using journal writing to enhance reflective practice. New Directions for Adult and Continuing Education, 90(2), 9–18.
  • Boud, D., Keogh, R., & Walker, D. (1985). Promoting reflection in learning: A model. In D. Boud, R. Keogh, and D. Walker (Eds.), Reflection: Turning experience into learning (pp. 18–40). Oxon, UK: Routledge.
  • Bransford, J. D., Brown, A. L., & Cocking, R. R. (2000). How people learn: Brain, mind, experience, and school. Washington, DC: National Academy Press.
  • Brown, A. L. (1987). Metacognition, executive control, self-regulation, and other more mysterious mechanisms. In F. E. Weinert & R. H. Kluwe (Eds.), Metacognition, motivation, and understanding (pp. 65–116). Hillsdale, NJ: Erlbaum.
  • Brown, A. L. (1997). Transforming schools into communities of thinking and learning about serious matters. American Psychologist, 52, 399–413.
  • Cacciamani, S., Cesareni, D., Martini, F., Ferrini, T., & Fujita, N. (2012). Influence of participation, facilitator styles, and metacognitive reflection on knowledge building in online university courses. Computer and Education, 58, 874–884.
  • Calfee, R. C., & Perfumo, P. (Eds.). (1996). Writing portfolios in the classroom. Mahwah, NJ: Erlbaum.
  • Campbell, D. T. (1988). Definitional versus multiple operationism. In E. S. Overman (Ed.), Methodology and epistemology for social science: Selected papers (pp. 31–36). Chicago, IL: University of Chicago Press.
  • Chen, N. S., Wei, C. W., Wu, K. T., & Uden, L. (2009). Effects of high level prompts and peer assessment on online learners’ reflection levels. Computers and Education, 52, 283–291.
  • Cotterall, S., & Murray, G. (2009). Enhancing metacognitive knowledge: Structure, affordance and self. System, 37, 34–45.
  • Cowan, J. (2014). Noteworthy matters for attention in reflective journal writing. Active Learning in Higher Education, 15, 53–64.
  • Cronbach, L. J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16, 297–334.
  • Cronbach, L. J., & Shavelson, R. J. (2004). My current thoughts on coefficient alpha and successor procedures. Educational and Psychological Measurement, 64, 391–418.
  • Davis, M. (2003). Barriers to reflective practice: The changing nature of higher education. Active Learning in Higher Education, 4, 243–255.
  • De Bruin, H. L., Van der Schaaf, M. F., Oosterbaan, A. E., & Prins, F. J. (2012). Secondary-school students’ motivation for portfolio reflection. Irish Educational Studies, 31, 415–531.
  • Dean, B. A., Sykes, C., Agostinho, S., & Clements, M. (2012). Reflective assessment in work-integrated learning: To structure or not to structure, that was our question. Asia-Pacific Journal of Cooperative Education, 13, 103–113.
  • Denton, D. (2011). Reflection and learning: Characteristics, obstacles, and implications. Educational Philosophy and Theory, 43, 838–852.
  • Dewey, J. (1910). How we think. Boston, MA: D.C. Heath.
  • Downing, K., & Chim, T. M. (2004). Reflectors as online extraverts? Educational Studies, 30, 265–276.
  • Dunlap, J. C., & Grabinger, S. (2003). Preparing students for lifelong learning: A review of instructional features and teaching methodologies. Performance Improvement Quarterly, 16(2), 6–25.
  • Dyment, J. E., & O'Connell, T. S. (2011). Assessing the quality of reflection in student journals: A review of the research. Teaching in Higher Education, 16, 81–97.
  • Efklides, A., & Misailidi, P. (2010). Introduction: The present and the future of metacognition. In A. Efklides & P. Misailidi (Eds.), Trends and prospects in metacognition research (pp. 1–19). New York, NY: Springer.
  • Elshout-Mohr, M. (1994). Feedback and self-instruction. European Education, 26, 58–73.
  • Ertmer, P. A., & Newby, T. J. (1996). The expert learner: Strategic, self-regulated, and reflective. Instructional Science, 24, 1–24.
  • Flavell, J. H. (1979). Metacognition and cognitive monitoring: A new area of cognitive developmental inquiry. American Psychologist, 34, 906–911.
  • Grossman, R. (2009). Structures for facilitating student reflection. College Teaching, 57, 15–22.
  • Gunn, C. L. (2010). Exploring MATESOL students’ resistance to reflection. Language Teaching Research, 14, 208–223.
  • Hair, J. F., Black, W. C., Babin, B. J., & Anderson, R. E. (2010). Multivariate data analysis (7th ed.). Upper Saddle River, NJ: Prentice Hall.
  • Holloway, S., & Gouthro, P. A. (2011). Teaching resistant novice educators to be critically reflective. Discourse: Studies in the Cultural Politics of Education, 32, 29–41.
  • Jacobs, J. E., & Paris, S. G. (1987). Children's metacognition about reading: Issues in definition, measurement, and instruction. Educational Psychologist, 22, 255–278.
  • Kehoe, J. (1995). Basic item analysis for multiple-choice tests. Practical Assessment, Research and Evaluation, 4(1). Retrieved from http://eric.ed.gov/ED398237
  • Kember, D., Jones, A., Loke, A. Y., McKay, J., Sinclair, K., Tse, H., … Yeung, E. (1999). Determining the level of reflective thinking from students’ written journals using a coding scheme based on the work of Mezirow. International Journal of Lifelong Education, 18, 18–30.
  • Kember, D., Leung, D. Y. P., Jones, A., Loke, A. Y., McKay, J., Sinclair, K., … Yeung, E. (2000). Development of a questionnaire to measure the level of reflective thinking. Assessment and Evaluation in Higher Education, 25, 381–395.
  • Kluwe, R. H. (1982). Cognitive knowledge and executive control: Metacognition. In D. R. Griffin (Ed.), Animal mind—human mind (pp. 201–224). New York, NY: Springer-Verlag.
  • Kuhn, D. (2000). Metacognitive development. Current Directions in Psychological Science, 9, 178–181.
  • Lew, M. D. N., & Schmidt, H. G. (2011). Self-reflection and academic performance: Is there a relationship? Advances in Health Sciences Education, 16, 529–545.
  • Mair, C. (2012). Using technology for enhancing reflective writing, metacognition, and learning. Journal of Further and Higher Education, 36, 147–167.
  • McCrindle, A. R., & Christensen, C. A. (1995). The impact of learning journals on metacognitive and cognitive processes and learning performance. Learning and Instruction, 5, 167–185.
  • McDonald, B. (2012). Portfolio assessment: Direct from the classroom. Assessment and Evaluation in Higher Education, 37, 335–347.
  • McNamara, D. S. (2011). Measuring deep, reflective comprehension and learning strategies: Challenges and successes. Metacognition and Learning, 6, 195–203.
  • Mezirow, J. (1990). How critical reflection triggers transformative learning. In J. Mezirow (Ed.), Fostering critical reflection in adulthood: A guide to transformative and emancipatory learning (pp. 1–20). San Francisco, CA: Jossey-Bass.
  • Moon, J. (1999). Reflection in learning and professional development. London, England: Kogan Page.
  • Nunnally, J. C. (1978). Psychometric theory (2nd ed.). New York, NY: McGraw-Hill.
  • O'Sullivan, A. J., Harris, P., Hughes, C. S., Toohey, S. M., Balasooriya, C., Velan, G., … McNeil, H. P. (2012). Linking assessment to undergraduate student capabilities through portfolio examination. Assessment and Evaluation in Higher Education, 37, 379–391.
  • Phan, H. P. (2009). Exploring students’ reflecting thinking practice, deep processing strategies, effort, and achievement goal orientations. Educational Psychology: An International Journal of Experimental Educational Psychology, 29, 297–313.
  • Pintrich, P. R. (2002). The role of metacognitive knowledge in learning, teaching, and assessing. Theory into Practice, 41, 219–225.
  • Robinson, J. P., Shaver, P. R., & Wrightsman, L. S. (1991). Criteria for scale selection and evaluation. In J. P. Robinson, P. R. Shaver, & L. S. Wrightsman (Eds.), Measures of personality and social psychological attitudes (pp. 1–16). San Diego, CA: Academic Press.
  • Robson, C. (1993). Real world research: A resource for social scientists and practitioner-researchers. Oxford, England: Blackwell.
  • Rogers, R. R. (2001). Reflection in higher education: A concept analysis. Innovative Higher Education, 26, 37–57.
  • Schneider, W. (2008). The development of metacognitive knowledge in children and adolescents: Major trends and implications for education. Mind, Brain, and Education, 2, 114–121.
  • Schön, D. A. (1983). The reflective practitioner. London, England: Temple Smith.
  • Schraw, G., & Moshman, D. (1995). Metacognitive theories. Educational Psychology Review, 7, 351–371.
  • Thorpe, K. (2004). Reflective learning journals: From concept to practice. Reflective Practice: International and Multidisciplinary Perspectives, 5, 327–343.
  • Van Merriënboer, J. J. G., & Sweller, J. (2005). Cognitive load theory and complex learning: Recent developments and future directions. Educational Psychology Review, 17, 147–177.
  • Van Velzen, J. H. (2012). Teaching metacognitive knowledge and developing expertise. Teachers and Teaching: Theory and Practice, 19(3), 365–380.
  • Van Velzen, J. H. (2013). Assessing high-school students' ability to direct their learning. Assessment in Education: Principles, Policy and Practice, 20(2), 170–186.
  • Van Velzen, J. H. (2015). Are students actually using self-reflection to improve how they learn? Conceptualizing self-induced self-reflective thinking. Reflective Practice, 16(4), 522–533.
  • Van Velzen, J. H. (2016). Metacognitive learning: Advancing learning by developing general knowledge of the learning process. Cham, Switzerland: Springer.
  • Van Velzen, J. H. (in press). Students' general knowledge of the learning process: A mixed methods study illustrating integrated data collection and data consolidation. Journal of Mixed Methods Research. doi: 10.1177/1558689816651792.
  • Von Wright, J. (1992). Reflections on reflection. Learning and Instruction, 2, 59–68.
  • Warhurst, R. (2008). Reflections on reflective learning in professional formation. Studies in the Education of Adults, 40, 176–191.
  • Weil, L. G., Fleming, S. M., Dumontheil, I., Kilford, E. J., Weil, R. S., Rees, G., … Blakemore, S. J. (2013). The development of metacognitive ability in adolescence. Consciousness and Cognition, 22, 264–271.
  • White, B. Y., & Frederiksen, J. R. (1998). Inquiry, modeling, and metacognition: Making science accessible to all students. Cognition and Instruction, 16, 3–118.
  • White, B. Y., & Frederiksen, J. R. (2005). A theoretical framework and approach for fostering metacognitive development. Educational Psychologist, 40, 211–223.
  • Wiersema, J. A., & Licklider, B. L. (2007). Developing responsible learners: The power of intentional mental processing. The Journal of Scholarship of Teaching and Learning, 7, 16–33.
  • Zimmerman, B. J. (1989). A social cognitive view of self-regulated academic learning. Journal of Educational Psychology, 81, 329–339.

Appendix: The self-induced self-reflective thinking questions

  1. When you retrospectively reconsider if you have understood the material, then you think about …

  2. When you retrospectively reconsider why the material that was taught is relevant to you, then you think about …

  3. When you retrospectively reconsider how you can improve the learning of the material, then you think about …