Not all Bloom and gloom: assessing constructive alignment, higher order cognitive skills, and their influence on students’ perceived learning within the practical components of an undergraduate biology course

ABSTRACT

Students must develop higher order cognitive skills (HOCS) that allow them to think critically and use their learning in novel situations. However, little is known about how including HOCS in teaching and assessment affects students’ perception of learning. We combine quantitative and qualitative data to determine whether the inclusion of HOCS and the presence of constructive alignment influence students’ perceptions of learning within the first-year practical components of an undergraduate biology degree at a UK research-intensive university. We applied the Blooming Biology Tool (BBT) and Bloom’s Dichotomous Key (BDK) to quantify the proportion of HOCS present during practical sessions and their related assessments, and found that a combination of tools can be used to reliably assess the cognitive skills required to complete tasks. Students completed an online survey and provided free-text responses regarding which practical sessions they perceived had the most beneficial effects on their learning. Students valued both lower order cognitive skills (LOCS) and HOCS for their learning, but could only recognise and value HOCS in practical sessions featuring high proportions of HOCS. Our research provides methods for assessing and improving constructive alignment in the teaching of biology and furthers our understanding of when students will recognise and value HOCS.

Introduction

In the age of information, we are faced with an endless onslaught of problems to solve and decisions to make. With an increasingly complex society and almost limitless information at our fingertips, it is vital to be able to think critically about the information, data, and problems at hand, in order to base decisions on evaluated evidence. Employers desire graduates with critical thinking skills, and graduates who have these skills are more likely to be employed (Sarkar et al. 2016). However, many graduates are entering the workplace with weak critical thinking skills (Flores et al. 2012; Karimi and Pina 2021). Critical thinking requires skills such as applying, analysing, evaluating and creating, which are classified as higher order cognitive skills (HOCS) (Zoller 1993). Given the importance of these HOCS, we must reflect on how we are teaching our graduates to develop them (Tsui 2002) and how students perceive the inclusion of these skills in their learning.

Traditionally, lectures are passive learning experiences focusing on lower order cognitive skills (LOCS) such as recall and comprehension. More recently, and particularly over the past 15 years or so, there has been a paradigm shift towards learning experiences such as practical sessions, workshops or tutorials focusing on HOCS (Rutherford and Ahlgren 1990; Singer, Nielsen, and Schweingruber 2012). However, biology courses have been criticised for their focus on ‘factual minutiae’ (Zheng et al. 2008). Requiring students to prioritise the memorisation of facts for assessments may be more commonplace than thought: one large study found that 93% of biology assessment items in American universities require LOCS only (Momsen et al. 2010), though other studies have found lower percentages (e.g. Zheng et al. 2008). A similar trend worldwide could explain why our students are not developing these skills.

Students are motivated by assessment (Brown 1997), and the idea of assessment dictating student learning is not new (Hummel et al. 1994). If we wish to incentivise students to prioritise and develop the HOCS being taught, then the corresponding assessments must also contain HOCS (Biggs 1996). The synchrony between intended learning outcomes (ILOs), teaching activities and assessments is known as constructive alignment (Biggs 1996). Constructive alignment has been shown to enhance student learning, motivation, performance, and students’ perceptions of teaching (Morris 2008; Roßnagel, Christian, and Lo Baido 2020; Taylor and Canfield 2007; Wang et al. 2013; Adams 2020). Even if teaching activities are designed to develop HOCS, if the related assessments do not require HOCS, then students will not prioritise study strategies that develop these skills, as they are not rewarded by the assessment (Entwistle and Entwistle 1992; Leber et al. 2018). Identifying whether teaching is constructively aligned requires a reliable system for determining the cognitive skills required for a given learning activity or assessment. A tool for carrying out such a task is Bloom’s taxonomy of educational objectives for the cognitive domain (Bloom et al. 1956). The revised Bloom’s taxonomy defines six cognitive levels: remember and understand (considered LOCS) and apply, analyse, evaluate and create (considered HOCS) (Anderson and Krathwohl 2001).

Bloom’s taxonomy is a widely used tool, but it is not without criticism. Inexperienced users have preconceived ideas about how difficult a task is, which influences whether they believe a learning activity is HOCS or LOCS (Lemons and Lemons 2013). Furthermore, even experienced practitioners fail to agree completely on the cognitive level to which a learning activity belongs (Crowe et al. 2008; Zheng et al. 2008). This has led to the demand for, and creation of, tools and frameworks that improve the accuracy and reliability with which Bloom’s taxonomy can be used.

The Blooming Biology Tool (BBT) (Crowe et al. 2008) gives detailed examples of tasks from different biological fields that demonstrate where questions belong within Bloom’s cognitive levels. For a given field, such as phylogenetic trees or Hardy–Weinberg analyses, the authors provide examples of the kinds of questions that can be used to assess each level of Bloom’s taxonomy. Crowe et al. (2008) show that the BBT improves agreement between practitioners on the Bloom’s level to which a given task belongs. However, the BBT still has its limitations, with some disparity in agreement on the final Bloom’s level, particularly between instructors with limited knowledge of the taxonomy (Semsar and Casagrand 2017). A more recent tool is the Bloom’s Dichotomous Key (BDK), which uses a dichotomous key to increase agreement between instructors on the final Bloom’s level (Semsar and Casagrand 2017). The BDK is presented as a general key designed to assign the Bloom’s level of a broad range of questions. A consideration when using the BBT is that assessors may fail to consider the knowledge and examples previously shared with students, which can lead to misclassifications (Semsar and Casagrand 2017). For example, if a student had been shown exactly how to solve an application question and then met an identical question in the exam, they could solve this with recall alone. This potential issue with the BBT is specifically addressed by the BDK.

To develop critical thinkers, we need to teach and assess HOCS. Instructors who consciously include HOCS in their teaching see improvements in their learners’ abilities to develop these skills (Miri, David, and Uri 2007; Zheng et al. 2008; Zoller 1993). The importance of teaching critical thinking skills is well documented (Flores et al. 2012; Miri, David, and Uri 2007; Rutherford and Ahlgren 1990; Sarkar et al. 2016; Tsui 2002; Zoller 1993), but less is known about how students perceive the inclusion of HOCS in their teaching and learning activities. It seems likely that student perceptions and motivations play an important role in facilitating critical thinking (Stupple et al. 2017; Manalo et al. 2015). For example, Stupple et al. (2017) showed a positive correlation between students valuing critical thinking and student performance.

In this study, we examined the use of Bloom’s taxonomy to assess constructive alignment and its impact on students’ perception of learning in the practical components of the first year of an undergraduate biology course. To do this, we quantified the proportion of HOCS in each of the practical sessions using the BBT and BDK tools. We assessed the level of constructive alignment by comparing the prevalence of HOCS within the teaching and learning activities of each practical with the prevalence of HOCS within the related assessments. This was then used to assess the impact of constructive alignment and the inclusion of HOCS on students’ perceptions of learning, based on quantitative and qualitative survey results.

This methodology allowed us to address two novel research questions: (1) does the inclusion of HOCS and constructive alignment influence students’ perceptions of their learning in the context of a practical course? And (2) how effective are the BBT and BDK tools at quantifying the proportion of HOCS in the practical sessions and related assessments, and what are their strengths and weaknesses?

Addressing these questions allows us to make suggestions for instructors who wish to improve critical thinking and constructive alignment on their practical courses.

Approach and methodology

Ethical approval for the project was granted on 23 January 2019 by CREATE (Cultivating Research-rich Education and Teaching Excellence) and RED (Research and Enterprise Development) at the University of Bristol.

Mixed methods approach

To determine whether students recognised and valued HOCS and constructive alignment in their learning, we deemed an interpretivist paradigm most appropriate (Hamlin 2015). The research intended to explore students’ lived experience, namely their perception of learning during the practical sessions. An interpretivist stance can be used effectively with a mixed methods approach for a more complete analysis (McChesney and Aldridge 2019).

This study utilised a convergence model of mixed-methods triangulation design (Creswell 1999), in which the quantitative and qualitative data were collected and analysed separately and the different results were converged during interpretation. We used this model because we wanted to quantitatively compare the proportion of HOCS taught and assessed in practical sessions with qualitative data on students’ perceptions of learning. Quantitative analyses also enabled the presence or absence of constructive alignment to be determined and paired with students’ perceptions of their learning. This methodology allows student perceptions to be quantitatively linked with specific aspects of biology course design, so that our first research question (whether the inclusion of HOCS and constructive alignment influences students’ perceptions of learning) can be explored in more detail.

Structure of courses studied

We evaluated our teaching and learning activities and the assessments related to practical sessions for two mandatory units, titled Diversity of Life and Life Processes, in the academic year 2018–2019. Together, these two units make up two-thirds of the first-year Biological Sciences degree programme at a research-intensive UK university. Diversity of Life covers topics such as evolution, speciation, and biodiversity. Life Processes examines the structure and function of cells and their macromolecular components. Both units run for the length of the academic year, and both comprise multiple ‘blocks’ focusing on different topics (see Tables 1 and 2 for a list of blocks and the aim of each block). Each block runs across a week during term time and contains three 1-hour lectures and one 3-hour practical session. In the practical sessions, students complete laboratory or field-based exercises and then answer questions based on these exercises in a practical handbook. The exercises in the handbook are formative and thus not assessed. The summative assessments for both units follow the same structure (see Table 3).

Table 1. List of teaching blocks within the unit Diversity of Life with related forms of assessment and description of the aims of the block. MCQ refers to multiple-choice question assessment. Teaching blocks shaded in grey were removed from analysis. ‘Mammals Skulls and Teeth’ and ‘Birds from Slimbridge’ were removed from the analysis as these blocks were scheduled after data collection. ‘Lab Techniques Guide’ and ‘Botanic Garden Visit’ were removed from the analysis as these blocks did not contain any questions in the practical handbook.

Table 2. List of teaching blocks within the unit Life Processes with related forms of assessment and description of the aims of the block. MCQ refers to multiple-choice question assessment. Teaching blocks shaded in grey were removed from analysis. ‘Steart Marshes’, ‘Field Ecology’ and ‘Bats and Echolocation’ were removed from the analysis as these blocks were scheduled after data collection.

Table 3. Formative and summative assessments for the units, Diversity of Life and Life Processes, including the weightings for the summative assessments. Continuous assessment is determined by summing the marks across multiple assessments. See text and Tables 1 and 2 for more details of the individual assessments that constitute ‘continuous assessment’.

Continuous assessment makes up 20% of the final grade for each of the two units. The final continuous assessment mark is determined by summing the marks achieved across multiple assessments from individual blocks. Continuous assessment is typically in the form of an online test composed of multiple-choice questions, which are machine marked. These are open-book assessments related to the material delivered within a particular block, and students have 2 days to submit their answers. Students receive a quantitative score of correct answers, with automated explanations for incorrect answers as feedback. For the block Mammals as Prey, the assessment that contributes towards the final continuous assessment mark is a written report based on the data students collected in the practical session. For the block Ecology – The Great Debate, the contributing assessment is an assessed presentation. For the blocks Molecular Genetics and Plant Virology, the contributing assessment is a combined closed-book multiple-choice question exam, sat under timed examination conditions. It should be noted that not every block contains an assessment that contributes towards the final continuous assessment mark. For example, the block Botanic Garden Visit comprises students visiting the University Botanic Gardens with the aim of inspiring and enthusing students about plant biology; there are no questions in the practical handbook or assessment associated with this block. See Tables 1 and 2 for a list of blocks and their associated assessments.

Practical examinations for both units required students to be observed completing a range of practical tasks under examination conditions and to record their results on a pre-formatted script. Both the observations by demonstrators and the completed script contributed to the student’s final mark for this assessment. The end-of-unit multiple-choice question exams were written by the lecturers who taught each block, with the number of questions proportional to the number of lectures given. The summer essay examination required students to answer three essays from a choice of nine, which were marked by the lecturer who set each question.

Rating protocols and analysis

In this study, we assessed the number of questions at each level of Bloom’s taxonomy in (1) the formative exercises in the handbooks for the practical sessions, (2) the related assessments contributing to the continuous assessment, as described above and in Tables 1 and 2, and (3) the summative practical exam, for each of the two units. As we were interested in the teaching and learning associated with the practical sessions, we did not include the theory examinations, as they predominantly assess lecture content rather than practical content. Multi-part or sequential questions were grouped and classified at the highest Bloom’s taxonomy level required to answer the questions within the group. All questions in the practical handbooks were categorised, except questions that required no explicit response (e.g. ‘observe the specimen’) or whose sole focus was the implementation of a practical skill.

To reduce bias, and to account for variations in the classification of questions between assessors, an assessment panel was created. Each panel member classified the questions independently before these collective data were collated. The panel (comprising the authors) was composed of three teaching-focussed members of academic staff from a UK university, each with a different biological speciality. Two members of the panel were based in the School in which the units are taught, and the third member was from a different School. Only one member of the panel was involved in the design and delivery of the units. The levels of Bloom’s taxonomy classified as LOCS were remember and understand; those classified as HOCS were apply, analyse, evaluate and create. It should be noted that we consider the Bloom’s level apply to be a HOCS, whereas previous authors have considered apply to be a HOCS (Zheng et al. 2008), a LOCS (Zoller 2016), or transitional between the two (Crowe et al. 2008). To classify questions and activities, the panel designed and followed a protocol for assigning Bloom’s levels. Panel members first attempted to assign a level with the Bloom’s Dichotomous Key. If they failed to reach a satisfactory decision, they then attempted to use the Blooming Biology Tool to classify the task. If this proved unsuccessful, the assessor used their own judgement to assign a Bloom’s level. The number of times each tool was used by each panel member to assign a Bloom’s level was recorded.
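
This fallback protocol amounts to a short decision procedure. The sketch below is purely illustrative: the three classify_* callables are hypothetical stand-ins for the human judgements made with the BDK, the BBT, and the rater’s own reading of the taxonomy descriptors; it is a compact restatement of the protocol, not software used in the study.

```python
from collections import Counter

def assign_blooms_level(question, classify_with_bdk, classify_with_bbt,
                        classify_by_judgement, tool_usage):
    """Assign a Bloom's level to one question, trying the tools in the
    order used by the panel and recording which tool settled the call.
    Each classifier returns a level, or None if it reached no
    satisfactory decision for this question."""
    for tool, classify in (("BDK", classify_with_bdk),
                           ("BBT", classify_with_bbt)):
        level = classify(question)
        if level is not None:
            tool_usage[tool] += 1
            return level, tool
    tool_usage["judgement"] += 1
    return classify_by_judgement(question), "judgement"

# A per-rater tool_usage = Counter() accumulates the tallies of the
# kind reported in Figure 1.
```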

The panel met for a preliminary assessment session, rated a series of practice questions, and then discussed the results. Subsequently, the panel member familiar with the unit material informed the two panel members who were unfamiliar with it which questions within the handbooks and related assessments could be answered by memorising material from previous classes or lectures, as is recommended practice for use of the BDK (Semsar and Casagrand 2017). This relied somewhat on the memory of one panel member and their access to the online practical materials and recorded versions of the lecture material delivered. Consulting every individual instructor, whilst impractical in this case, would have improved the accuracy and reduced the bias of this step. Any question that could be answered from memory, regardless of the intended Bloom’s level required for completion, should be classified as ‘knowledge’ (Allen and Tanner 2002). Panel members then worked independently to assign Bloom’s levels to all the questions in the practical handbooks and related assessments. Following independent classification, the percentage agreement between panel members and the deviation from the mean Bloom’s score were calculated (see Zheng et al. 2008).

To achieve a single Bloom’s level for each question or activity, the following process was applied: where at least two panel members agreed on the classification of a question or activity, that level was accepted; where all three assessors classified the question or activity differently, the panel members justified their reasoning and agreed on a classification through discussion.
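
The consensus rule is a simple majority vote with a discussion fallback. A minimal sketch follows, assuming levels are recorded as strings; the function is illustrative rather than the study’s actual tooling.

```python
from collections import Counter

def consensus_level(ratings):
    """Return the level chosen by at least two of the three raters,
    or flag the question for panel discussion if all three differ."""
    level, votes = Counter(ratings).most_common(1)[0]
    return level if votes >= 2 else "DISCUSS"

# Example: two raters said 'analyse', one said 'apply'.
assert consensus_level(["analyse", "apply", "analyse"]) == "analyse"
```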

For the assessments in the blocks Mammals as Prey (a report) and Ecology – The Great Debate (a presentation), the Bloom’s level was determined by considering the cognitive skills required to complete the task and the assessment criteria used for marking. The frequency and percentage of questions or activities requiring HOCS in the teaching material (practical handbook) and related assessment were calculated for each block.

Determining constructive alignment

Individual blocks were defined as ‘constructively aligned’ if the following conditions were met: (1) there were questions (that could be Bloomed) both in the practical handbook and in the assessment associated with the practical; and (2) either there was at least one question that tested HOCS in both the practical handbook and the related assessment (which we define as HOCS/HOCS constructively aligned), or the questions in the practical handbook and related assessment tested LOCS only (which we define as LOCS/LOCS constructively aligned). A block meeting condition (1) but not (2), i.e. with HOCS on one side only, was treated as misaligned (see the sketch below).
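
This definition can be stated as a small decision rule. Below is a minimal sketch, assuming each question has already been Bloomed and labelled with its level; the function name and inputs are illustrative.

```python
HOCS_LEVELS = {"apply", "analyse", "evaluate", "create"}

def alignment_category(handbook_levels, assessment_levels):
    """Classify one block from the Bloom's levels of its Bloomable
    questions in the practical handbook and related assessment."""
    if not handbook_levels or not assessment_levels:
        return "not classifiable"  # condition (1) fails
    handbook_hocs = any(l in HOCS_LEVELS for l in handbook_levels)
    assessment_hocs = any(l in HOCS_LEVELS for l in assessment_levels)
    if handbook_hocs and assessment_hocs:
        return "HOCS/HOCS constructively aligned"
    if not handbook_hocs and not assessment_hocs:
        return "LOCS/LOCS constructively aligned"
    return "misaligned"
```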

Survey data collection

An online survey was distributed to students in their first year of university who were enrolled in the two units assessed in this study. Completion of the survey was voluntary, and the response rate was 36% of the 138 enrolled students, a common rate of response for surveys of this type (Porter and Umbach 2006) and within the School. The survey included the following questions:

Survey Question 1: Which practical sessions in the ‘Diversity of Life’ unit do you feel contributed most significantly to your learning; please add comments to indicate why you feel this was the case?

Survey Question 2: Which practical sessions in the ‘Life Processes’ unit do you feel contributed most significantly to your learning; please add comments to indicate why you feel this was the case?

Survey Question 3: List your 3 favourite practical sessions, why did you select those sessions?

Survey Question 4: Do you think the post-laboratory tests are useful for your learning? Explain.

The survey was released prior to the end of the units, so student responses were collected before five of the blocks took place; these blocks were therefore removed from the analysis. One block was removed because its practical handbook contained only instructions for implementing practical skills related to using general laboratory facilities. One additional block was removed because there was no related assessment or questions in the practical handbook: the content of the block was a trip to the Botanic Gardens, with the core focus on enthusing students about plants rather than on teaching and learning. This left 17 blocks for analysis in Diversity of Life and 15 blocks for analysis in Life Processes. See Tables 1 and 2 for a list of blocks included in the analysis.

Qualitative analysis

In the qualitative analysis, some blocks were combined because respondents to the survey did not distinguish between two such blocks but combined them in their open text answers. For example, the blocks, Plant Virology 1 and Plant Virology 2, were combined as Plant Virology.

To better understand why students perceived they learnt more from certain blocks than others, we conducted a thematic analysis of student responses to the survey. Survey data from each student were assigned codes representing their key ideas, and these codes were used to derive the themes. Student responses were coded separately by two of the authors using an inductive and latent approach, which captures the implicit meaning behind the student explanations, as per the method outlined by Braun and Clarke (2006, 2012). For example, a response describing at length how hard the work was, and that this was why they learnt so much, was assigned the code ‘intellectual challenge or difficulty’. In essence, the codes seek to describe succinctly why the student perceived the learning to be significant. A comparison of codes between authors led to the identification of the most relevant themes, and infrequent or irrelevant codes were discounted from the final visualisation.

Each coded student response related to a particular block for which the student had explained why it was significant for their learning. Tallying the frequency with which codes arose, and the theme to which each code belonged, enabled us to ascertain which blocks related to which themes.
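
The tallying step is a straightforward frequency count over (block, code, theme) triples. The sketch below illustrates it with invented example data: the block names and codes echo the text, but the triples themselves are hypothetical, not our survey data.

```python
from collections import Counter

# Hypothetical coded responses: (block, code, theme) triples.
coded_responses = [
    ("Fish Dissection", "complemented lectures", "knowledge accumulation"),
    ("Mendelian Genetics", "problem solving", "deep learning"),
    ("Fish Dissection", "fun and interactive", "engaging"),
]

# How often each block was identified as significant for learning.
block_counts = Counter(block for block, _, _ in coded_responses)

# Which themes each block attracted (cf. Table 5).
block_theme_counts = Counter((block, theme)
                             for block, _, theme in coded_responses)
```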

Results

Use of tools

On average, the panel members used the BDK to classify 57% of questions, the BBT to classify 26% of questions, and their own judgement to classify the remaining 17% of questions in the practical handbooks. The frequency with which each panel member used each tool to classify questions and activities within the two units is illustrated in Figure 1.

Figure 1. The percentage usage of different Bloom’s classification tools shown by panel members for (A) Diversity of Life and (B) Life Processes. Tools used were: Blooming Biology Tool (BBT), Bloom’s Dichotomous Key (BDK) and the panel members own judgment using Bloom’s taxonomy descriptors.

Panel members reflected that both the BDK and BBT were least effective at classifying questions involving scientific drawings and the use of scientific keys (dichotomous and polytomous), because these were not directly addressed within the tools. Such questions made up a large proportion of the cases where judgement was required. The BDK was most effective at classifying questions based on LOCS, but the BBT was often needed to identify questions based on HOCS.

Agreement between panel members

Following the independent classification of questions and activities by all three panel members, for Diversity of Life, all three panel members agreed on the classification of 38% of the practical handbook questions and 64% of the assessment questions; two of the three panel members agreed on the classification of 40% of the practical handbook questions and 35% of the assessment questions. For 22% of the questions in the practical handbook and 1% of questions in the assessments, there was no initial agreement between the three panel members, with each selecting a different Bloom’s classification.

For Life Processes, all three panel members agreed on the classification of 30% of the questions in the practical handbooks and 58% of questions in the assessments; two of the three panel members agreed on 46% of the questions in the practical handbooks and 29% of the assessment questions. For 24% of questions in the practical handbook and 13% of questions in the assessment, there was no initial agreement.

Panel member deviation in Bloom’s score

To determine variation between panel members in the Bloom’s classification of questions and activities, average deviation scores were calculated, measuring how far each panel member’s Bloom’s levels were from the mean, as per the method described in Zheng et al. (2008) (see Table 4). These deviations are comparable to inter-rater average deviations in the literature (Semsar and Casagrand 2017; Zheng et al. 2008). A deviation of 1 would mean that a panel member disagreed with the mean by one Bloom’s level. In all cases, the average deviation is less than half a Bloom’s level, demonstrating suitable agreement between the panel members.
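
For concreteness, a minimal sketch of this calculation is given below, assuming (as in Zheng et al. 2008) that Bloom’s levels are coded numerically from 1 (remember) to 6 (create); the data structure is illustrative.

```python
from statistics import mean

# Bloom's levels coded numerically, following Zheng et al. (2008).
LEVEL = {"remember": 1, "understand": 2, "apply": 3,
         "analyse": 4, "evaluate": 5, "create": 6}

def average_deviation(ratings_per_question, rater):
    """Average absolute deviation of one rater's scores from the
    per-question mean of all raters' scores."""
    deviations = []
    for ratings in ratings_per_question:
        # e.g. ratings = {"A": "apply", "B": "analyse", "C": "apply"}
        scores = [LEVEL[level] for level in ratings.values()]
        deviations.append(abs(LEVEL[ratings[rater]] - mean(scores)))
    return mean(deviations)
```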

Table 4. Average deviation scores of panel members rating the Bloom’s level of questions and activities within the units, Diversity of Life and Life Processes.

Lower and higher order cognitive skills and constructive alignment

The proportion of questions classified at each level of Bloom’s taxonomy, for both the practical handbook and the assessments of each unit, is shown in Figure 2.

Figure 2. Proportion of questions classified to the different levels of Bloom’s Taxonomy for both the teaching activity (practical handbook) questions and the MCQ assessments for the units (A) Diversity of Life and (B) Life Processes.

In Diversity of Life, 86% of the 147 questions in the practical handbook were scored as LOCS and 14% as HOCS; in the assessments for this unit, 99% of the 160 questions scored were LOCS and 1% were HOCS. In Life Processes, 49% of the 107 questions in the practical handbook were scored as LOCS and 51% as HOCS; in the assessments for this unit, 76% of the 135 questions scored were LOCS and 24% were HOCS. Therefore, in both units, HOCS were less frequent in the assessments than in the teaching sessions.

Questions classified at the level remember were over three times as frequent in the assessments as in the teaching sessions for both units. In contrast, understand and apply questions were much less frequent in the assessments of both units. There were no questions that tested students’ abilities to analyse, evaluate or create in the MCQ assessment of either unit, though questions in all three categories were present in the practical handbooks.

Most assessments associated with each block were tests composed of multiple-choice questions, but some blocks contained alternative assessments. The report for the block Mammals as Prey required all the skills of Bloom’s taxonomy but most strongly relates to the Bloom’s level analyse; we therefore considered it an assessment requiring HOCS. The oral presentation for Ecology – The Great Debate likewise used all the skills within Bloom’s taxonomy but most strongly relates to the Bloom’s level evaluate; we therefore considered this an assessment requiring HOCS.

We identified 21% and 50% of the questions in the practical exam for Diversity of Life and Life Processes, respectively, as assessing HOCS.

We compared the qualitative data on students’ perception of learning with the proportion of questions related to HOCS taught and assessed in each block and with the presence or absence of constructive alignment. This allowed us to determine whether the inclusion of HOCS and constructive alignment influences students’ perception of learning (Table 5).

Table 5. List of teaching blocks for unit Diversity of Life (a) and unit Life Processes (b) with frequency and percentage of HOCS questions in the teaching material (practical handbook) and related assessment; frequency of respondent comments referring to each block from qualitative analysis of the student response survey for the question ‘Which practical sessions in the Diversity of Life/Life Processes unit do you feel contributed most significantly to your learning?’ with examples of student comments, and the theme each block was allocated to during qualitative analysis. Shaded blocks are constructively aligned: * Block is constructively aligned with HOCS in both formative and summative assessment (HOCS/HOCS constructively aligned); † block is constructively aligned with only LOCS in both formative and summative assessments (LOCS/LOCS constructively aligned).

Across the two units, we identified 15 of the 32 blocks (47%) as constructively aligned according to our definition. Four blocks (13%) were LOCS/LOCS constructively aligned, with only LOCS in both the practical handbook and the assessment, and 11 (34%) were HOCS/HOCS constructively aligned, having HOCS in both the practical handbook and the assessment.

There were 87 coded student survey responses in which students identified a particular block from either the Diversity of Life or Life Processes unit that they felt contributed significantly to their learning. See Table 5 for the frequencies of coded responses related to each block.

Seventy-seven per cent of responses in which students identified a particular block as contributing to their learning were for ‘constructively aligned’ practical sessions: 18% were for LOCS/LOCS constructively aligned sessions, and 59% were for HOCS/HOCS constructively aligned sessions. This suggests that when teaching sessions are constructively aligned with their corresponding assessments, students are more likely to recognise the value of the learning from the session, and that this recognition is strongest when practical sessions are HOCS/HOCS constructively aligned.

Across the two units, 26 of 32 blocks (81%) tested HOCS in the practical handbook, and of these, 17 (65%) were identified in the student survey responses as contributing to learning. This indicates that students do not recognise a link between the presence of HOCS in teaching alone (when constructive alignment is not considered) and whether they feel the block contributed significantly to their learning, suggesting that the presence of HOCS in teaching material is not, by itself, enough for students to recognise the value of the learning from the session.

Thematic analysis of survey data

Through thematic analysis, we identified three themes for why students perceived practical sessions contributed significantly to their learning: ‘the session provided an opportunity for deep learning’; ‘the session provided opportunity for knowledge accumulation’; and ‘the session was engaging’.

We defined these themes using the coded student responses. A visual representation of the themes generated from coded student responses can be seen in Figure 3.

Figure 3. Visual representation of the reasons why students felt practical sessions significantly contributed to their learning in the units, Diversity of Life and Life Processes. Three themes (square box and bold text) were identified: ‘the session provided an opportunity for deep learning’; ‘the session provided opportunity for knowledge accumulation’; and ‘the session was engaging’. Coded responses (rounded box) and examples of student comments (rounded box with quotes) are presented for each theme.

Theme: the session was engaging.

Responses in the session was engaging theme commonly referenced how ‘exciting’, ‘fun’, ‘active’ or ‘interesting’ the blocks were, and typically did not reference any cognitive skills. The blocks most frequently identified as the best for learning in this theme were Nervous Systems, Animal Adaptations, Fish Dissection, Mammals as Prey, and references to ‘dissection practicals’ (see Table 5). Except for Mammals as Prey, all of the blocks listed here had no or low (<34%) proportions of questions testing HOCS in the practical handbook. Apart from the Animal Adaptations practical, which had no assessment, these sessions were constructively aligned; however, the students had not yet completed the assessment for Mammals as Prey prior to the survey.

Theme: the session provided an opportunity for knowledge accumulation.

Responses in the session provided an opportunity for knowledge accumulation theme focused solely on the learning of LOCS, with reference to how the practical session supported and complemented the theory from lecture material, such as: ‘I felt these practicals most supplemented my knowledge of their corresponding lecture topics’. The most referenced blocks in this theme were ‘dissection practicals’; the blocks containing dissections were Myriapods and Insects, Fish Dissection, Chelicerates and Crustaceans, Nervous Systems, and Molluscs (see Table 5). These blocks contained no or few (<34%) questions testing HOCS in the practical handbook and, in all but one case, no HOCS questions in the assessments. With reference to ‘dissection practicals’, respondents commonly commented on how these sessions were an engaging way to complement lecture content and learn anatomy.

Theme: the session provided an opportunity for deep learning.

Responses in the session provided an opportunity for deep learning theme focused on the cultivation of HOCS or the intellectual rigour of the sessions, such as problem-solving skills or that a session was ‘hard’ or ‘challenging’. Sometimes a term like ‘deeper understanding’ was used, which we would ordinarily attribute to LOCS, but given that students are typically not aware of Bloom’s taxonomy and its terminology, we attributed these comments to an implicit desire for deeper learning. This theme was most frequently associated with the blocks Mendelian Genetics, Ecology – The Great Debate, Human DNA Isolation and Chordate Cladistics (see Table 5). With the exception of Chordate Cladistics, these blocks had a high proportion (>70%) of questions testing HOCS in the practical handbook. Whilst the percentage of questions testing HOCS in the Chordate Cladistics practical handbook seems low, the task required students to spend the majority (>85%) of their time on a single complex question that tested HOCS. This was reflected in student responses, where language related to HOCS, such as ‘problem solving’, was frequently used as a reason why they felt the practical significantly contributed to their learning.

The frequency with which blocks are paired with a theme provides insight into why students perceive a given block to be significant for their learning. Student comments drawing on the session provided an opportunity for deep learning theme related to blocks containing a larger proportion of questions testing HOCS in the practical handbook; students did not use comments from this theme for blocks with low levels of HOCS in the practical handbook. Comments related to this theme usually concerned blocks that were either constructively aligned (i.e. Molecular Genetics and Ecology – The Great Debate) or had no related assessment (i.e. Human DNA Isolation and Chordate Cladistics). Comments relating to deep learning were not linked to any blocks where the questions in the practical handbook and the related continuous assessment were misaligned. This suggests that students are sensitive to the presence of HOCS within teaching and assessment, associating this inclusion with ‘deep learning’, but only when there are very high levels of HOCS or the blocks are constructively aligned.

For the themes the session was engaging and the session provided an opportunity for knowledge accumulation, students selected blocks with low or no HOCS (except for Mammals as Prey) in the practical handbooks and related assessments, instead considering the interactive and engaging ways of building their knowledge as significant to their learning. We would have expected students to attribute the Mammals as Prey session to the theme the session provided an opportunity for deep learning, as this block had high levels of HOCS in the practical handbook. We note that students had not yet received the assessment, and that the session featured a live owl flying within the practical. These reasons might explain why student responses related to this block focused on their enjoyment of the session rather than on the deeper learning one might expect.

Discussion

HOCS, constructive alignment and students’ perception of learning

Our results suggest that students consider engaging sessions, knowledge accumulation and deeper learning as valuable and significant components of their learning. We found that students are more likely to perceive practical sessions as contributing significantly to their learning if sessions are constructively aligned with their related assessments. Moreover, students were most likely to select HOCS/HOCS aligned sessions as beneficial to their learning. This suggests that both constructive alignment and the inclusion of HOCS in teaching and assessment are valued by students as contributing to their learning.

However, it appears that students only identify practical sessions as significant to their deeper learning when those sessions were constructively aligned with high levels of HOCS in teaching and assessment, or included high levels of HOCS in teaching with no related assessment. The higher levels of Bloom’s taxonomy are typically associated with a deep approach to learning, which involves gaining a personal understanding of ideas, for example by relating disparate concepts or evaluating evidence (Marton and Säljö 1976), and our results suggest students are sensitive to this link between HOCS and a deep approach to learning. The small number of practical sessions falling into the theme the session provided an opportunity for deep learning makes it difficult to say with certainty which feature of practical sessions (constructive alignment, or the inclusion of a high level of HOCS in teaching) was needed for students to recognise a deep approach to learning. However, it is consistent with previous research suggesting that assessments dictate which learning strategies are deemed important and in turn prioritised by students (Entwistle and Entwistle 1992; Leber et al. 2018). This further emphasises the importance of designing assessments for learning, where assessments are designed to create feedback that can help students to learn (e.g. Sambell et al. 2013). Many of the sessions in our study included only a small number of HOCS questions, and it may well be that a couple of questions or small tasks related to HOCS within a session are not sufficient if we want to develop cohorts of students who recognise and value HOCS as significant to their learning. Allocating a considerable proportion of questions and tasks to HOCS, and ensuring that this is aligned with the assessment, is more likely to produce a stronger recognition of the value of these skills. We were also limited by the number of practical sessions that included high levels of HOCS in their teaching and assessment; having more such sessions would have enabled us to state with more conviction that a high level of HOCS was the determining factor in students perceiving deeper learning from their sessions.

Staff have been shown to place a high value on the acquisition of knowledge (Kunen, Cohen, and Solman 1981), and we found this to be the case for students too. However, knowledge does not need to be acquired in isolation and can be gained through learning activities involving HOCS (Kunen, Cohen, and Solman 1981). This potentially explains why, in practical sessions that focussed on HOCS but whose assessments tested only LOCS, students had still acquired the knowledge needed to answer the assessment questions.

We also found that, despite only 47% of our teaching meeting our definition of constructive alignment, these sessions made up 77% of those selected by students as significant for learning. This further suggests students show a learning preference towards constructively aligned sessions. The delivery of teaching and learning activities that include HOCS and demonstrate constructive alignment between teaching, learning and assessment is generally considered good practice (Kember and McNaught 2007; Phillips 2005). When staff were mentioned in students’ survey responses as being the feature of a practical that contributed significantly to their learning, these practicals (Plant Virology, Mendelian Genetics and Gene Cloning) were constructively aligned. This supports the findings of Taylor and Canfield (2007), who found that students taught on constructively aligned courses rate their educators as more proficient.

To be constructively aligned, the teaching and learning activities selected must be those that are likely to lead students to achieve the intended learning outcomes (Biggs 1996). The definition we use for constructive alignment in this research is limited in that it assumes that the inclusion of HOCS in the teaching and learning activities and in the assessment of each block reflects the unit ILOs.

To consider how the teaching and assessment for each block relate to the ILOs for each unit, we considered the student responses in the context of the overall aims and ILOs for the two units. The aims for Diversity of Life are more focussed on LOCS: the outcome of this unit is for students to gain knowledge and understanding of the diversity and evolution of biological organisms. The blocks in this unit considered significant to learning typically fell into the ‘engaging’ and ‘knowledge accumulation’ themes in our qualitative analysis. Life Processes aims to foster the intellectual and practical skills required by biologists and has a greater focus on HOCS in its assessments. Practical sessions in the theme the session provides an opportunity for deeper learning belonged almost exclusively to this unit, with students placing value on being able to improve skills such as problem solving. Taken in this context, our results show that unit outcomes may influence which type of learning (HOCS or LOCS) is more likely to be perceived as significant by students. However, this effect might well be indirect, led by the emphasis staff place on activities in the units and their assessment, rather than by students being directly influenced by having studied the unit outcomes.

Using Bloom’s taxonomy to quantify HOCS and constructive alignment

Prior to this study, panel members had training and experience with Bloom’s taxonomy but limited experience applying the BBT and BDK tools. Despite this, panel members were able to rank questions with a consistency similar to previously published studies using both the BBT and BDK (Semsar and Casagrand 2017; Zheng et al. 2008). The BBT and BDK (Crowe et al. 2008; Semsar and Casagrand 2017) were found to achieve both consistency and objectivity.

However, we found some questions challenging to classify. For example, classifying organisms using dichotomous and polytomous keys is a skill that is taught in several of the blocks we evaluated, and it is not clear from the BBT or BDK which level of Bloom’s Taxonomy these keys represent. We concluded that the use of keys is likely to be identified as the Bloom's level apply, though this decision was not readily reached using the available tools.

In other cases, we found it hard to assess whether the answer to a question could be memorised. For example, if the students have been taught to label the limb bones in a cat, does an assessment involving labelling the same bones in a dog simply require memory, or is understanding also required? What about labelling these bones in a bat or whale?

When struggling to decide, we found that the BBT had the most helpful descriptors (and those most closely matched to the original Bloom’s taxonomy descriptors), but the tool only works effectively for specified topics, and some topics in our curriculum, such as the use of dichotomous and polytomous keys, were not included. We found the BBT to be a useful tool, particularly in ensuring consistency between multiple users, but an expansion of the topics covered would be beneficial for educators.

We also noted some discrepancies between the tools. For example, the BDK suggests that if students are using data from a table to calculate a variable, then this is the Bloom’s level apply. However, the BBT suggests that this is only the case if students are required to select the correct equation and variables; if the equation and variables are given, this would be considered a ‘plug and chug’ question, ranked as remember by the BBT. Judged against the original Bloom’s taxonomy descriptors, we felt the Bloom’s level identified by the BBT was more accurate in this case. We found that the BBT categorised questions relating to mathematical topics, such as calculations and graphing, more clearly. However, the speed with which the BDK can be used to categorise many questions, and its consideration of prior teaching, made the BDK effective when paired with the BBT.

Using a combination of tools and our own judgement enabled us to produce consistent classifications. The BDK enabled us to consider students’ prior knowledge and quickly classify questions testing LOCS, whereas the BBT enabled us to more accurately classify questions as HOCS (when the topics were covered by the tool). Allowing the use of our own judgement prevented us from spending too long on a given question when it could not be matched using either the BBT or BDK. We would recommend the combined use of these tools for educators who have not specialised in Bloom’s taxonomy but want to assess the types of cognitive skills they are teaching. Whilst the BDK can be used across a range of STEM subjects, the BBT is biology-specific, and we believe there would be great value in developing similar tools for other disciplines.

There will no doubt be some questions assigned to the wrong Bloom’s level due to weaknesses in the tools and the limits of user experience. Whilst these are relevant to the methods and reliability of our research, they are of little importance to educators designing teaching sessions in practice, because both tools were effective at differentiating between LOCS and HOCS, which is key to assessing the broad alignment between intended learning outcomes, teaching and learning activities, and assessment. We found that using this metric alone was insightful, and our results show that students can identify the presence of HOCS within teaching and recognise the importance of the HOCS they are taught. Additionally, both tools seem to produce a reliable snapshot of the range of HOCS within teaching and assessment activities, enabling the educator to spot missing skillsets in either the taught activities or the assessments. More lessons could no doubt be learnt by breaking down the results further and analysing the frequency of specific levels of Bloom’s taxonomy in our teaching, but for busy academics in higher education, considering LOCS and HOCS seems a time-effective compromise.

We also found that, when designing assessment questions, it is important to consider what students have been taught, so that recall alone cannot be used to solve questions intended to assess HOCS (Semsar and Casagrand 2017). Previous research suggests some biologists believe that providing practice at HOCS questions is ‘giving too much away’ (Lemons and Lemons 2013), but in reality developing these skills requires practice. However, assessment questions should be significantly different from the practice questions encountered during teaching, and students should be aware that they will be assessed on HOCS.

Using Bloom’s taxonomy to improve course design

Contrary to claims that undergraduate biology courses focus predominantly on LOCS (Momsen et al. 2010), we found that our teaching contained an array of these skills. However, in both units the frequency of HOCS was much lower in the assessments than in the practical handbooks, with an over-representation of LOCS and no assessment questions above the apply level of Bloom’s taxonomy. We noticed in particular that the assessment within the Diversity of Life unit is not as constructively aligned as desired. Such discrepancies highlight the importance of considering constructive alignment at an early stage of course design. ‘Backward Design’ (Wiggins and McTighe 1998) is a useful approach that advocates starting the design process by establishing the learning goals of the course, before considering how these will be assessed, and only then selecting the appropriate activities for teaching and learning.

We also learnt the value of diversity in assessment, as the essays, presentations and practical examinations assessed students on a much wider range of HOCS. The continuous assessments of the units we studied are almost entirely limited to multiple-choice questions or questions requiring a single-word answer. This design largely allows the work of a large cohort (>200 students) to be marked automatically, providing quick feedback and reducing instructor workload. It is more challenging to design multiple-choice questions that test HOCS. It is certainly impossible to test creating, which requires the student to create a piece of work, and some have argued that evaluating also cannot be tested using multiple-choice questions (Masters et al. 2001), though others have shown ways in which this can be done with carefully designed questions (Crowe et al. 2008; Gormally, Brickman, and Lutz 2012). Analysis can be tested using multiple-choice questions, for example by presenting the student with a list of observations and asking them to select the correct interpretation or diagnosis (Brady 2005; Demetrulias and McCubbin 1982). However, this approach has been criticised on the grounds that few real-world situations involve being presented with a series of options, one of which is known to be correct (Veloski et al. 1999).

Regardless, it is possible to test at least some HOCS using multiple-choice questions, but designing these questions well is challenging. On balance, we would caution against over-reliance on multiple-choice questions. There is promising work, mostly in mathematics (Jordan 2013), on more sophisticated machine marking that can give instantaneous tailored feedback on free-text responses, and recent projects (Beyond Multiple Choice 2021) and publications in biology (Uhl et al. 2021) suggest that software able to machine-mark free-text responses in biology is on the horizon.

More effort to create multiple-choice post-laboratory tests that include more HOCS would enhance the constructive alignment between our teaching activities and our continuous assessments. However, this approach can only go so far and other assessment methods are more effective at assessing HOCS. While these approaches may be more labour intensive, we consider them important for developing students with the ability to think critically.

Limitations

The principal limitation of this study is the small sample size and low response rate. The student responses were rich and clearly fell into the three themes, but capturing more responses would have increased our confidence in the broader applicability of the themes and, importantly, in which blocks they encompassed. Secondly, despite measuring constructive alignment, we did not measure whether the students themselves perceived that alignment, which could also have influenced their motivation and learning strategies (Roßnagel, Christian, and Lo Baido 2020). Thirdly, the study did not employ numerical data, such as Likert-scale ratings of the learning that took place in each practical, which would have allowed statistical tests of the relationship between perceived learning and the inclusion of HOCS. Finally, the definition we use for constructive alignment and the data we present in Table 5 do not consider the finer Bloom’s levels of questions in the practical handbook and assessment, which may be seen as a limitation. However, we did not think it reasonable to assume that students would be able to identify the skills taught and assessed at such a fine level, while we could predict that students might be sensitive enough to tell whether HOCS or LOCS were present in teaching or assessment.

We did not collect any identifying data from participants, such as current attainment scores or gender. This may be a limitation of our study, as we were not able to confirm whether the respondents to the survey were representative of the cohort or whether there were any biases in who responded. At the time of the survey, students would only have had their continuous assessment grades from their degree programme, so we would only have been able to report estimates of attainment at a university level. We could have gathered A-level or equivalent pre-university attainment, but given the entry requirements of the institution, it is likely there would have been homogeneity in the recordable attainment. We were also concerned that asking students about their attainment in the survey might shift students’ perception of learning towards a focus on attainment. We did not record gender, as gender effects on perceptions of learning were not a focus of this study.

Concluding remarks

We suggest that Bloom’s taxonomy is a useful tool during the design of teaching activities to ensure that the intended skills are taught and assessed. Instructors can reliably use a combination of ‘Blooming’ tools to assess constructive alignment in existing teaching; this is a particularly worthwhile endeavour if the teaching was not originally designed with Bloom’s taxonomy in mind, or if multiple educators are involved in the design or Blooming of questions. We would encourage research to develop new and existing Blooming tools, particularly those that would enable novices to Bloom’s taxonomy to achieve accurate results.
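
To illustrate why such tools help multiple educators converge on the same rating, the sketch below encodes a drastically simplified yes/no key of the kind a dichotomous key provides: every rater answers the same fixed questions in the same order. The questions and category labels are paraphrases invented for illustration; they are not the published BDK (Semsar and Casagrand Citation2017).

```python
# A minimal sketch of encoding a Bloom's-style dichotomous key so that
# every rater walks through identical yes/no questions. The questions and
# labels are simplified inventions for illustration, not the published BDK.

def bloom_key(uses_only_recall: bool,
              applies_to_new_context: bool,
              requires_judgement_or_synthesis: bool) -> str:
    """Walk a simplified yes/no key to a coarse Bloom category."""
    if uses_only_recall:
        return "LOCS: knowledge/comprehension"
    if not applies_to_new_context:
        return "LOCS: comprehension"
    if requires_judgement_or_synthesis:
        return "HOCS: evaluating/creating"
    return "HOCS: applying/analysing"

# Example: a question asking students to interpret unseen data
print(bloom_key(uses_only_recall=False,
                applies_to_new_context=True,
                requires_judgement_or_synthesis=False))
```

The value of this encoding is that disagreement between raters is reduced to disagreement over explicit, answerable questions rather than over a holistic judgement of Bloom level.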

Our thematic analysis of students’ free-text responses suggests that students value both LOCS and HOCS in their practical teaching, and that they only perceived ‘deep learning’ to have occurred in practical sessions that had a large proportion of HOCS tasks and were constructively aligned (or not assessed). We believe our results support the existing literature suggesting that constructive alignment supports the development of HOCS; in our study, it appeared to enhance students’ recognition of HOCS as contributing to their learning. Future work should explore student perceptions of learning alongside metrics of actual learning, and seek to determine which specific tasks elicit both perceived and actual learning benefits in critical thinking.

To conclude, perhaps the problem of problem solving is actually a problem of problem setting, particularly within assessments; or at least, this is something we hope our readers will think critically about.

Ethical approval

Ethical approval for the project was granted on 23 January 2019 by CREATE and RED at the University of Bristol.

Acknowledgements

The authors thank Hannah Grist, Jane Pritchard and Sheila Amici-Dargan for their valuable advice and insights, as well as the anonymous reviewers of this manuscript.

Disclosure statement

No potential conflict of interest was reported by the author(s).

References

  • Adams, C. J. 2020. “A Constructively Aligned First-Year Laboratory Course.” Journal of Chemical Education 97 (7): 1863–1873. doi:10.1021/acs.jchemed.0c00166.
  • Allen, D., and K. Tanner. 2002. “Approaches to Cell Biology Teaching: Questions about Questions.” Cell Biology Education 1 (3): 63–67. doi:10.1187/cbe.02-07-0021.
  • Anderson, L. W., and D. R. Krathwohl. 2001. A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom’s Taxonomy of Educational Objectives. New York: Addison Wesley Longman.
  • Beyond Multiple Choice (2021). “Conference Agenda.” Accessed December 13 2021. http://beyond-multiple-choice.com/wp-content/uploads/2021/08/BMC2021-Agenda-1.pdf
  • Biggs, J. 1996. “Enhancing Teaching through Constructive Alignment.” Higher Education 32 (3): 347–364. doi:10.1007/Bf00138871.
  • Bloom, B. S., M. D. Engelhart, E. J. Furst, W. H. Hill, and D. R. Krathwohl. 1956. Taxonomy of Educational Objectives: The Classification of Educational Goals. New York: Longmans Publishing.
  • Brady, A. M. 2005. “Assessment of Learning with multiple-choice Questions.” Nurse Education in Practice 5 (4): 238–242. doi:10.1016/j.nepr.2004.12.005.
  • Braun, V., and V. Clarke. 2006. “Using Thematic Analysis in Psychology.” Qualitative Research in Psychology 3 (2): 77–101. doi:10.1191/1478088706qp063oa.
  • Braun, V., and V. Clarke. 2012. “Thematic Analysis.” In APA Handbook of Research Methods in Psychology: Research Designs: Quantitative, Qualitative, Neuropsychological, and Biological, edited by H. Cooper, P. M. Camic, D. L. Long, A. T. Panter, D. Rindskopf, and K. J. Sher, 57–71. American Psychological Association. doi:10.1037/13620-004.
  • Brown, G. 1997. Assessing Student Learning in Higher Education. London: Routledge.
  • Creswell, J. W. 1999. “Mixed-Method Research: Introduction and Application.” In Handbook of Educational Policy, 455–472. Academic Press. doi:10.1016/B978-012174698-8/50045-X.
  • Crowe, A., C. Dirks, M. P. Wenderoth, and M. Sundberg. 2008. “Biology in Bloom: Implementing Bloom’s Taxonomy to Enhance Student Learning in Biology.” CBE—Life Sciences Education 7 (4): 368–381. doi:10.1187/cbe.08-05-0024.
  • Demetrulias, D. A. M., and L. E. McCubbin. 1982. “Constructing Test Questions for Higher Level Thinking.” Nurse Educator 7 (5): 13–17. doi:10.1097/00006223-198200750-00003.
  • Entwistle, A., and N. Entwistle. 1992. “Experiences of Understanding in Revising for Degree Examinations.” Learning and Instruction 2 (1): 1–22. doi:10.1016/0959-4752(92)90002-4.
  • Flores, K. L., G. S. Matkin, M. E. Burbach, C. E. Quinn, and H. Harding. 2012. “Deficient Critical Thinking Skills among College Graduates: Implications for Leadership.” Educational Philosophy and Theory 44 (2): 212–230. doi:10.1111/j.1469-5812.2010.00672.x.
  • Gormally, C., P. Brickman, and M. Lutz. 2012. “Developing a Test of Scientific Literacy Skills (TOSLS): Measuring Undergraduates’ Evaluation of Scientific Information and Arguments.” CBE—Life Sciences Education 11 (4): 364–377. doi:10.1187/cbe.12-03-0026.
  • Hamlin, B. 2015. “Paradigms, Philosophical Prisms and Pragmatism in HRD Research.” In Handbook of Research Methods on Human Resource Development, 13–31. Edward Elgar Publishing. doi:10.4337/9781781009246.00009.
  • Hummel, J., W. Huitt, R. Michael, and L. Walters. 1994. “What You Measure Is What You Get.” GaASCD Newsletter: The Reporter 10–11.
  • Jordan, S. 2013. “E-assessment: Past, Present and Future.” New Directions in the Teaching of Physical Sciences 9: 87–106. doi:10.11120/ndir.2013.00009.
  • Karimi, H., and A. Pina. 2021. “Strategically Addressing the Soft Skills Gap among STEM Undergraduates.” Journal of Research in STEM Education 7 (1): 21–46. doi:10.51355/jstem.2021.99.
  • Kember, D., and C. McNaught. 2007. Enhancing University Teaching: Lessons from Research into award-winning Teachers. London: Routledge. doi:10.4324/9780203962947
  • Kunen, S., R. Cohen, and R. Solman. 1981. “A levels-of-processing Analysis of Bloom’s Taxonomy.” Journal of Educational Psychology 73 (2): 202. doi:10.1037/0022-0663.73.2.202.
  • Leber, J., A. Renkl, M. Nückles, and K. Wäschle. 2018. “When the Type of Assessment Counteracts Teaching for Understanding.” Learning: Research and Practice 4 (2): 161–179. doi:10.1080/23735082.2017.1285422.
  • Lemons, P. P., and J. D. Lemons. 2013. “Questions for Assessing Higher-Order Cognitive Skills: It’s Not Just Bloom’s.” CBE-Life Sciences Education 12 (1): 47–58. doi:10.1187/cbe.12-03-0024.
  • Manalo, E., T. Kusumi, M. Koyasu, Y. Michita, and Y. Tanaka. 2015. “Do Students from Different Cultures Think Differently about Critical and Other Thinking Skills?” In The Palgrave Handbook of Critical Thinking in Higher Education, 299–316. New York: Palgrave Macmillan. doi:10.1057/9781137378057_19.
  • Marton, F., and R. Säljö. 1976. “On Qualitative Differences in Learning: I—Outcome and Process.” British Journal of Educational Psychology 46 (1): 4–11. doi:10.1111/j.2044-8279.1976.tb02980.x.
  • Masters, J. C., B. S. Hulsmeyer, M. E. Pike, K. Leichty, M. T. Miller, and A. L. Verst. 2001. “Assessment of multiple-choice Questions in Selected Test Banks Accompanying Text Books Used in Nursing Education.” Journal of Nursing Education 40 (1): 25–32. doi:10.3928/0148-4834-20010101-07.
  • McChesney, K., and J. Aldridge. 2019. “Weaving an Interpretivist Stance Throughout Mixed Methods Research.” International Journal of Research & Method in Education 42 (3): 225–238. doi:10.1080/1743727X.2019.1590811.
  • Miri, B., B.-C. David, and Z. Uri. 2007. “Purposely Teaching for the Promotion of higher-order Thinking Skills: A Case of Critical Thinking.” Research in Science Education 37 (4): 353–369. doi:10.1007/s11165-006-9029-2.
  • Momsen, J. L., T. M. Long, S. A. Wyse, and D. Ebert-May. 2010. “Just the Facts? Introductory Undergraduate Biology Courses Focus on Low-Level Cognitive Skills.” CBE-Life Sciences Education 9 (4): 435–440. doi:10.1187/cbe.10-01-0001.
  • Morris, M. M. 2008. “Evaluating University Teaching and Learning in an Outcome-Based Model: Replanting Bloom.”
  • Phillips, R. 2005. “Challenging the Primacy of Lectures: The Dissonance between Theory and Practice in University Teaching.” Journal of University Teaching & Learning Practice 2 (1): 2. doi:10.53761/1.2.1.2.
  • Porter, S. R., and P. D. Umbach. 2006. “Student Survey Response Rates across Institutions: Why Do They Vary?” Research in Higher Education 47 (2): 229–247. doi:10.1007/s11162-005-8887-1.
  • Roßnagel, S., N. F. Christian, and K. Lo Baido. 2020. “Constructive Alignment and the Learning Experience: Relationships with Student Motivation and Perceived Learning Demands.” Higher Education Research & Development 1–14. doi:10.1080/07294360.2020.1787956.
  • Rutherford, F. J., and A. Ahlgren. 1990. Science for All Americans. Oxford University Press.
  • Sambell, K. 2013. “Involving Students in the Scholarship of Assessment.” In Reconceptualising Feedback in Higher Education: Developing Dialogue with Students, 102–134.
  • Sarkar, M., T. Overton, C. Thompson, and G. Rayner. 2016. “Graduate Employability: Views of Recent Science Graduates and Employers.” International Journal of Innovation in Science and Mathematics Education (Formerly CAL-laborate International) 24 (3): 31–48.
  • Semsar, K., and J. Casagrand. 2017. “Bloom’s Dichotomous Key: A New Tool for Evaluating the Cognitive Difficulty of Assessments.” Advances in Physiology Education 41 (1): 170–177. doi:10.1152/advan.00101.2016.
  • Singer, S. R., N. R. Nielsen, and H. A. Schweingruber. 2012. Discipline-Based Education Research: Understanding and Improving Learning in Undergraduate Science and Engineering. National Academies Press. doi:10.17226/13362.
  • Stupple, E. J. N., F. A. Maratos, J. Elander, T. E. Hunt, K. Y. F. Cheung, and A. V. Aubeeluck. 2017. “Development of the Critical Thinking Toolkit (Critt): A Measure of Student Attitudes and Beliefs about Critical Thinking.” Thinking Skills and Creativity 23: 91–100. doi:10.1016/j.tsc.2016.11.007.
  • Taylor, R., and P. Canfield. 2007. Learning to Be a Scholarly Teaching Faculty: Cultural Change through Shared Leadership. Sydney University Press.
  • Tsui, L. 2002. “Fostering Critical Thinking through Effective Pedagogy: Evidence from Four Institutional Case Studies.” The Journal of Higher Education 73 (6): 740–763.
  • Uhl, J. D., K. N. Sripathi, E. Meir, J. Merrill, M. Urban-Lurain, and K. C. Haudek. 2021. “Automated Writing Assessments Measure Undergraduate Learning after Completion of a Computer-Based Cellular Respiration Tutorial.” CBE—Life Sciences Education 20 (3). doi:10.1187/cbe.20-06-0122.
  • Veloski, J. J., H. K. Rabinowitz, M. R. Robeson, and P. R. Young. 1999. “Patients Don’t Present with Five Choices: An Alternative to multiple-choice Tests in Assessing Physicians’ Competence.” Academic Medicine 74 (5): 539–546. doi:10.1097/00001888-199905000-00022.
  • Wang, X. Y., Y. L. Su, S. Cheung, E. Wong, and T. Kwong. 2013. “An Exploration of Biggs’ Constructive Alignment in Course Design and Its Impact on Students’ Learning Approaches.” Assessment & Evaluation in Higher Education 38 (4): 477–491. doi:10.1080/02602938.2012.658018.
  • Wiggins, G. P., and J. McTighe. 2005. Understanding by Design, 13–34. ASCD.
  • Zheng, A. Y., J. K. Lawhorn, T. Lumley, and S. Freeman. 2008. “Assessment - Application of Bloom’s Taxonomy Debunks the “MCAT Myth”.” Science 319 (5862): 414–415. doi:10.1126/science.1147852.
  • Zoller, U. 1993. “Are Lecture and Learning Compatible? Maybe for LOCS, Unlikely for HOCS.” Journal of Chemical Education 70 (3): 195–197. doi:10.1021/ed070p195.
  • Zoller, U. 2016. “From Algorithmic Science Teaching to ‘Know’ to Research-Based Transformative Inter-Transdisciplinary Learning to ‘Think’: Problem Solving in the STES/STEM and Sustainability Contexts.” In Insights from Research in Science Teaching and Learning, edited by N. Papadouris, A. Hadjigeorgiou, and C. Constantinou, Contributions from Science Education Research Vol. 2. Cham: Springer. doi:10.1007/978-3-319-20074-3_11.