Original Articles

Towards Assessment Practices ‘for’ Learning in Irish Built Environment Higher Education

Pages 73-93 | Published online: 15 Dec 2015

Abstract

Research in built environment education has begun to emerge as a distinct field. Within that context the investigation and exploration of assessment practices have received very little attention, particularly in the area of formative assessment. This type of assessment and the use of effective feedback mechanisms have been areas of concern for built environment educators for some time. The aim of this research is to improve the quality of student learning in built environment undergraduate programmes through the development of a theoretical framework for formative assessment, based on an analysis of the issues and factors highlighted by the research literature and by those involved in the research.

The philosophical paradigm which forms the basis for the research is discussed and the philosophical issues surmounted in the choice of research design are addressed. The application of a mixed methods approach, and more particularly a constructivist stance, to the research is explored and rationalised. The results of the first and second phases of the study present the views of senior academics on assessment. Institutional and programme documentation on assessment is analysed and presented. This work offers a scholarly model for the assessment of built environment undergraduates in which student learning is enhanced.

Introduction

‘What the student does is actually more important in determining what is learned than what the teacher does’ (Shuell, 1986, p.411).

The words of Shuell are as appropriate today as they were in 1986, when he proffered the view that student-centred learning should be the focus of our approach to knowledge. How assessment can contribute to a modern constructivist approach to a student-centred learning experience is an important consideration for those involved in Higher Education (HE). This paper focuses on some of the wide-ranging changes that have taken place in assessment practices employed in HE. It begins by exploring learning in HE and sets out how learning has developed over the last two decades by focusing on some of the main drivers of change. It explicitly investigates the current and emerging approaches to student assessment and the issues associated with the changing HE environment. This is set in the context of built environment undergraduate education, which is defined as a theoretical entity for the purpose of the research. The boundary for the research is National Qualifications Authority of Ireland (NQAI) level 8 degree programmes offered on the island of Ireland. The overall research design is considered and explored, and the chosen methodology explained and defended. The scope of the research project is offered along with an analysis of its results. Some early considerations are presented and the next stage of the process identified. The paper concludes with suggestions as to how the initial research might support the adoption of a more scholarly approach to assessment practices that will support student learning in built environment education.

Learning in Higher Education

In contextualising the research topic it is appropriate to reflect on the ideals of HE, on the purposes of learners investing commitment, time, emotion and money in attending HE, and on the role of academics in providing the necessary and appropriate experiences and space. The role HE plays in modern society, and how it contributes to an informed citizenry and to democracy, are also worth consideration. The role and position of assessment in this experience is significant from both a measurement and an enhancement perspective, as it can have a significant effect on the educational journey of each learner (Dee Fink, 2003). Aligned to this is the question of what students should learn and be assessed on whilst in pursuit of an undergraduate degree. While this is not the main focus of the research enquiry, it is important to highlight and discuss the tensions between education for self and education for society and the economy. Barnett proffers that the university has come to stand ‘for a set of universal aspirations, principally turning on the sense of the institution that embodies and promotes a life of reason’ (2003, p.1). In the context of continuous rapid change and increased globalisation, indeed ‘a world of intellectual, cognitive and value mayhem’ (Barnett, 2005, p.17), what is the role of the university? Many have argued that the ideal of the university emerged from a focus on rationality such as that espoused by the philosophers Descartes and Spinoza; others see it as aimed solely at producing graduates who can take their place in the economy and advance society by developing that economy.

The approaches to learning and teaching in HE are different today and have changed significantly over the last few decades. The drivers of that change are numerous, with the pressures for change occurring globally. Changes in the demographic profile of societies, as well as changes in society and the economy in general, have led to an increased demand for, and participation in, HE on a global scale. Higher education in Ireland has not been immune to these pressures. Changes have been brought about in quality assurance arrangements, a National Framework of Qualifications has been introduced, and at institutional level many HE establishments have a stated objective of enhancing the student learning experience. An example of this is the Dublin Institute of Technology, a multi-disciplinary provider of HE in Ireland. Through its learning and teaching strategy the institute has put in place a strategic imperative to develop a multi-level, learner-centred learning environment through the roll-out of a modular structure for undergraduate programmes (Dublin Institute of Technology, 2008). This new learning environment has been supported by the requirement of the NQAI that all awards should be defined in terms of learning outcomes, the achievement of which would be confirmed through the use of appropriate assessment strategies.

Over recent years there has been considerable emphasis on the concept of ‘student engagement’. This emphasis may seem surprising at first glance, given that the mission of universities is surely to engage students in learning by providing the conditions and the environment in which learning will flourish. However, there are many tensions inherent in academia today. The core business of universities is, or should be, creating an environment where students have the opportunity to engage in significant learning experiences. In a context of mass HE, increasing diversity of the student population, globalisation, the new marketing of education and increased competition between the providers of HE, it is difficult to define ‘the best learning environment possible for all students’. It is also becoming more challenging to articulate the purpose of a university education with so many different agendas to satisfy.

The changing nature of the student population should prompt academic and related staff to reflect on and question their assumptions about what motivates students to learn. The demographics of the student population have shifted considerably: there is a higher percentage of international students (Higher Education Authority, 2008); a greater number of mature students, both undergraduate and postgraduate, are entering university courses and programmes with different life experiences to the ‘traditional’ school leaver; and higher numbers of students are the first in their family to enter HE. In meeting the needs of built environment undergraduate learners we need to provide a diversity of authentic assessment practices that allows them to engage in meaningful learning.

Assessment in Higher Education

  • ‘We continue to assess student learning — and to graduate and certify students much as we did in 1986, 1966, or 1946, without meaningful reference to what students should demonstrably know and be able to do’ (Angelo, 1996, p.12).

  • ‘Assessment systems dominate what students are oriented towards in their learning. Even when lecturers say that they want students to be creative and thoughtful, students often recognise that what is really necessary, or at least what is sufficient, is to memorise’ (Gibbs, 1999, p.43).

  • ‘Assessment is at the heart of the student experience’ (Brown and Knight, 1994, p.12).

  • ‘From our students' point of view, assessment always defines the actual curriculum’ (Ramsden, 1992, p.187).

  • ‘Assessment defines what students regard as important, how they spend their time and how they come to see themselves as students and then as graduates…If you want to change student learning then change the methods of assessment’ (Brown et al., 1997, p.7).

The quotations above reflect some of the points emphasised in the seminal literature and are an indication of how assessment and the approaches to assessment focus on the measurement of learning. They emphasise the important role assessment plays from the perspectives of both the learner and the learned. There is a need for built environment academics to engage with and command an understanding of the scholarship of assessment. The traditional approaches to assessment in HE typically place heavy reliance on tacit understanding of standards. This can be a point of strain in the current environment of rapidly changing contexts, including massification and larger class sizes (economies of scale), new kinds of teaching and learning, computer-aided assessment, new approaches to intended learning outcomes and declining resources (Hussey and Smith, 2002).

Research on assessment and evaluation in HE suggests that it has been the subject of wide-ranging transformation over the last fifteen years (Bryan and Clegg, 2006). Debate about the current state of assessment often reveals unease as to its suitability for the twenty-first century and the need for it to be ‘fit for purpose’ (Brown, 2004–05, p.81). Knight (2002, p.275) talks of ‘practices in disarray’ where assessment becomes a site of conflict or power struggle, founded on an unequal relationship between the two parties (student and institution) and hampered by an in-built lack of clarity in the methods used to convey judgement on performance. Assessment in the discipline of the built environment, as in other disciplines, is required to fulfil a multiplicity of purposes and to play many different and often conflicting roles. The provision and embedding of opportunities for assessment to aid learners in more formative ways has been highlighted as an area in which students are currently being failed. So how do we go about addressing this?

Assessment and student learning

Assessment practices in HE have been undergoing wide-ranging changes over recent years and this has been particularly evident in a number of disciplines (D'Andrea and Gosling, 2005). These changes are in response to stimuli including a move towards greater accountability, new developments in the use of learning technology and concerns about what graduates need to know, to understand and to be able to do following graduation (Dee Fink, 2003). The academic field of the built environment has been receiving some attention in this regard and the validity and effectiveness of traditional modes of assessment have begun to receive consideration. While research in this area is quite limited, the work available suggests that in many areas of built environment undergraduate education traditional approaches to assessment dominate and the preferred approach to student assessment is the traditional summative examination (Scott and Fortune, 2009).

Assessment in HE is recognised as being very complex. It is something that is experienced by almost all involved in HE, and it is therefore important that an assessment system is recognisable and understood by all. There are many reasons to assess students and Brown et al. (1996) present ten. Five of these refer to traditional summative assessment and the need for evidence and the classification of learning. The other five focus on formative assessment: guidance for improvement; providing opportunities for students to diagnose faults and rectify mistakes; motivation; providing variety in assessment method; and providing direction to the learning process. The fact that both summative and formative assessment are attributed equal numbers of reasons might imply that equal importance is placed on both, but this is not usually the case. Having studied the assessment practices in use in undergraduate programmes in the built environment, indications are that while the ‘tide is starting to turn’ there is still an over-reliance on the traditional summative examination at the end of a module or unit of learning (Scott and Fortune, 2009). Gibbs and Simpson (2004), in their seminal research on formative assessment and the use of feedback mechanisms, indicate that these strategies have begun to be recognised as a driving force for enhancing student learning. This has yet to have a real impact at programme or module level in many undergraduate built environment programmes (Scott and Fortune, 2009). The research literature informs us that assessment is most effective when it is closely aligned to the stated learning outcomes. While some academic literature (Smyth and Dow, 1998) suggests that the generic use of broad learning outcomes can be problematic, it is asserted that when learning outcomes are aligned with the specific context of a module and a distinct body of knowledge they provide both the learner and the lecturer with clarity as to what is to be achieved (Biggs, 2003).

Cross (1996) refers to assessment and feedback as providing one of the three conditions for learner success. It is generally acknowledged that a student's approach to learning and the quality of learning achieved will be influenced by the way in which this learning is to be assessed (e.g. Gibbs, 1999; Entwistle and Ramsden, 1983). Adopting a holistic approach to curriculum design that seeks to constructively align assessments with the learning outcomes and the teaching and learning methods, creating a seamlessly inter-related curriculum (Biggs, 2003), is important if a diversity of desired learning outcomes is to be achieved (e.g. Gibbs, 1999). Boud (1995) also identifies a need to move from seeing particular assessments in isolation to recognising them as being linked to the other kinds of assessment used, as well as to the proximity, frequency and context of their usage. Furthermore, it is asserted that the bunching of similar types of assessment at certain key points, perhaps at the middle and end of programmes, is likely to result in students' adoption of a surface approach to learning and the attainment of a limited number of lower-level learning outcomes (Scouller, 1996). In other words, cross-programme strategic planning of appropriate assessments is fundamental if the intention is for students to attain higher-level learning outcomes such as problem solving and critical thinking (Biggs, 2003; Gibbs, 1999). The critical importance of formative assessment (assessment that contributes to the student's learning through the provision of feedback about performance; Yorke, 2003) should not be underestimated by lecturers and is confirmed by the review work of Black and Wiliam (1998).

Assessment for learning, more commonly understood as formative assessment, is defined by Black and Wiliam (1998, p.22) as ‘all those activities undertaken by teachers and/or by their students, which provide information to be used as feedback to modify the teaching and learning activities in which they are engaged’. In very simple terms, assessment may be defined as those activities that measure student learning. However, Boud (1990) posited that assessment has two purposes. The first is that of improving the quality of learning, where learners engage in activities and are given feedback that will direct them towards more effective learning (commonly referred to as formative feedback). The second concerns the accreditation of knowledge or performance, which generally occurs for the award of a degree or diploma (commonly referred to as summative assessment).

Today, students are more focused and they approach assessment with a better understanding of what is involved. Bloxham and Boyd (2007, p.69) refer to students as ‘being cue conscious concentrating on passing an assessment’. Because of both internal and external pressures brought to bear on HE we now hear academics speak in terms of formative and summative assessment; however, there is a long way to go before assessment and feedback become central to learning and, in turn, to the student experience. With the concept of life-long learning beginning to permeate HE (Higher Education Academy, 2008), and with the impact of the National Framework of Qualifications in Ireland, greater and more explicit attention is being paid to learning outcomes and competencies. A student-centred learning framework puts the learner at the centre of the learning process, in which assessment plays an important part. It is widely accepted that assessment has a direct impact on students' learning (Askham, 1997; Black and Wiliam, 1998; Stiggins, 2002). We are all familiar with the saying that assessment drives learning; this is true in many instances, where the learner looks at what has to be learned in terms of what he or she needs to do to get a good grade. Research indicates that what students focus on during the course of their studies is hugely influenced by the assessment methods employed to measure the learning experienced (Ramsden, 1992). Therefore, taking cognisance of both assessment for learning and assessment of learning has relevance for lecturers in the design of their assessment strategies. Assessment of learning is where assessment for accountability purposes is paramount; its function is to determine a student's level of performance on a specific task or at the conclusion of a unit of teaching and learning. The information gained from this kind of assessment is often used in reporting and is purely of a summative nature. Assessment for learning, on the other hand, acknowledges that assessment should occur as a regular part of teaching and learning and that the information gained from assessment activities should be used to shape the teaching and learning process. It can, most importantly, also be used by the learner to enhance learning and achievement. Gibbs and Simpson (2004) have developed a model (Table 1 below) that promotes eleven conditions under which assessment supports learning. Seven of the eleven conditions refer to feedback.

Table 1 Conditions under which assessment supports learning

The Built Environment

While not the main focus of this paper, it is necessary to consider and conceptualise the field of the built environment, particularly as it is the context in which the research is set. Human society has found it necessary to categorise the different forms of knowledge since the times of Aristotle and Plato in an attempt to make the world more intelligible (Gaarder, 1995). Those associated with developing the foundation of a theory for the built environment are no different in this regard. It has begun to emerge as a distinct discipline in the more recent past; however, in the discourse around that theory its distinctiveness has been identified as problematic. Boyd (2007) refers to the general conception of the built environment as one of a development process and he argues that it does not exist theoretically. Ratcliffe (2007), on the other hand, proffers that while the built environment is both vague and elusive it is a generic phrase of distinction and pertinence and is best portrayed and understood as a set of processes rather than one single entity. This set of processes includes the planning process, design process, construction process, regulatory process, financial process, transportation process and information process. Griffiths (2004) describes it as a range of practice-orientated subjects concerned with the design, development and management of buildings, spaces and places.

In HE the field of the built environment has begun to make significant headway as a recognised discipline where Schools of Built Environment have been set up. The UK Research Assessment Exercise (RAE) sub-panel made reference to the field as encompassing architecture, building science and engineering, construction and landscape urbanism (Higher Education Funding Council for England, 2005). While school and department configuration is often a matter of the culture of a HE institution, reference to the built environment by the RAE is an acknowledgement of the existence of this discipline. In the Irish HE context, while considered very much at a developmental stage, the field of the built environment has begun to be recognised and embedded as a distinct discipline. Again, schools and faculty have emerged in the organisational structure of HE institutions across the country.

For the purpose of this research within the built environment, the focus will be on the disciplines of architecture, architectural technology, construction management and construction economics, as they are the professions with the greatest concentration of undergraduate built environment education in Ireland. They are also the most representative group in terms of built environment programmes offered at honours degree (level 8) on the island of Ireland.

Rationale for Research Design

Human beings have always tried to come to terms with their environment and to make sense of and understand the nature of the phenomena presented to their senses (Cohen et al., 2001). At the commencement of any research project many questions occupy the thoughts of the researcher. What does this journey entail? Where to start? What philosophical stance should one take? What research methods should be employed to achieve the goal(s) of the research effectively? All research needs to be subjected to careful methodological assessment and reflection, while theory provides the discourse and a vocabulary to describe what we think. In this regard, the principal aim of this research is to help to improve the quality of student learning in built environment undergraduate education using formative assessment as a vehicle. The central research question therefore can be summarised as:

Are assessment practices currently in use in built environment education maximising their potential to improve the quality of students' learning?

In attempting to address the aim of the research several research questions are posed:

  • How are academics in built environment undergraduate education currently assessing learning?

  • To what extent do academics align their assessment practices to educational theory?

  • Are the institutional procedures around assessment in conflict with the embedding of a student-focused assessment strategy?

  • What are students' experiences of formative assessment and feedback?

  • To what extent do the existing assessment methods encourage a deep approach to learning?

  • Do students get an opportunity to reflect on their learning?

  • What framework or model can be developed that will enhance the experiences of students with respect to assessment?

  • How will the improvements brought about by any new model/framework be measured?

A research framework gives the theoretical background to a research project, and most researchers ‘struggle’ to come to terms with the theoretical aspects of the task. The research onion model advocated by Saunders et al. (2003), while developed in the context of business, provides an appropriate framework for this research enquiry. Traditional research design strategies usually rely on a literature review leading to the formation of a hypothesis which can be put to the test by experimentation in the real world. Here, the use of ethnographic methods and case study within a mixed methods design involves the integration of quantitative and qualitative strategies in the generation of new knowledge (Creswell, 2008). The most appropriate mixed methods guidelines, in line with twenty-first century methodological approaches, are adopted in this phase of the study. The analysis of data from each of the interview phases, along with data gathered from the survey of academics, will influence the emergent theoretical model and be influenced by reference to the educational research literature on assessment as well as by analysis of empirical built environment documentation.

The purpose of mixed methods research is not to replace either qualitative or quantitative research, but to draw appropriately on the strengths and diminish the weaknesses of both approaches within a single study (Johnson and Onwuegbuzie, 2004). Supporters of mixed methods research contend that the researcher needs to evaluate the most appropriate methodological approach to answer the specific research question (Mertens, 2005; Morse, 2003). Mixed methods research is not limited to a single purpose for conducting a study but may have numerous reasons for its use. Linked to the mixed methods paradigm is an acknowledgement of taking a pragmatic approach. Pragmatism (Biesta and Burbules, 2003), developed in America during the late nineteenth and early twentieth centuries, is, as the name suggests, focused on ideas that apply practically, disclaiming philosophy's reputation for being excessively idealistic and abstract. The theory of pragmatism is about meaning: the meaning of ideas lies in their consequences rather than in the ideas themselves. Included in pragmatism's approach is a willingness to change and a readiness to respond to the particular circumstances in which human beings are inevitably placed. Pragmatists emphasise the importance of trying different methods and then evaluating them with regard to their effectiveness. Pragmatism is not faithful to any one system of philosophy or reality, and the pragmatic researcher focuses on the ‘what’ and ‘how’ of the research enquiry (Creswell, 2003). It may be said, however, that mixed methods could be used with any paradigm. The pragmatic paradigm places the research problem as central and applies all approaches to understanding the problem (Creswell, 2003). With the research question ‘central’, data collection and analysis methods are chosen as those most likely to provide insights into the question, with no philosophical loyalty to any alternative paradigm.

The combination of quantitative and qualitative methods in one study has, until recently, been termed between-method triangulation (Halcomb and Andrew, 2005). Although widely used, the term triangulation has variable meanings in the research literature (Halcomb and Andrew, 2005). Nurse researchers have been singled out for criticism for the continued misuse of the term (Tashakkori and Teddlie, 2003); it is, however, an appropriate term in the context of this research. The term mixed methods is used to describe research which utilises both qualitative and quantitative data collection and analysis techniques in either parallel or sequential phases (Creswell, 2003; Mertens, 2005; Tashakkori and Teddlie, 2003), and this explains the approach taken in this research.

The research enquiry was conducted in two phases, as outlined in Figure 1 below, beginning with the quantitative review of institutional policies and programme documentation for the selected institutions. Feeding into this study was international research on student assessment, along with Dublin Institute of Technology (DIT) institutional student research in which the views of students with regard to assessment had been captured. The second phase of the research involved qualitative interviews held with senior academics, with the goal of capturing their views and opinions on assessment at undergraduate level.

Figure 1 Research enquiry research process

Phase I: The quantitative analysis of programme and institutional documentation

Documents were reviewed and analysed for the six participant institutions with respect to policy and procedures on assessment, programme assessment strategies, regulations and culture-specific aspects, along with the types of assessment used at programme level (see Table 2). The analysis concentrated on the referenced documents and the review typically focused on the procedural elements of assessment: the requirements under the quality assurance policies and procedures in each institution. In the case of three of the Institutes of Technology, those policies were linked to an overall governing authority, while in the case of the other three institutions (one Institute of Technology and two Universities) the policies were institutional. One Institute of Technology made reference to assessment for learning and the importance of this forming part of the learning experience for the students.

Table 2 Institutional policy documentation research enquiry

Reference to the procedures around the external examining process, a process where an external academic and industry professional in the discipline are engaged annually by the institution to review and evaluate the academic standards, was evident in all institutional documentation. This further emphasises the embedded nature of the ‘measurement model’ in operation within the HE sector which is validated through the external examination process. However, while it is acknowledged that the tacit transfer of practices among academics is a genuine outcome of this process, it is not reflected in any of the documentation produced.

The review of the programme documentation uncovered documents that tended to be procedural in nature, with the main focus being on marks for assessments, the number of summative assessments, the number of module credits, progression requirements and the level of awards relative to institutional procedures. Where module descriptors formed part of the programme documentation, assessment strategies tended to focus on the type of assessment, with the allocation of weighting being a normal inclusion. Significantly, in only one instance was there any reference to assessment ‘for’ learning, and in that case the module details made reference to the opportunity to make a draft submission of an assignment on which feedback would be provided. Table 3 identifies the assessment types included in programme documentation by institution.

Table 3 Types of assessment identified in programme documentation
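The kind of tally underlying Table 3 can be illustrated with a short script. The sketch below is purely illustrative and is not the instrument used in the study: it assumes the reviewed module descriptors have been transcribed into a hypothetical CSV file, module_descriptors.csv, with 'institution' and 'assessment_type' columns (one row per assessment instance), and it simply counts the assessment types recorded for each institution.

import csv
from collections import Counter, defaultdict

def tally_assessment_types(path):
    """Return a per-institution Counter of the assessment types recorded."""
    tallies = defaultdict(Counter)
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            # One row per assessment instance found in a module descriptor.
            institution = row["institution"].strip()
            assessment_type = row["assessment_type"].strip().lower()
            tallies[institution][assessment_type] += 1
    return tallies

if __name__ == "__main__":
    for institution, counts in tally_assessment_types("module_descriptors.csv").items():
        total = sum(counts.values())
        exams = counts.get("summative examination", 0)
        print(f"{institution}: {dict(counts)} | summative examinations: {exams}/{total}")

A summary of this kind makes the relative weight of summative examinations against other assessment types visible per institution, which is the comparison the documentation review sets out to make.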

Phase II: The qualitative analysis of senior academic interviews

This phase of the research adopted a qualitative approach in which the perspectives of eight senior academics in management roles were sought, between September and November 2009, in Schools in the University/Institute of Technology sector around the island of Ireland. The interviews lasted up to one hour and were taped and transcribed, with all interviewees agreeing to be recorded. Because the overall research enquiry has adopted a mixed methods approach, the analysis involved the qualitative assignment of concepts and themes to the data gathered, a process known as coding. This process was applied to the interview data, and the themes and concepts emerging from this phase of the research are identified in Table 4 below.

Table 4 Concepts and codes arising from the first phase of the research enquiry
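As a purely illustrative sketch of the coding step described above, and not the authors' actual analysis (which was interpretive, as qualitative coding normally is), the short script below assumes the transcripts are stored as plain-text files in a hypothetical 'transcripts' directory and that the keyword lists stand in for the concepts and codes of Table 4; it makes a crude first pass by flagging sentences that mention each code's keywords.

from pathlib import Path

# Hypothetical codebook standing in for the concepts/codes in Table 4.
CODEBOOK = {
    "learning_outcomes": ["learning outcome", "alignment", "module descriptor"],
    "formative_assessment": ["formative", "feedback", "draft"],
    "summative_tradition": ["exam", "end of year", "coursework"],
    "student_engagement": ["engage", "peer", "participation"],
}

def code_transcript(text):
    """Return {code: [sentences mentioning that code's keywords]}."""
    hits = {code: [] for code in CODEBOOK}
    for sentence in text.replace("\n", " ").split(". "):
        lowered = sentence.lower()
        for code, keywords in CODEBOOK.items():
            if any(keyword in lowered for keyword in keywords):
                hits[code].append(sentence.strip())
    return hits

if __name__ == "__main__":
    for path in sorted(Path("transcripts").glob("interviewee_*.txt")):
        coded = code_transcript(path.read_text(encoding="utf-8"))
        print(path.stem, {code: len(sentences) for code, sentences in coded.items()})

A keyword pass of this kind can only suggest candidate passages for a human coder to review; the assignment of concepts reported in this study rests on the researcher's reading of the transcripts.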

One concept emerging from the interviews with the senior academics is the difference in philosophical position with respect to assessment and in how they view the assessment of student learning. The analysis of the data reflects these differing positions, as evidenced by the quotes below:

  • “The academic staff of the School working with the students are not looking at just the final product as presented but are looking at the process by which the final product was arrived at” Interviewee A.

  • “From a management perspective …I see it being engaged a lot with the compliance with National Framework of Qualifications and adopting changes in relation to learning outcomes process” Interviewee B.

  • “This is because we have always proportioned our assessment into end of year exams and coursework” Interviewee C.

Tradition and academic discipline influence attitudes towards assessment, while the type of educational organisation also has a distinct impact, as identified from the analysis of the interviews. For example, the use of the ‘crit’ in architecture has long underpinned the approach to assessment in that discipline.

The significance attached to assessment in the educational process, and to the part formative assessment plays within it, differed between the Heads interviewed. There is a disparity of understanding of the purposes of assessment, particularly as we move towards a more student-centred learning environment. This is evidenced by the approach taken in the different institutions with respect to the design of the assessment strategies at module and programme level. The re-alignment of modules has taken place and there is evidence of the learning outcomes paradigm being adopted. However, much work needs to be done to ‘constructively align’ learning outcomes, content and assessment as advocated by Biggs (1999, p.5). For example, interviewee B stated:

“what we haven't done is link assessment methodologies to module learning outcomes.”

This further emphasises the traditional approach adopted in many programmes and the reliance on a measurement-based assessment strategy as opposed to a more holistic one.

There is evidence of student engagement in active learning tasks, as referred to by interviewees C, D and E; however, those tasks are not linked to the overall assessment strategy. Students are required to take a summative examination at the end of a module even where they may have demonstrated the achievement of the learning outcomes during the active learning tasks. This is a clear example of ‘over-assessment’ and of a reliance on the traditional summative examination. It is a common position that prevails not just in the built environment but across many other disciplines as academics engage in reflecting on and introducing a learning outcomes based approach (Gibbs and Simpson, 2004). One interviewee (D) indicated that academic lecturing staff are unaware that they are ‘empowered’ to make the appropriate changes to effect learning, and hence the more traditional approaches prevail. There is still an over-reliance on ‘formal summative assessments’ or controlled examinations as a means of verifying student attainment.

There is clear tension between the summative and formative assessment processes and the use of this knowledge/information to help teaching and learning. Again, the diverse position of each School along a continuum is in evidence. In some instances there has been full engagement in the alignment of programme and module learning outcomes, while other Schools have only just begun to grapple with this. This has a direct bearing on their approach to, and configuration of, the assessment strategies employed. Allied to this is complete agreement on the need to strengthen the processes of assessment, and in particular the formative assessment elements. However, there is no sense or vision of how this might be achieved. The notion of developing reflective practice through assessment and its contribution to the enhancement of student learning and motivation is referred to but, again, there is no real sense of how senior leaders might influence its achievement.

Student involvement in assessment, where academics can benefit from the use of peer assessment at various levels, was identified as problematic. The analysis suggests that it happens in a very limited number of cases. Interviewee E indicated that students:

“do not perhaps participate as much as they should and that there should be more opportunities to engage the learner more.”

The often laborious process of marking student work can potentially be reduced if some of the assessment is carried out by the students. More fundamentally, it can be used to open meaningful dialogue about the work and enhance feedback opportunities. Time constraints and the practical difficulties of peer assessment are cited as the main barriers to engaging students in assessment.

The risks of involving students in their own or colleagues' assessment should not be underestimated. There continues to be intense pressure on the HE sector to maintain standards (Fullan, 2007) and so there has been a relatively slow take-up of initiatives to effect change in the area of assessment. Any change to assessment practice must be able to withstand scrutiny and above all be rigorous and transparent (Race, 2001). The fear of putting assessment in the hands of the students was expressed by some of those interviewed, who indicated that it would make the assessment less reliable. To ensure consistency, measures can be built in, including multiple assessment of the same piece of work by a number of students. Clear definition of marking criteria is another essential element of successful peer assessment. Criteria may be developed with students but, if this is not possible, they must at the very least be made clear before students attempt the exercise.

Another emergent theme was the need for inter- and intra-collegial opportunities to discuss not only assessment practices in the built environment but also other pertinent pedagogic matters. Ways should be explored of sharing best practice and of how this might begin to effect change in the discipline. This emerged where interviewees commented on the need for staff development and training. Banta et al. (2002) refer to the scholarship of teaching and learning, where an emphasis in more recent years has been placed on an expansion of the range of scholarly activities in this area. Hutchings and Shulman (1999) suggest that academics should involve themselves in the investigation of questions that relate to student learning, exploring improvements in their own learning/teaching settings so that, in time, these discoveries might also advance practice in the field and beyond. The point is made that the scholarship of assessment is following the path of the scholarship of learning and teaching, in that there is an awareness of the issues that surround assessment and an appetite to improve practice. Building in support frameworks that are founded on best practice will build towards this scholarly approach. From the research literature and the analysis of the first two phases of this research, the following elements should be included in a framework for formative assessment:

  • Opportunity for learners/learned to understand the concept of assessment

  • Authentic learning tasks linked with formative assessment tasks

  • Aligned learning, teaching and assessment approaches.

Concluding Thoughts

This paper has provided a summary overview of the authors' research to date, with an analysis of the views and experiences of senior academics regarding assessment practices in undergraduate built environment education. It has also presented the findings from a quantitative review of both institutional and programme documentation from the institutions of those interviewed. As this is an important aspect of academic engagement, the paper has focused on the methodology employed and on a number of key issues emerging from the segment of data analysed thus far. There is a strong history of assessment underpinning built environment undergraduate programmes, particularly of the more formal summative assessment strategies. Building on the developments at both discipline level and beyond, a framework that will support academics in building a scholarly approach to assessment should include the following aspects:

  • Creating assessment opportunities that are useful to learners

  • Building students' understanding of the purposes of assessment — both formative and summative

  • Creating and developing appropriate learning tasks that support formative assessment opportunities

  • Developing learners' conception of themselves as self-regulated learners

  • Creating opportunities for academics in the built environment to develop and share practice.

The opportunities exist for the creation of authentic assessment tasks within programmes and modules at undergraduate level. It just requires academics to be motivated and directed to make that transition to providing transformative assessment activities.

Also, building towards students' knowledge of how and why assessment takes the form it does, raising their awareness of ongoing assessment tasks as well as final processes, creating an environment where they can be self- and peer assessors, and exposing them to how critical thinking about assessment can be an integral part of the learning process, should be primary aims of all HE teachers/lecturers. It is also of critical importance to involve students in the rationale behind assessment practices.

As regards future research, one of the questions to be addressed is the extent to which academics are engaging with the most up to date and effective assessment processes that will enhance student learning. A survey of the built environment academic community should endeavour to address this. Further work could continue to develop the framework above for formative assessment that will provide built environment undergraduates with the opportunity to enhance their learning while academics adopt a more scholarly approach to assessment practices.

Perhaps a radical step forward is to move away from being overly concerned with whether an assessment is formative or summative in nature, to see the various types of assessment as a continuum of the formative learning experience and to embed them in the students' learning experiences. Building students' knowledge and understanding of how and why assessment strategies are designed, raising awareness of their ongoing development, and revealing how critical it is to engage in assessment tasks as learning tasks linked to the overall learning process should be one of the primary goals of lecturers/teachers. Of most importance to this process is the involvement of students.

References

  • Angelo T. A. (1996). Improving classroom assessment to improve learning: Guidelines from research and practice. Assessment Update, 8 (1), 12-13.
  • Askham P. (1997). An instrumental response to the instrumental student: Assessment for learning. Studies in Educational Evaluation, 23 (4), 299-317.
  • Banta T. & Associates. (2002). Building a scholarship of assessment. 2nd ed. San Francisco: Jossey-Bass.
  • Barnett R. (2003). Beyond all reason: Living with ideology in the university. Buckingham: Society for Research into Higher Education and Open University Press.
  • Barnett R. (2005). Recapturing the universal in the university. Educational Philosophy and Theory, 37 (6), 785-797.
  • Biesta G. & Burbules N. (2003). Pragmatism and educational research. Lanham, MD: Rowman & Littlefield Publications.
  • Biggs J. (1999). Teaching for quality learning in university: What the student does. Buckingham: Society for Research into Higher Education & Open University Press.
  • Biggs J. (2003). Teaching for quality learning in university: What the student does. Buckingham: Society for Research into Higher Education & Open University Press.
  • Black P. & Wiliam D. (1998). Assessment and classroom learning. Assessment in Education, 5 (1), 7-74.
  • Bloxham S. & Boyd P. (2007). Developing effective assessment in higher education. Maidenhead: Open University Press.
  • Boud D. (1990). Assessment and the promotion of academic values. Studies in Higher Education, 15 (1), 101-111.
  • Boud D. (1995). Assessment and learning: Contradictory or complementary. In: Knight P. (Ed.). Assessment for learning in higher education. London: Kogan Page, pp.35-48.
  • Boyd D. (2007). Searching for a unified theory of property and construction. In: Kosela L. & Roberts P. (Eds.). Towards the Foundation of Theory for the Built Environment Symposium. 18–19 June 2007, University of Salford, Research Institute for the Built Environment.
  • Brown G., Bull J. & Pendlebury M. (1997). Assessing student learning in higher education. London: Routledge.
  • Brown S. (2004–05). Assessment for learning. Learning and Teaching in Higher Education, 1, 81-89.
  • Brown S. & Knight P. (1994). Assessing learners in higher education. London: Kogan Page.
  • Brown S., Race P. & Smith B. (1996). 500 tips on assessment. London: Kogan Page.
  • Bryan C. & Clegg K. (2006). Innovative assessment in higher education. London: Routledge.
  • Cohen L., Manion L. & Morrison K. (2001). Research methods in education. 5th ed. London & New York: Routledge/Falmer.
  • Creswell J. W. (2003). Research design: Qualitative, quantitative and mixed methods. 2nd ed. Thousand Oaks, CA: Sage Publications.
  • Creswell J. W. (2008). Educational research: Planning, conducting, and evaluating quantitative and qualitative research. 3rd ed. New Jersey: Pearson Prentice Hall.
  • Cross K. P. (1996). Improving teaching and learning through classroom assessment and classroom research. In: Gibbs G. (Ed.). Improving student learning: Using research to improve student learning. Oxford: Oxford Centre for Staff Development, pp. 3-10.
  • D'Andrea V. & Gosling D. (2005). Improving teaching and learning in higher education: A whole institution approach. Maidenhead: Open University Press.
  • Dee Fink L. (2003). Creating significant learning experiences — an integrated approach to designing college courses. San Francisco, CA: Jossey-Bass.
  • Dublin Institute of Technology. (2008). Strategic plan 2008. Dublin: DIT.
  • Entwistle N. J. & Ramsden P. (1983). Understanding student learning. London: Croom Helm.
  • Fullan M. (2007). The new meaning of educational change. New York: Teachers College Press.
  • Gaarder J. (1995). Sophie's world: A novel about the history of philosophy. London: Phoenix.
  • Gibbs G. (1999). Using assessment strategically to change the way students learn. In: Brown S. & Glasner A. (Eds.). Assessment matters in higher education: Choosing and using diverse approaches. Buckingham: Society for Research into Higher Education & Open University Press, pp. 41-53.
  • Gibbs G. & Simpson C. (2004). Conditions under which assessment supports student learning. Learning and Teaching in Higher Education, 1, 3-31.
  • Griffiths R. (2004). Knowledge production and the research-teaching nexus: The case of the built environment disciplines. Studies in Higher Education, 29 (6), 709-726.
  • Halcomb E. J. & Andrew S. (2005). Triangulation as a method for contemporary nursing research. Nurse Researcher, 13 (2), 71-82.
  • Higher Education Authority. (2008). Strategic Plan 2008–2010. Dublin: HEA.
  • Higher Education Funding Council for England. (2005). RAE 2008 consultation on Assessment Panels' draft criteria and working methods — UOA30, Architecture and the Built Environment. London: HEFCE.
  • Hussey T. & Smith P. (2002). Learning outcomes: A conceptual analysis. Active Learning in Higher Education, 13 (1), 107-115.
  • Hutchings P. & Shulman L. (1999). The scholarship of teaching: New elaborations, new developments. Change, 31 (5), 10-15.
  • Johnson B. R. & Onwuegbuzie A. J. (2004). Mixed methods research: A research paradigm whose time has come. Educational Researcher, 33 (7), 14-26.
  • Knight P. T. (2002). Summative assessment in higher education: Practices in disarray. Studies in Higher Education, 27 (3), 275-286.
  • Mertens D. M. (2005). Research and evaluation in education and psychology: Integrating diversity with quantitative, qualitative and mixed-methods. 2nd ed. Thousand Oaks, CA: Sage Publications.
  • Morse J. M. (2003). Principles of mixed methods and multimethod research design. In: Tashakkori A. & Teddlie C. (Eds.). Handbook of mixed methods in social and behavioral research. Thousand Oaks CA: Sage Publications, pp. 189-208.
  • Race P. (2001). A briefing on self, peer and group assessment. LTSN (Learning and Teaching Subject Network) Generic Centre Assessment Series 9. URL: http://www.bioscience.heacademy.ac.uk/ftp/Resources/gc/assess09SelfPeerGroup.pdf
  • Ramsden P. (1992). Learning to teach in higher education. London: Routledge.
  • Ratcliffe J. (2007). Built environment futures: Adopting the foresight principle in formulating and applying a theoretical approach towards the creation of a sustainable built environment. In: Kosela L. & Roberts P. (Eds.). Towards the Foundation of Theory for the Built Environment Symposium 18–19 June 2007, University of Salford, Research Institute for the Built Environment.
  • Saunders M., Lewis P. & Thornhill A. (2003). Research methods for business students. Harlow: Prentice Hall.
  • Scott L. & Fortune C. (2009). Promoting student-centred learning: Portfolio assessment on an undergraduate construction management program. In: Proceedings of the 45th Associated Schools of Construction Conference, Gainesville, Florida.
  • Scouller K. (1996). Influence of assessment methods on students' learning approaches, perceptions and preferences: Assignment essay versus short answer questions. Research and Development in Higher Education, 19 (3), 776-781.
  • Shuell T. (1986). Cognitive conceptions of learning. Review of Research in Education, 19, 405-450.
  • Smyth J. & Dow A. (1998). What's wrong with outcomes? Spotter planes, action plans, and steerage of the educational workplace. British Journal of Sociology of Education, 19 (3), 291-303.
  • Stiggins R. J. (2002). Assessment crisis: The absence of assessment for learning. Phi Delta Kappan, 83 (10), 758-765.
  • Tashakkori A. & Teddlie C. (Eds.). (2003). Handbook of mixed methods in social and behavioral research. Thousand Oaks, CA: Sage Publications.
  • Yorke M. (2003). Formative assessment in higher education: Moves towards theory and the enhancement of pedagogic practice. Higher Education, 45 (4), 477-501.
